
We cannot tell you any final truth about data architecture.
There is no final truth in data architecture.
We will help you to find it ...

... or Google does

NegotiCon 2023
    by Ludek Bob Jankovsky, 18-Mar-2023 (NEWS)
A good data architecture is not negotiable in the sense that it is essential for the success of any organization that deals with data. However, the specifics of the data architecture can be negotiated and should be open to discussion to ensure that it meets the needs of the organization and its stakeholders. International Negotiation Conference, this year's topic: Harnessing the power of relationships, trust and teamwork in negotiation. Prague, 31st March – 1st April 2023 ...more
Ale or Lager, you have to reconcile your Real-time Operational Data Store anyway
    by Ludek Bob Jankovsky, 11-Oct-2019 (ETL PATTERNS)
After a few lagers I will return to the common Real-time Operational Data Store topic. You can brew Lager, you can brew Ale, you can brew Stout, but you will still have to taste whether you have brewed it well. You will have to consider the possibility of consistency loss in your system, and you have to be ready for it.
The first inconsistency is the initial one, the state of the system before the initial load. Then there can be consistency breaches, down-times of particular services, etc. Never get too pampered by the good taste of the first batch, because the second one could turn sour. And rather than end up with sour faces, let's be ready for it. Let's involve consistency tests, consistency repairs, and hard repairs in our brewing process.
This article follows the previous three in the series: ...more
Lager-wise Real-time Operational data store and Elasticity
    by Ludek Bob Jankovsky, 30-Sep-2019 (ETL PATTERNS)
In the last two articles about beer (well, with the taste of some boring data processing and Operational data stores), and especially the last one, sliding on the surface of Lager brewing (Lager-wise Real-time ODS ETL process), we can go deeper into aspects of optimization and elasticity of the process.
At the very start, when I was getting familiar with real-time processing, I imagined I could solve only operational row-based changes, and that all the bulk operations such as EOD processing, EOM processing, etc. would be solved some other way. In the rush of real-time problems we just postponed that somewhere into the future. And in that "future", which is now the past, we learned that there is no one and nothing that will solve it for us. There is no synchronization mechanism allowing us to stop real-time processing, switch it to bulk processing, and then return to real-time seamlessly. First, there is the synchronization and process management problem, but more important is that not all bulk operations are well mapped and planned in the source system. We cannot tell users: "I am sorry, we do not have up-to-date data now, because the system XY is doing some bulk operation we cannot cope with in real time."
Our real-time solution should be ready even for bulks.
Just to be clear.  ...more
Well filtered lager, Lager-wise Real time Operational data store
    by Ludek Bob Jankovsky, 26-Sep-2019 (ETL PATTERNS)
After the last article about beer and operational data stores and the audacity it got (understand: some people read it and a few even liked it), I got encouraged to write a sequel.
Some readers were slightly disappointed that I mixed some boring data issues into a relatively cute article about beer, but most people appreciated it.
We will focus on the Lager-wise Real-time Operational data store now. As I have written, Lager-wise ODS are much more complicated and the latency is not as real-time as expected. Ale drinkers sometimes wonder what is so complicated about lager, and why my glass of lager is still half full when theirs is already half empty. Although Lager-wise Operational data stores have such disadvantages, and it is in general not easy to brew a good lager, I feel that type will soon prevail, especially in the Business intelligence world, where we tend not to bother legacy and source layer application owners with extensive integration tasks, and the agile approach allows us to do that efficiently. I am partially guilty. The metadata-driven approach and ETL patterns make that - from the point of view of ale drinkers, silly - approach feasible and even efficient. Silly things happen sometimes. So many times.

So it is about Lager now!
About Real-time Operational Data Store and Beer brewing
    by Ludek Bob Jankovsky, 14-Jul-2019 (ETL PATTERNS)
Speaking about ODS nowadays, we usually mean the real-time or nearly real-time one. That is what makes it operational, by the way. In the past times of Business intelligence patterns, there were concepts using that term for exaggerated staging areas of Data warehouses used for early access to D+1 data, but that was just a misuse of the term for something else, reducing the meaning to an operational reporting resource. Later, the following issues made these concepts wrong:

  • Using EDW ETL as a source or driving structure collided with the requirement of up-to-date information.
  • Ad-hoc requirements for operational reporting messed with the data quality and design quality requirements of flawless operational stores.
  • The Data Lake appeared to be a better solution for operational reporting.
  • The requirement of high-quality, highly available, real-time or nearly real-time data made the new Operational data stores slimmer but much better monitored and operated.

So, speaking about ODS we mean the real-time one,
we speak about beer, not about lemonade,
just to be clear.
Intra-field deduplication of list by regular expression
    by Ludek Bob Jankovsky, 23-May-2018 (ORACLE STUFFS)
Sometimes you need to deduplicate repeating patterns in a text string. Our case was inspired by a list of values built by a LISTAGG operation on a source with duplicities. Of course, the best practice is to deduplicate the source before the LISTAGG operation, but what if we only have the result with the mentioned duplicities? ...more
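The full SQL solution is behind the ...more link; as a rough, hedged sketch of the same logic (illustrative Python, not the article's Oracle code; the function names are mine), intra-field deduplication can look like:

```python
import re

def dedup_list(csv: str, sep: str = ",") -> str:
    """Remove repeated items from a separator-delimited list, keeping first occurrences."""
    seen, out = set(), []
    for item in csv.split(sep):
        if item not in seen:
            seen.add(item)
            out.append(item)
    return sep.join(out)

def dedup_regex(csv: str) -> str:
    """Regex-only variant closer to the SQL trick: drop any item that re-appears later
    in the string (so it keeps the LAST occurrence; relies on \\b, word-like items only)."""
    return re.sub(r"([^,]+),(?=.*\b\1\b)", "", csv)
```

For example, `dedup_list("A,B,A,C")` yields `"A,B,C"`, while the regex variant yields `"B,A,C"`.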
Reading a PIPELINED function containing DML in a query, ORA-14551 - small cheat
    by Ludek Bob Jankovsky, 06-Apr-2018 (ORACLE STUFFS)
When a PIPELINED function contains DML operations, it is usually impossible to run a plain select from it. It works well in an INSERT-SELECT operation but hardly in a plain select.  ...more
ORA-01489: result of string concatenation is too long
    by Ludek Bob Jankovsky, 26-Feb-2018 (ORACLE STUFFS)
That happens when we want to aggregate/concatenate text fields and the total length exceeds the 4000-character limit.
There are various scenarios of what you may want to achieve:
  • cut the result to 4000 characters
  • return the result as a CLOB
  • more complex tasks, such as cutting particular long parts of the text.

The following solution solves the first case, i.e. returns the first 4000 characters when the limit is exceeded.

    Original statement: ...more
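The original statement and its SQL fix are behind the ...more link; as a rough, language-neutral sketch of the first scenario only (illustrative Python, not the article's Oracle code; the function name and the hard-cut behaviour are my assumptions), the capped aggregation logic is:

```python
def listagg_capped(items, sep=",", limit=4000):
    """Concatenate items with a separator, never letting the result exceed `limit` chars."""
    out = ""
    for item in items:
        piece = item if not out else sep + item
        if len(out) + len(piece) > limit:
            # hard cut, analogous to SUBSTR(result, 1, 4000) on the aggregate
            return out + piece[: limit - len(out)]
        out += piece
    return out
```

So `listagg_capped(["abc", "def"], limit=5)` returns `"abc,d"`, exactly 5 characters.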
    SIMPLELOAD - Elementary 'last phase' solution for ETL
        by Ludek Bob Jankovsky, 31-May-2017 (ETL PATTERNS)
    Despite being the poorest of solutions in comparison to true metadata-driven ETL, I often meet transformations realized as views with almost the same format as the target table, followed by a pattern-based implementation of the final stage - loading the transformed source data into the target table, considering current data and maintaining history.
    I have to admit this smart approach is proper for small and quick-win solutions.
    For easy implementation I scripted a skeleton of the last-phase load element based on data dictionary information, supporting the most common patterns. ...more
    ETL and the ORA-01502: index *** or partition of such index is in unusable state error, ORA-14063
        by Ludek Bob Jankovsky, 02-Nov-2016 (ORACLE STUFFS)
    Especially in huge ETL processes, we often increase the efficiency of the process by setting some indexes on the target table UNUSABLE. That way we speed up active operations such as inserts and merges into the table with unusable indexes, but the ORA-01502 error can back-stab us in some phases of the process.
    We should be worried about reading operations, which we can slow down (sometimes even freeze) by the lack of the unusable index, but most ETL operations are performed in an exclusive time period. On the other hand, many concurrent ETL processes and even most overall analytical queries do not need indexes for their work.
    However, time after time it happens that the error mentioned above appears in the process. ...more
    Simpletask is under the UPL license now
        by Ludek Bob Jankovsky, 29-Oct-2016 (HEAP WORKFLOW)
    To make all the license stuff clearer in the case of the Simpletask solution, the UPL - Universal Permissive License - has been applied to all parts of the software and documentation published on this site under the Simpletask name.
    Unlike a plain copyleft license, it allows easier incorporation of the software and parts of the software into larger works without the obligation to use the same copyleft license for those larger works.  ...more
    APEX - Downloading files
        by Ludek Bob Jankovsky, 11-Jan-2016 (ORACLE STUFFS)
    Using APplication EXpress as a simple GUI cover for metadata-based generators of XML and SQL files led me to the task of downloading these files.
    Browsing over various helps, I decided to add another one to the rubbish.
    There are two basic scenarios:
    1. We want to download the result of a PL/SQL method (function) directly to the client machine, or
    2. We want to download generated codes from a stage - some storage of files prepared for download.  ...more
    Real-time ETL patterns and how to piss to pit
        by Ludek Bob Jankovsky, 16-Mar-2014 (ETL PATTERNS)
    Recently I published an article about real-time ETL. Designing it, we hit several challenges and various different approaches. To be clear, we speak about Real-time ETL realizing data integration of sources, not service integration. We have to react to the smallest changes in source data and propagate the changes to the transformed target data structure. Advantages and disadvantages were described in the previous article. ...more
    Edges of Real-time ETL
        by Ludek Bob Jankovsky, 11-Jan-2014 (ETL PATTERNS)
    Nowadays, real-time and message-based ETL take a more significant place in IT infrastructure. Unlike messaging-based operational data, in the real-time approach target data are refreshed based on changes of source data in source systems. It uses means that were earlier often used for replication of data. Unlike replication, real-time ETL is hardly 1:1; a transformation takes place in the process. Real-time ETL brings many limitations in comparison to standard batch ETL. ...more
    XML Import and export utility for SIMPLETASK
        by Ludek Bob Jankovsky, 25-Aug-2013 (HEAP WORKFLOW)
    According to the principle of maintaining the logical process separately from the physical implementation in SIMPLETASK, an Export / Import format is necessary.
    The following add-on package contains an API for importing the XML format. The primary intention is to use the feature for importing metadata generated from the logical process model, but it can also be used to archive physical metadata or to transfer them between environments. ...more
    APEX GUI for SIMPLETASK heap workflow
        by Ludek Bob Jankovsky, 10-Jul-2013 (HEAP WORKFLOW)
    Simpletask Heap Workflow is a simple Oracle-based solution for ETL process management built on Oracle DBMS_SCHEDULER and Heap Workflow principles. Now we present the APEX GUI, which allows manipulating metadata and operating ETL in an easier way than using the PL/SQL based API only. ...more
    Heap Workflow
        by Ludek Bob Jankovsky, 29-Jun-2013 (HEAP WORKFLOW)
    Long ago, when I started to design ETL systems, I realized that robust ETL process management is as important as any other pillar of the ETL approach.
    Most current workflow systems are focused on easy visualization in a structured form.
    Structured workflow systems are good for simple process management, but I have never seen an ETL system that needed only a little focus on robustness and performance. For that you need a fine-grained, heap-based workflow system allowing the use of operational research methods.

    What's different?
    When you put a task into a structured workflow, you wire it into a certain place to be executed at. In a heap workflow, you put all tasks in one heap and define constraints on their execution.
    The constraints are of the following types:
    • Predecessor dependency (solves dependencies coming from referential integrity, data readiness, etc.)
    • Resource consumption (solves system resource balancing, exclusivity of filling certain segments, and, using pending resource consumption, even temporary unavailability of certain objects)
    • Condition function (solves additional timing requirements, branches, skips, etc.)
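The three constraint types above can be sketched as a toy scheduler. This is only an illustration of the heap principle in Python; Simpletask itself is Oracle PL/SQL based, and all names below (`Task`, `run_heap`, the resource keys) are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Task:
    name: str
    predecessors: set = field(default_factory=set)   # predecessor dependency
    resources: dict = field(default_factory=dict)    # resource consumption, e.g. {"cpu": 2}
    condition: Callable[[], bool] = lambda: True     # condition function (branches, skips)

def run_heap(tasks, capacity):
    """Repeatedly pick any task whose constraints are satisfied - no wired order."""
    done, order = set(), []
    pending = {t.name: t for t in tasks}
    while pending:
        ready = [t for t in pending.values()
                 if t.predecessors <= done
                 and all(need <= capacity.get(r, 0) for r, need in t.resources.items())
                 and t.condition()]
        if not ready:
            raise RuntimeError("deadlock or unsatisfiable constraints")
        task = ready[0]          # a real engine would balance resources and run in parallel
        order.append(task.name)
        done.add(task.name)
        del pending[task.name]
    return order

# tasks thrown "in one heap"; the execution order emerges from constraints alone
heap = [Task("load_b", predecessors={"load_a"}),
        Task("load_a"),
        Task("report", predecessors={"load_a", "load_b"})]
print(run_heap(heap, capacity={"cpu": 4}))  # → ['load_a', 'load_b', 'report']
```

Note that `load_a` runs first even though it was added second: nothing is wired to a place, constraints alone drive the order.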
    Use data type arithmetic to convert from 'Timestamp' world to 'Date' world and opposite way
        by Ludek Bob Jankovsky, 09-Mar-2013 (ORACLE STUFFS)
    There is no straight conversion between the TIMESTAMP and DATE datatypes. There is, however, an easy way to 'misuse' the standard data type arithmetic to do particular conversions in our queries.
    The following article will briefly explain the ways. ...more
    Aging table - drop column of partitioned table with compressed old partitions
        by Ludek Bob Jankovsky, 11-Nov-2012 (ORACLE STUFFS)
    Working with fast-aging data (fact tables, logs), you will have to consider some kind of aging strategy, like partitioning by time slices, maintaining old partitions by compressing them, moving them to slower storage, or dropping them in the end. One factor considered during the COMPRESS - NO COMPRESS decision is the potential problem of dropping a column from the table. That is impossible while the table or some of its partitions are compressed. Uncompressing all the partitions of a huge aged table can cause space difficulties. The following solution avoids the problem and offers a robust way to drop a column from an aged table with compressed partitions. ...more
    Usage of SUBQUERY_PRUNING undocumented hint
        by Ludek Bob Jankovsky, 20-Sep-2011 (ORACLE STUFFS)
    There are cases when you want to merge data from a small table into a large one, where partitioning of the target table allows merging data into just the last few partitions. Anyway, to know which partitions should be used, you need to query distinct partition keys from the source table. The optimizer usually works it out itself, but if it doesn't, the following hint can help. No more words, just an example: ...more
    Partitioning strategy patterns - principal element of Data architecture
        by Ludek Bob Jankovsky, 19-Aug-2011 (ETL PATTERNS)
    One of the basic chapters in data architecture is the decision about the partitioning strategy for particular data layers, table stereotypes, and ETL patterns.
    The following article will describe the basic drivers influencing decisions about partitioning strategy. Details on particular patterns will be linked along the way.  ...more
    Equipartitioning - solution for very large data transformations, but not only for them
        by Ludek Bob Jankovsky, 16-Aug-2011 (ORACLE STUFFS)
    There are situations when you want to join two huge tables unselectively for a bulk data operation.
    Using the NESTED LOOP operation is not effective for a big number of rows.
    Using the HASH JOIN operation is usually effective in such cases, but it has limits when the number of rows exceeds tens of millions.
    EQUIPARTITIONING is a way to make huge bulk operations on hundreds of millions of rows feasible. ...more
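The idea can be sketched outside SQL: partition both sides identically on the join key, then join only the matching partition pairs, so no single join ever touches the full row count. This is a minimal in-memory illustration in Python (the function name and `nparts` value are my assumptions, not the article's Oracle implementation):

```python
def partition_wise_join(left, right, key, nparts=8):
    """Hash-partition both row sets on the join key, then join matching partitions only."""
    lparts = [[] for _ in range(nparts)]
    rparts = [[] for _ in range(nparts)]
    for row in left:
        lparts[hash(row[key]) % nparts].append(row)
    for row in right:
        rparts[hash(row[key]) % nparts].append(row)
    out = []
    for lp, rp in zip(lparts, rparts):   # each partition pair is joined independently
        index = {}
        for r in rp:
            index.setdefault(r[key], []).append(r)
        for l in lp:
            for r in index.get(l[key], []):
                out.append({**l, **r})
    return out
```

Because equal keys always hash to the same partition on both sides, the partition-by-partition result equals the full join, while each step works on roughly 1/nparts of the data.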
    Ridiculous (well just for nonparticipants) behaviour of LEFT JOIN in Oracle 11g
        by Ludek Bob Jankovsky, 12-Apr-2011 (ORACLE STUFFS)
    There is a great feature in Oracle 11g named Optimizer join elimination. It is designed to reduce database effort in the following way:
    When there is a left-outer-joined table in a query and no column from that table is used in the result set, the optimizer simplifies the query and doesn't access the outer-joined table, as it is not necessary.
    It sounds great, doesn't it?
    Well, it would sound well, if there were no bug.  ...more
    Rigid Exchange Partition Data Acquisition approach
        by Ludek Bob Jankovsky, 05-Feb-2010 (ETL PATTERNS)
    There are several ways to support consistency of data warehouses and data marts within the business-hours period. In standard simple approaches, it usually squeezes the loading time window into a really short period between the readiness of source data and the required readiness of target data. The following approach is just another way to keep all the data in the mart consistent at any moment; anyway, I consider it interesting stuff. ...more
    Unpivot in the Oracle 10g
        by Ludek Bob Jankovsky, 20-Apr-2009 (ORACLE STUFFS)
    The unpivot operation available in Oracle 11g raised a wave of interest in the same functionality in the 10g version of Oracle. Unfortunately, there is just a workaround available. ...more
    TBD STATUS indexing in Oracle
        by Ludek Bob Jankovsky, 08-Mar-2009 (ORACLE STUFFS)
    It is not about a special type of index. It is just a technique of indexing columns that indicate a TO BE DONE status. What is common for such cases:
    • One or several statuses marking a few TO BE PROCESSED records.
    • A majority of completed records.
    • Updates of the field are supposed.
    What type of indexing of the field is the best?  ...more
    Adding new mandatory columns with specified default 11g vs. 10g
        by Ludek Bob Jankovsky, 28-Feb-2009 (ORACLE STUFFS)
    I am sure you have met the issue sometimes. You are adding a new mandatory column to a very large table. Oracle starts updating all rows with the default value of the column - as if it would never finish. It takes a huge amount of time in the data definition phase of migration scripts - not a very agreeable issue. Let's see how it was solved in the 10g (the same as 9i) version of Oracle and what is different in Oracle 11g.  ...more
    How to avoid wasting sequences in differential merge pattern
        by Ludek Bob Jankovsky, 25-Feb-2009 (ETL PATTERNS)
    In Oracle 10g and 11g, getting values from sequences during MERGE behaves a bit strangely. Although you define getting the sequence value directly in the WHEN NOT MATCHED clause, the sequence is acquired and wasted even if the record is matched. According to the documented behaviour, the sequence will be acquired for every source record. That is bad news for the usage of the mentioned construction in the DIFF_MERGE pattern.
    Let's see what to do with it. ...more
    Tidy your automatically maintained time partition names in 11g
        by Ludek Bob Jankovsky, 25-Feb-2009 (ORACLE STUFFS)
    Oracle 11g has introduced new functionality for time-interval partition management. One of the disadvantages of the new approach is the messy system-generated names of partitions. It would be a pity not to use the new sophisticated approach just because of such an issue. Better to include a simple solution that renames generated partitions based on their metadata definition. ...more
    Intra-field list pivot and unpivot
        by Ludek Bob Jankovsky, 17-Feb-2009 (ORACLE STUFFS)
    Aside from standard unpivot tasks (we create n rows using values from n columns), the following unpivot task appears:
    One record with comma-separated values in a column causes the creation of several rows, one for each part of the comma-separated field.  ...more
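The Oracle technique is behind the ...more link; purely as a sketch of the logical operation described above (illustrative Python, with a hypothetical function name, not the article's SQL), the intra-field unpivot does this:

```python
def unpivot_intra_field(rows, col):
    """Expand each row's comma-separated column into one output row per value."""
    out = []
    for row in rows:
        for value in row[col].split(","):
            expanded = dict(row)        # copy all other columns unchanged
            expanded[col] = value.strip()
            out.append(expanded)
    return out

rows = [{"id": 1, "tags": "red,green"}, {"id": 2, "tags": "blue"}]
# → [{'id': 1, 'tags': 'red'}, {'id': 1, 'tags': 'green'}, {'id': 2, 'tags': 'blue'}]
```

The pivot direction is the inverse: group rows by the other columns and join the values back into one comma-separated field.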
    All Rights Reserved © 2007, Designed by Bob Jankovsky