Hello,
I am quite a beginner in DV, but I have several years of experience in dimensional modeling.
I have some questions/challenges raised by DV itself and by the new practices that appeared with modern cloud data warehouses, which now allow us to do things we could not do in the past.
Here’s my situation:
I am modeling the Restaurant entity.
In the company, there is no viable business key for this: restaurants have no attribute that identifies them 100% uniquely. Hence we have to use the IDs provided by the source systems.
We have 5 systems managing restaurants (NextGen, Legacy, Salesforce, Fenix, TA); each has its own PK for a restaurant, but they also all reference the PK of another system.
You will find in my post the data model for the Restaurant table of each system (Salesforce has two: leads and converted accounts).
All source tables are snapshotted, so the snapshot date is also part of the primary key of each table (these are source tables ingested into our data lake).
Users want to see the image of a restaurant at a given point in time.
They also want that view from a source-system perspective (so we must be able to provide source IDs at all times).
My main challenge: the matches between system IDs change over time, and we want to keep the image of the match as it was at that point in time. The latest version of the FKs is not necessarily the right one; we’d rather have it as it was at the date we are looking at.
How would you suggest modeling this the DV way?
I’d like to be able to have a simple HUB_Restaurant table and satellites with all the attributes coming from each source system.
I have several approaches in mind, but I don’t have enough experience to guess what kind of problems I may stumble into later.
You need consistent keys, full stop.
There must be a business key that represents your restaurant uniquely and forever.
How are these businesses represented? Surely by something like a company registration number.
Use the snapshot date as metadata for each loaddate.
Use one hub-restaurant with a BKCC, or hub-nextgen, hub-legacy, etc. with no BKCC. I prefer the latter option because it won’t mix different “styles” of identification in one hub.
Create a sat for nextgen-restaurant (name), BK hub-nextgen. Create an RTS for hub-nextgen.
Create a sat for legacy-restaurant (name), BK hub-legacy. Create a link between hub-legacy and hub-nextgen. Create an RTS for the link.
Do the same kind of modelling for the remaining objects.
Your model will then be able to handle data/relations/BKs that change over time. You will have 5 autonomous processes for loading your data. You will not lose any data with this model.
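To make that concrete, here is a minimal Python sketch of what those structures could look like. The MD5 hash keys and every column/function name below are my own illustration, not something prescribed in this thread:

```python
import hashlib
from datetime import date

def hash_key(*parts: str) -> str:
    """Deterministic hash key built from business key parts."""
    return hashlib.md5("||".join(p.strip().upper() for p in parts).encode()).hexdigest()

# One hub per identification "style", so no BKCC is needed.
def hub_nextgen_row(nextgen_id: str, load_date: date, source: str) -> dict:
    return {
        "hk_nextgen_restaurant": hash_key(nextgen_id),
        "nextgen_id": nextgen_id,
        "load_date": load_date,
        "record_source": source,
    }

# Link between hub-legacy and hub-nextgen; an RTS on this link tracks
# when each id pairing was observed, so matches that change over time
# are preserved rather than overwritten.
def link_legacy_nextgen_row(legacy_id: str, nextgen_id: str,
                            load_date: date, source: str) -> dict:
    return {
        "hk_link_legacy_nextgen": hash_key(legacy_id, nextgen_id),
        "hk_legacy_restaurant": hash_key(legacy_id),
        "hk_nextgen_restaurant": hash_key(nextgen_id),
        "load_date": load_date,
        "record_source": source,
    }
```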
You can later “clean up” the mess in BV.
Hope this helps!
Stop pretending that you don’t understand.
AppliedDate = businessdate = snapshotdate. If you don’t want data redundancy, then assign the snapshotdate to the loaddate. Loaddate is still the time when the data enters the DV.
Use a BKCC or different hubs. If you accept messy BKs in your hubs, use a BKCC. If you want less messy hubs, but more of them, use more hubs. By now you know my preference.
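In case a sketch helps, here is one way to read the applied-date variant: the snapshot date rides along as applied_date while a hashdiff keeps redundant rows out. All names and the hashdiff approach are my own illustration, not from the thread:

```python
import hashlib
from datetime import date

def hashdiff(attrs: dict) -> str:
    """Hash of the descriptive attributes, used to detect real changes."""
    payload = "||".join(f"{k}={attrs[k]}" for k in sorted(attrs))
    return hashlib.md5(payload.encode()).hexdigest()

def sat_rows_from_snapshots(snapshots: list[tuple[date, dict]],
                            hub_hk: str, load_date: date) -> list[dict]:
    """Emit a satellite row only when the attributes actually changed.
    The snapshot date is kept as applied_date; load_date stays the
    moment the row entered the DV."""
    rows, last_diff = [], None
    for snapshot_date, attrs in sorted(snapshots, key=lambda s: s[0]):
        diff = hashdiff(attrs)
        if diff != last_diff:
            rows.append({
                "hub_hk": hub_hk,
                "applied_date": snapshot_date,  # = business/snapshot date
                "load_date": load_date,
                "hashdiff": diff,
                **attrs,
            })
            last_diff = diff
    return rows
```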
Maybe you can provide us with a suggestion of a model that answers the original topic in this thread?
Kind regards!
Thank you all for your answers; this feeds my thinking a lot.
I haven’t fully cracked it yet.
They do not exist, unfortunately. We do not capture anything that could play that role (for instance a company registration number, which in theory would be a good candidate).
Those businesses are represented by their name plus their address.
Both are free text entered in our apps, so data quality is very bad.
So no actual business key available.
Would this mean that DV does not work here?
Mapping between Contexts. No overlap exists between contexts and no way to integrate the same business objects. No issue will exist in the integration to a common hub table however you will not be able to relate business objects between contexts (if they do exist) unless a mapping is supplied by either context (source-domain) or the solution is supported by master data management. Here you will likely see the use of a “same-as link” table for the mapping between contexts to solve equi-joins that overlap contexts — in effect, a context map.
This excerpt, from the article linked in the first answer, is a trail worth following.
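For what it’s worth, here is how such a same-as link could be sketched, with an applied date so the mapping can be read as of a point in time. All names and the MD5 hashing are illustrative assumptions:

```python
import hashlib
from datetime import date

def hash_key(*parts: str) -> str:
    return hashlib.md5("||".join(p.strip().upper() for p in parts).encode()).hexdigest()

# Same-as link: asserts that two hub entries refer to the same real-world
# restaurant. Keeping an applied_date per assertion lets you reconstruct
# the mapping as it stood on any given date.
def same_as_row(master_bk: str, duplicate_bk: str,
                applied_date: date, load_date: date, source: str) -> dict:
    return {
        "hk_same_as": hash_key(master_bk, duplicate_bk),
        "hk_master": hash_key(master_bk),
        "hk_duplicate": hash_key(duplicate_bk),
        "applied_date": applied_date,
        "load_date": load_date,
        "record_source": source,  # e.g. a manual mapping table
    }
```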
I know that, and I completely agree with you.
Unfortunately, investing in an MDM infrastructure is not in our near-term plans, so in the meantime I have to find a solution.
Sorry to hear that.
I’m sure there’s a Dilbert cartoon just for that situation.
You might need to build a small match/merge MDM-lite with a basic interface just for this, in Python or something, then use that as the source for your data vault.
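A very rough sketch of what the match step of such an MDM-lite could look like, using name + address similarity. The normalization, scoring, and threshold are assumptions for illustration, not a recommendation:

```python
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Crude normalization for free-text names and addresses."""
    return " ".join(text.lower().split())

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def match_restaurants(candidates: list[dict],
                      threshold: float = 0.85) -> list[tuple[str, str]]:
    """Propose (source_id_a, source_id_b) pairs whose name + address look
    alike. Proposed pairs would still be confirmed by hand in the UI."""
    pairs = []
    for i, a in enumerate(candidates):
        for b in candidates[i + 1:]:
            score = (similarity(a["name"], b["name"])
                     + similarity(a["address"], b["address"])) / 2
            if score >= threshold:
                pairs.append((a["source_id"], b["source_id"]))
    return pairs
```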
Yes! This is exactly what we do: we set up an MDM-lite with mapping tables that we use to manually map the source values to the ones we want to report on (dimension table values). It works pretty well, but it creates some extra workload for the BI team.
Now I’m going to find a way to integrate this into DV. Or maybe not DV.