Different satellite load pattern for immutable data?

Hi all,

I was thinking today about entities that are largely immutable. We have a number of scenarios where once an entity is defined in our source it’s never changed. Those familiar with dimensional models will have many examples popping off the top of their heads.

We currently use the same load pattern for all of our satellite tables. Is it worth switching to a different query for immutable records, one that could be better optimized by skipping the diff calculation on every load, or would you recommend using the same pattern everywhere?
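To make the trade-off concrete, here is a minimal sketch of the two load patterns in Python/SQLite. The table and column names (`sat`, `hk`, `hashdiff`, `load_dts`) are invented for illustration; a real warehouse implementation would be set-based SQL, but the logic is the same: the standard pattern hashes every staged row and compares digests, while the immutable pattern only checks key existence.

```python
import hashlib
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("CREATE TABLE sat (hk TEXT, hashdiff TEXT, attr TEXT, load_dts INTEGER)")

def hashdiff(attr):
    # Record digest over the descriptive attributes.
    return hashlib.md5(attr.encode()).hexdigest()

# Standard satellite pattern: hash every staged row and insert only where
# the digest differs from the latest satellite row for that key.
def load_standard(batch, dts):
    for hk, attr in batch:
        hd = hashdiff(attr)
        latest = cur.execute(
            "SELECT hashdiff FROM sat WHERE hk = ? ORDER BY load_dts DESC LIMIT 1",
            (hk,)).fetchone()
        if latest is None or latest[0] != hd:
            cur.execute("INSERT INTO sat VALUES (?, ?, ?, ?)", (hk, hd, attr, dts))

# Immutable pattern: no digest, no comparison -- a key is either already
# in the satellite or it is brand new.
def load_immutable(batch, dts):
    cur.executemany(
        "INSERT INTO sat (hk, hashdiff, attr, load_dts) "
        "SELECT ?, NULL, ?, ? WHERE NOT EXISTS (SELECT 1 FROM sat WHERE hk = ?)",
        [(hk, attr, dts, hk) for hk, attr in batch])

load_immutable([("k1", "a"), ("k2", "b")], 1)
load_immutable([("k1", "a"), ("k3", "c")], 2)   # k1 skipped: key already loaded
load_standard([("k4", "x")], 3)                 # new key: inserted with its digest
load_standard([("k4", "x")], 4)                 # unchanged digest: no insert
print(cur.execute("SELECT COUNT(*) FROM sat").fetchone()[0])  # 4
```

The immutable pattern never touches the payload columns during the load check, which is where the savings come from on wide satellites.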

I'd add a word about QA: the immutable-records sat would obviously need additional testing to catch source data issues, i.e. the case where a record is in fact updated when it shouldn't be.
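That QA check is cheap to express: flag any staged row whose key already exists in the satellite but whose attributes differ. A sketch in Python/SQLite, with invented table and column names (`stg`, `sat`, `hk`, `attr`):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.executescript("""
CREATE TABLE stg (hk TEXT, attr TEXT);
CREATE TABLE sat (hk TEXT, attr TEXT);
INSERT INTO sat VALUES ('k1', 'a'), ('k2', 'b');
INSERT INTO stg VALUES ('k1', 'a'), ('k2', 'CHANGED'), ('k3', 'c');
""")

# Flag staged rows whose key already exists in the satellite but whose
# attributes differ: an "immutable" record the source has mutated.
violations = cur.execute("""
    SELECT s.hk
    FROM stg s
    JOIN sat t ON t.hk = s.hk
    WHERE t.attr <> s.attr
""").fetchall()
print(violations)  # [('k2',)]
```

Run as a scheduled test (in dbt this would be a singular test that passes when it returns zero rows), this catches exactly the failure mode the insert-only load would silently ignore.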

All the best!
Frankie

Non-historised links and satellites? We don't add HashDiffs to those because, by definition, the records being loaded are always new, so why bother calculating a record digest?

Isn't this just adding even more compute to our use case? We have business keys for each instance of the entity; we just want to skip the diff check. Wouldn't adding the NHL and the satellite still lead to the same optimization problem at the technical level, in which the satellite loading pattern still checks for differences in the history when that's not really a required step?

We're using a dbt package for templating DV tables, so if the sat off an NHL has a different load pattern than the sat off a hub, then we'd have to shuffle some things around.
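For reference, if the package in question is AutomateDV, a standard satellite model is templated roughly like the sketch below. The model name and all column names are hypothetical, and the macro parameters are from my reading of the docs, so treat the exact signature as an assumption:

```sql
-- models/sat_customer_details.sql (hypothetical model)
-- The SQL that AutomateDV generates for this macro is what performs the
-- hashdiff comparison against the latest satellite record per key.
{{ automate_dv.sat(
    src_pk="CUSTOMER_HK",
    src_hashdiff="CUSTOMER_HASHDIFF",
    src_payload=["NAME", "DOB"],
    src_ldts="LOAD_DATETIME",
    src_source="RECORD_SOURCE",
    source_model="stg_customer"
) }}
```

The point being: the diff logic lives inside the generated SQL, not in the model file, so swapping to an insert-only pattern means swapping the macro, not rewriting models.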

How? I don't have to calculate a digest value, and I don't do a hashdiff comparison. Hashing is expensive.
Building an NHL means you don't have a child satellite, no? What am I missing in your problem statement?
I don't believe AutomateDV has an NHL pattern? Or does it now?

Let me get back to you on this; maybe I need to brush up on my NHL knowledge.