All — we know a Data Vault is an ever-evolving model: we keep adding new tables to it for rapid delivery. How do we make sure testing is still performed as expected? Do we build a regression suite? Is there a framework or set of best practices for this?
Parameterise this: the data-auto-testing folder in PatrickCuba/the_data_must_flow (master branch) on GitHub.
Parameterise how? Can this be included in the end-to-end ETL/script/pipeline framework? And what key KPIs/metrics does it capture?
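To make the "parameterise" idea concrete, here is a minimal sketch of a metadata-driven regression check. All table and column names (`hub_customer`, `hk_customer`, etc.) are hypothetical, and the check shown (no duplicate hash keys in a hub) is just one example metric; a real suite would generate the `DV_TESTS` list from Data Vault metadata so that newly added hubs, links, and satellites are picked up automatically, and would report metrics such as duplicate-key counts, orphaned satellites, and source-to-target row reconciliation.

```python
import sqlite3

# Hypothetical registry of tables and their hash-key columns. In practice this
# would be generated from Data Vault metadata, so new tables are tested
# automatically without editing the test code (the "parameterise" idea).
DV_TESTS = [
    ("hub_customer", "hk_customer"),
    ("hub_order", "hk_order"),
]

def duplicate_key_count(conn, table, key_col):
    """Count hash keys that appear more than once in a hub/link table."""
    sql = (
        f"SELECT COUNT(*) FROM ("
        f"SELECT {key_col} FROM {table} GROUP BY {key_col} HAVING COUNT(*) > 1)"
    )
    return conn.execute(sql).fetchone()[0]

def run_regression(conn):
    """Run the duplicate-key check for every registered table; return failures."""
    failures = {}
    for table, key_col in DV_TESTS:
        dupes = duplicate_key_count(conn, table, key_col)
        if dupes > 0:
            failures[table] = dupes
    return failures

# Demo against an in-memory database with a deliberate duplicate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE hub_customer (hk_customer TEXT, bk TEXT)")
conn.execute("CREATE TABLE hub_order (hk_order TEXT, bk TEXT)")
conn.executemany("INSERT INTO hub_customer VALUES (?, ?)",
                 [("h1", "A"), ("h1", "B")])  # duplicate hash key
conn.execute("INSERT INTO hub_order VALUES ('h9', 'X')")

failures = run_regression(conn)
print(failures)  # → {'hub_customer': 1}
```

Because each check is driven by the registry rather than hand-written per table, the same script can run as a post-load step inside an end-to-end ETL pipeline, failing the run (or just logging metrics) whenever a check reports a non-zero count.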