Data Modelling tool vendors are letting us down

In his latest post on the ERWIN Modelling Expert blog, Malcolm Chisholm describes a Use Case for data modelling that is critical to the success of Information Management and Service-Oriented Architectures: the management of common message formats in an Enterprise Service Bus.

I agree wholeheartedly with Malcolm’s comments.  A big stumbling block so far has been the poor integration with XML Schemas (XSDs) provided by most data modelling tools.  For successful management of XML, these tools must provide a dedicated XML modelling facility, completely integrated with their generation and traceability facilities, allowing us to forward-engineer XSDs in a controlled, repeatable fashion.  Most data modelling tools do allow us to generate XSDs, and they keep the settings we used to generate them, but they keep no record of what we’ve generated or when.  The saved settings let us repeat the process, but there is no traceability, nothing we can use for impact analysis.
It’s analogous to generating Oracle database schemas directly from Logical Data Models, without a dedicated Physical Data Model to describe each schema.  If we did that, we’d never be able to tell where our data lives in our databases, and it would be difficult to tell whether a schema had been tinkered with since it was generated.  As Yoda might say, “Back in the Dark Ages, we would be”.
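To make the traceability point concrete, here is a minimal sketch of what a controlled, repeatable generation step could look like.  It is not taken from any vendor’s tool: the toy logical model, the MODEL_VERSION value and the generation_log.json file are all illustrative assumptions.  The idea is simply that each run renders an XSD from the model and records which model version produced which file, when, and with what content hash, so that later hand edits can be detected and impact analysis becomes possible.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical, minimal logical model: entity name -> {attribute: XSD type}.
LOGICAL_MODEL = {
    "Customer": {"customerId": "xs:integer", "name": "xs:string", "joined": "xs:date"},
}
MODEL_VERSION = "1.4"  # assumed to come from the modelling tool or repository


def generate_xsd(entity: str, attributes: dict) -> str:
    """Render a simple element-per-attribute XSD for one entity."""
    elements = "\n".join(
        f'        <xs:element name="{name}" type="{xsd_type}"/>'
        for name, xsd_type in attributes.items()
    )
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="{entity}">
    <xs:complexType>
      <xs:sequence>
{elements}
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
"""


def main() -> None:
    log_entries = []
    for entity, attributes in LOGICAL_MODEL.items():
        xsd = generate_xsd(entity, attributes)
        filename = f"{entity}.xsd"
        with open(filename, "w", encoding="utf-8") as f:
            f.write(xsd)
        # Record what was generated, from which model version, and when,
        # plus a content hash so later edits to the XSD can be detected.
        log_entries.append({
            "source_entity": entity,
            "model_version": MODEL_VERSION,
            "output_file": filename,
            "generated_at": datetime.now(timezone.utc).isoformat(),
            "sha256": hashlib.sha256(xsd.encode("utf-8")).hexdigest(),
        })
    with open("generation_log.json", "w", encoding="utf-8") as f:
        json.dump(log_entries, f, indent=2)


if __name__ == "__main__":
    main()
```

That generation log is the piece most tools are missing: the saved settings tell us how to regenerate, but only a record like this tells us what was actually generated, from which version of the model, and whether it has changed since.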
Some UML tools use a special profile to model XML schemas, and one mainstream data modelling tool I know of has a dedicated XML Model, which can be generated from a relational Physical Data Model and will soon be able to be generated from Conceptual and Logical Data Models as well.  At least one mainstream metadata repository product provides similar capabilities.  Where are the rest of the data modelling tool vendors?  They’re letting Information Management down.
