Sharing data among applications in a complex corporate IT environment is unfortunately often reduced to sharing a common database, or in some cases a cube. New data sharing approaches started to emerge with SOA practices, but they brought managerial difficulties in deciding who is in charge among concurrent projects, and technical difficulties in implementing the service layer logic that retrieves or updates the data, since standard SOAP web services are often designed around very application-specific session management.
.NET 4.0 introduces many industrialization tools that bring the idea of an application-independent SOA data repository within reach. Three features are necessary to make SOA data services flexible and non-intrusive, in other words friendly to their application customers:
1. Organized Hierarchical Data Browsing
Because of our instinctive use of operating systems' hierarchical folders, we often forget that hierarchical classifications of data are merely a custom view on a repository, and that this view depends on application-centric logic. Providing several views to navigate the same data creates flexibility towards new application needs: time-based classification, author classification, subject classification, customer classification, etc.
The URL syntax embodies a navigation hierarchy and can offer an intuitive way to browse data through a custom hierarchy; URL-based navigation routes have been popularized in REST web applications thanks to ASP.NET MVC and are also supported in web services by the oData protocol and the WCF Data Services layer (available as an add-on for .NET 3.5 and natively in .NET 4.0).
HOW-TO: Creating an oData Service dealing with application authorizations
This tutorial presents the first steps to set up an oData service based on an existing EDMX:
Create a new WCF Service application in Visual Studio 2010
Add a new ADO.NET Entity Data Model to the solution
Create a new model in the Entity Designer. In our case the database is generated from the model, although with a legacy database the model can also be generated from the database. Our model uses a composite pattern (an AbstractApplicationItem self-reference).
Create an oData service linked to the previous model
Change the InitializeService method to define which operations are allowed on the model (a sketch of the service class is shown after the test requests below)
Test the model using JSON requests
http://server/HierarchicalBrowser.svc/Applications => List all available applications in the system.
http://server/HierarchicalBrowser.svc/Applications?$filter=Name eq 'Central'&$expand=ApplicationItems => List the application named 'Central' together with all of its application items.
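A minimal sketch of the service class referenced in the "Create an oData service" and "InitializeService" steps above; HierarchicalBrowserContainer is a hypothetical name for the object context generated from the model:

using System.Data.Services;
using System.Data.Services.Common;

// HierarchicalBrowserContainer: the ObjectContext generated from the entity model
// created in the previous steps (hypothetical name).
public class HierarchicalBrowser : DataService<HierarchicalBrowserContainer>
{
    // Called once to configure the service: which entity sets are visible
    // and which operations (read / write) are allowed on each of them.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // Read-only access on every entity set exposed by the model.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // Full access on the application items set (hypothetical set name).
        config.SetEntitySetAccessRule("AbstractApplicationItems", EntitySetRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}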
2. Model Abstraction, Services Virtualization
Designing data models with an ORM approach can be based on the database schema model or on the ORM entity model; one of the benefits of using an ORM is resolving the impedance mismatch through an automatic mapping between object-oriented design patterns (inheritance, class associations) and their database implementation. Object-oriented framework architects are more flexible towards requirement changes than database architects: they are expected to know how to abstract generic use cases through inheritance, how to abstract message flow schemas through service factories, how to keep the application host open to new business cases through IoC (Inversion of Control), etc.
Anticipating requirement changes should be one of the core competencies of a good object-oriented architect; those skills are often out of scope for a database architect, who is focused on performance optimization. For this reason, in a SOA data repository the data design should rely on the entity model rather than on its database schema implementation.
Model changes in an SOA context should not force dependent applications to update their code; with web service versioning, each application can choose the appropriate time to update its model. The Microsoft Managed Services Engine proposes an infrastructure where web services are “virtualized”, i.e. the web service model is separated from its implementation, the same way the entity model is separated from the database instance in ORMs. The Managed Services Engine supports web service versioning as well as the REST protocol.
3. Framework Extensibility
Abstracting the model from its implementation provides several benefits: the model becomes the host for additional metadata describing custom behaviors. Adding features to existing code through metadata has been widely used in AOP (Aspect-Oriented Programming), where reflection is used to add transversal features to the code, such as field validation and logging. The benefit of AOP is code independence: the transversal features are automatically available to any new object added to the code.
However, metadata attached to the entity model cannot be consumed through reflection; the entity modeler is nothing more than a DSL (Domain Specific Language), and therefore this metadata is consumed inside a code generation framework, based on T4 templates in the case of .NET 4.0.
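As a minimal illustration (not the article's actual templates), the T4 sketch below walks the conceptual model of a hypothetical OriginalModel.edmx using the EF.Utility.CS.ttinclude helper shipped with Visual Studio 2010 and emits one companion auditing class per entity; the templates described in the HOW-TO below generate a derived .edmx instead:

<#@ template language="C#" debug="false" hostspecific="true" #>
<#@ include file="EF.Utility.CS.ttinclude" #>
<#@ import namespace="System.Linq" #>
<#@ import namespace="System.Data.Metadata.Edm" #>
<#@ output extension=".cs" #>
<#
// Load the conceptual model of the (hypothetical) OriginalModel.edmx.
MetadataLoader loader = new MetadataLoader(this);
EdmItemCollection itemCollection = loader.CreateEdmItemCollection("OriginalModel.edmx");

// One companion class per entity: the transversal feature becomes available
// for any new entity added to the model, without writing any new code.
foreach (EntityType entity in itemCollection.GetItems<EntityType>().OrderBy(e => e.Name))
{
#>
public partial class <#= entity.Name #>AuditEntry
{
    public int AuditId { get; set; }
    public string Operation { get; set; }          // Insert / Update / Delete
    public System.DateTime ChangeDate { get; set; }
}
<#
}
#>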
Using T4 templates on the entity model, the following features can be added to the data repository services and become automatically available for any new entity in the model, in a fully transversal (orthogonal) approach. An example of a transversal auditing feature is provided below.
HOW-TO: Create a transversal Auditing service with Update / Delete / Insert operations tracking
This tutorial explains how to update an Entity Framework and RIA Services application to log all entity data changes in external auditing tables. Auditing referential changes in external tables is a need that has been well addressed by the Envers framework in the Hibernate/Java community.
Add a new T4 file to an existing project containing a .edmx file. The new .tt file should take the original .edmx file as input (OriginalModel.edmx) and generate a derived .edmx (extension)
Generate, for each Entity in the original .edmx, a derived Entity in the generated .edmx
Apply the T4 templates available in the Online Templates gallery to the newly created .edmx file.
In order to instantiate new web service instances on the server side without any client call, create a dummy service provider.
In the original RIA Services file generated from the initial edmx file, override PersistChangeSet to call the Auditing Service (both steps appear in the sketch below).
Add the following method to the RIA Auditing Service:
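The listing below is a minimal sketch covering these last three steps; the names OriginalModelContainer, AuditModelContainer, AuditEntry and AuditEntries are hypothetical placeholders for the classes generated from the original and derived models:

using System;
using System.ServiceModel.DomainServices.EntityFramework;
using System.ServiceModel.DomainServices.Hosting;
using System.ServiceModel.DomainServices.Server;

// Dummy service provider: allows a domain service to be instantiated on the
// server side without any client call (previous step).
public class DummyServiceProvider : IServiceProvider
{
    public object GetService(Type serviceType) { return null; }
}

// Domain service generated from the original edmx (hypothetical context name);
// the override forwards every submitted change set to the auditing service
// before the standard persistence takes place.
[EnableClientAccess]
public partial class OriginalDomainService : LinqToEntitiesDomainService<OriginalModelContainer>
{
    protected override bool PersistChangeSet()
    {
        AuditingDomainService auditing = new AuditingDomainService();
        auditing.Initialize(new DomainServiceContext(new DummyServiceProvider(),
                                                     DomainOperationType.Submit));
        auditing.LogChangeSet(this.ChangeSet);
        return base.PersistChangeSet();
    }
}

// Auditing domain service based on the derived (generated) auditing model.
[EnableClientAccess]
public partial class AuditingDomainService : LinqToEntitiesDomainService<AuditModelContainer>
{
    // Writes one audit row per Insert / Update / Delete carried by the change set.
    // [Ignore] keeps this helper out of the generated client-side domain context.
    [Ignore]
    public void LogChangeSet(ChangeSet changeSet)
    {
        foreach (ChangeSetEntry entry in changeSet.ChangeSetEntries)
        {
            if (entry.Operation == DomainOperation.None ||
                entry.Operation == DomainOperation.Query)
            {
                continue;
            }

            this.ObjectContext.AuditEntries.AddObject(new AuditEntry
            {
                EntityName = entry.Entity.GetType().Name,
                Operation = entry.Operation.ToString(),
                ChangeDate = DateTime.UtcNow
            });
        }

        this.ObjectContext.SaveChanges();
    }
}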
With these steps you have configured tracking of all UPDATE / DELETE / INSERT operations in distinct tables. It is possible to go further and override the Query method of the original RIA Services in order to change the native RIA Services query behavior and create query strategies such as “go back in time”, i.e. querying data as if we were at a given past date.
T4 templates around the entity model can also be used for the following purposes:
- Usage statistics: track sensitive data through statistical analysis of entity set content: automatically create reports on specific kinds of data tables, or on groups of data tables linked by key associations
- Real-time cube aggregation: support for queries similar to the MDX syntax (specific to the Business Intelligence world) on memory-based structures. Servers can now reach 48 GB of memory, and memory-based structures have become efficient compared to pre-computed business intelligence cubes. Dimensions and aggregation logic can be defined automatically through T4 templates on the original entity model. A documented initiative to create a memory-based cube in .NET is available at Tales from a Trading Desk.
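As a rough illustration of the in-memory aggregation idea (the SaleFact type and its dimensions are hypothetical; in the generated scenario they would be derived from the entity model by T4), the following LINQ group-by plays the role of a simple two-dimension MDX query:

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical fact record: Country and Year act as dimensions, Amount as the measure.
public class SaleFact
{
    public string Country { get; set; }
    public int Year { get; set; }
    public decimal Amount { get; set; }
}

public static class InMemoryCube
{
    // Aggregates the measure over the two dimensions, the in-memory equivalent
    // of crossing Country and Year on the Amount measure in an MDX query.
    public static Dictionary<Tuple<string, int>, decimal> AggregateByCountryAndYear(
        IEnumerable<SaleFact> facts)
    {
        return facts
            .GroupBy(f => Tuple.Create(f.Country, f.Year))
            .ToDictionary(g => g.Key, g => g.Sum(f => f.Amount));
    }
}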
4. Investing in a .NET 4.0 Data Management Platform
SQL Server 2008 R2 Master Data Services is an alternative way to manage common repositories and data versions; the source code of its web-based administration interface is provided, and this interface relies on WCF services that can be integrated into any legacy application. Master Data Services supports hierarchical data, although updating a hierarchy from the native interface can sometimes be painful: the unique version of the truth is based on a model, including entities and relationships, and views are generated from this abstract model. Automatic view versioning means developers do not have to update their code when the model changes, custom Workflow Foundation rules can be added for data validation, and logging is natively supported. It answers most master data service needs except framework extensibility, where an Entity Framework / T4 approach offers many more options, especially in managing time (consulting data as it was at any given date, defining versioning schedules) and in integrating the data update process into legacy applications (linking business tables with referential tables using foreign keys).
Data management is too crucial to an IT strategy to let a software vendor fix all the rules. The IT department should lead the development of transversal behaviors related to master data management, such as human workflow and data validation. SQL Server 2008 R2 Master Data Services is a good way to start, but if business referential data is updated frequently, if applications need to view historical changes, or if advanced search or data aggregation features are required, a custom development based on Entity Framework and T4 will provide more options.