When building a database-backed project using an ORM, developers can choose between starting with database tables, classes, or abstract models. To open the debate, we offer you Frans Bouma’s argument that “Code-first O/R mapping is actually rather silly.”
Starting with code in the form of entity classes is just as odd as starting with a table: both require reverse engineering to the abstract entity definition to create the element 'on the other side'. Reverse engineering the class to the abstract entity definition to create a table and the mappings is equivalent to reverse engineering a table to a class and creating the mappings. The core issue is that if you start with a class or a table, you start with the end result of a projection of an abstract entity definition: the class didn't fall out of the sky, it was created after one determined the domain had such a type, e.g. 'Customer', with a given set of fields: Id, CompanyName, an address, etc.
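Bouma's projection argument can be made concrete in a few lines. The debate is about .NET tooling, but the following stdlib-Python sketch (the `ENTITY` dictionary and all names are hypothetical) illustrates the idea that both the class and the table are derived from one abstract entity definition:

```python
import sqlite3
from dataclasses import make_dataclass

# Hypothetical abstract entity definition: a name plus field definitions.
# Both the class and the table below are projections of this one source.
ENTITY = {
    "name": "Customer",
    "fields": [("id", int, "INTEGER"), ("company_name", str, "TEXT")],
}

# Projection 1: a class generated from the definition.
Customer = make_dataclass(
    ENTITY["name"], [(f, t) for f, t, _ in ENTITY["fields"]]
)

# Projection 2: a table generated from the same definition.
ddl = "CREATE TABLE {} ({})".format(
    ENTITY["name"],
    ", ".join(f"{f} {sql}" for f, _, sql in ENTITY["fields"]),
)
conn = sqlite3.connect(":memory:")
conn.execute(ddl)

# Neither artifact is the source of truth; both derive from ENTITY.
print(Customer(1, "Acme"))  # Customer(id=1, company_name='Acme')
```

In this framing, "code first" and "database first" both mean reverse engineering one projection back to the definition in order to produce the other.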
He continues,
I know the whole idea of 'code first' comes from the fact that developers want to write code and think in code and want to persist objects to the database, but that's not what happens: you don't persist objects to a database, you persist their contents, which are the entity instances. It might very well be that an entity class instance (so an object) contains more data than the entity instance, so storing 'the object' then doesn't cover it. Serializing an object is a good metaphor here: with serialization, the object isn't serialized, but a subset of its data, to a form which might not match its source. When deserializing the data into, say, JavaScript objects, are we then still talking about the original .NET object? No, of course not; it's about the data inside the object, which lives on elsewhere.
Isn't it then rather odd that when serializing 'objects' to JSON, the overall consensus is that the data is serialized, but when the same object is serialized to a table row, it's actually persisted as a whole? If you are still convinced O/R mapping is about persisting objects, what happens with 'your object', persisted to a table row, if that object is read by a different application, which targets the same database, and which doesn't use an O/R mapper? That application, written in an entirely different language even, can perfectly well read and consume the entity instance stored in the table row, without even knowing you considered it a persisted .NET object. Because, surprise, the contents of the table row aren't a persisted object; they're a persisted entity instance, an instance of an abstract entity definition, not an instance of a class definition.
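The serialization analogy can be sketched directly (a stdlib-Python illustration; the class and field names are hypothetical). Only the entity's data survives serialization; runtime-only state and the class identity do not:

```python
import json

class Customer:
    def __init__(self, id, company_name):
        self.id = id
        self.company_name = company_name
        self._lookup_cache = {}  # runtime-only state, not part of the entity

c = Customer(1, "Acme")

# Only the entity's data is serialized, not "the object": the cache and
# the class identity are gone, and a consumer in any language can read
# the result without knowing the source class ever existed.
payload = json.dumps({"id": c.id, "company_name": c.company_name})
print(payload)  # {"id": 1, "company_name": "Acme"}
```

A table row plays the same role as the JSON payload here: it holds the entity's data, not the object.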
Reddit user remy_porter has a different take on the issue,
I think the problem is that Model First in EF is terrible. I hate the GUI tool that it forces you into.
My favorite way is to use Code First to do a meet-in-the-middle approach: I write my object model in the way that makes the most sense, I write my database model in the way that makes the most sense, and then I use the Fluent API to make them match.
Although, I do admit, I mostly use it to blindly throw object graphs at the database because I told management we should be taking a NoSQL approach to this application, but they ignored me (the data model is about storing documents with variable structures, but they demanded SQL Server).
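remy_porter's meet-in-the-middle approach relies on EF's Fluent API; the shape of the idea can be approximated with stdlib Python (the table, column, and mapping names below are all hypothetical):

```python
import sqlite3

# The class keeps the shape that suits the object model...
class Customer:
    def __init__(self, customer_id, company_name):
        self.customer_id = customer_id
        self.company_name = company_name

# ...the table keeps the shape that suits the database, and an explicit
# attribute -> column mapping reconciles the two, playing the role the
# Fluent API plays in remy_porter's setup.
CUSTOMER_MAP = {"customer_id": "CUST_ID", "company_name": "COMPANY_NM"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TBL_CUSTOMER (CUST_ID INTEGER, COMPANY_NM TEXT)")

def save(customer):
    # Translate attributes to columns through the mapping, not by name.
    columns = ", ".join(CUSTOMER_MAP.values())
    placeholders = ", ".join("?" for _ in CUSTOMER_MAP)
    values = [getattr(customer, attr) for attr in CUSTOMER_MAP]
    conn.execute(
        f"INSERT INTO TBL_CUSTOMER ({columns}) VALUES ({placeholders})",
        values,
    )

save(Customer(1, "Acme"))
```

Neither side is contorted to fit the other; the mapping absorbs the mismatch.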
Nishruu prefers it for testing,
Yeah, I usually use the Fluent API to map the existing database, which I design first.
The only place where code-first is actually useful is for quick integration/unit tests with either an in-memory SQLite DB or some kind of LocalDB: then you can quickly and roughly re-create the database structure for testing.
It's especially nice in NHibernate, where it works really well with an in-memory SQLite DB. With EF, not so much; I'm using LocalDB and just re-creating the schema from an MDF file...
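Nishruu's testing pattern, rebuilding the schema in a throwaway in-memory database for each test, looks roughly like this outside the .NET world (a stdlib-Python sketch; the schema and test names are illustrative):

```python
import sqlite3
import unittest

# Because the schema lives in code, a disposable database can be rebuilt
# for every test, which is the appeal of code-first for integration tests.
SCHEMA = "CREATE TABLE customer (id INTEGER PRIMARY KEY, company_name TEXT)"

class CustomerRepositoryTest(unittest.TestCase):
    def setUp(self):
        # Fresh in-memory database per test; nothing to clean up afterwards.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(SCHEMA)

    def test_insert_and_read(self):
        self.conn.execute(
            "INSERT INTO customer (company_name) VALUES (?)", ("Acme",)
        )
        row = self.conn.execute(
            "SELECT company_name FROM customer"
        ).fetchone()
        self.assertEqual(row[0], "Acme")
```

Each test starts from a known-empty schema, so tests cannot leak state into one another.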
InfoQ invites you to read Frans Bouma’s full argument and then tell us your opinion.