Over the past few years it has been hard to ignore the rise in popularity of RESTful approaches to building enterprise applications. We now seem to have moved beyond the REST vs WS-* debates, and beyond whether or not REST and SOA are complementary, to discussions around the maturity of REST-based implementations. Unfortunately even this seems to be an active area of confusion, debate and disagreement. When discussing maturity and REST in the same sentence, some individuals refer to the Richardson Maturity Model as the right approach to measure against. For instance, in his recent article Martin Fowler discusses the various levels in the model (summarised below, with a rough sketch of what each level might look like on the wire following the list):
- Level 1: Introducing the concept of Resources into your architecture.
- Level 2: Supporting HTTP verbs.
- Level 3: HATEOAS (Hypermedia As The Engine Of Application State).
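To make the levels concrete, here is a minimal, hedged sketch of how the same operation against a hypothetical orders service might look at each level. The `/orders` URIs, the link structure and the use of Python's `requests` library are illustrative assumptions, not taken from Fowler's article:

```python
import requests

BASE = "https://api.example.com"  # hypothetical service root

# Level 1 -- resources: each order has its own URI, but the intent is still
# tunnelled through a generic POST against that URI.
requests.post(f"{BASE}/orders/42", json={"action": "cancel"})

# Level 2 -- HTTP verbs: the uniform interface carries the intent, and the
# status code reports the outcome (e.g. 204 on success, 409 on conflict).
requests.delete(f"{BASE}/orders/42")

# Level 3 -- HATEOAS: the representation advertises what the client may do
# next, so the client follows a link instead of constructing the URI itself.
order = requests.get(f"{BASE}/orders/42").json()
# e.g. {"status": "open", "links": [{"rel": "cancel", "href": "/orders/42"}]}
cancel = next(link for link in order["links"] if link["rel"] == "cancel")
requests.delete(f"{BASE}{cancel['href']}")
```

The main shift from Level 2 to Level 3 in this sketch is where knowledge of the protocol lives: the client stops building URIs itself and instead acts on the links the server chooses to supply.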
As Martin states:
I should stress that the [Richardson Maturity Model], while a good way to think about what the elements of REST, is not a definition of levels of REST itself. Roy Fielding has made it clear that level 3 [Richardson Maturity Model] is a pre-condition of REST.
And Martin goes on to refer to a conversation he had with Ian Robinson that helped to put the model into context for them both:
[Ian] stressed that something he found attractive about this model [...] was its relationship to common design techniques. The result is a model that helps us think about the kind of HTTP service we want to provide and frame the expectations of people looking to interact with it.
- Level 1 tackles the question of handling complexity by using divide and conquer, breaking a large service endpoint down into multiple resources.
- Level 2 introduces a standard set of verbs so that we handle similar situations in the same way, removing unnecessary variation.
- Level 3 introduces discoverability, providing a way of making a protocol more self-documenting.
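As a hedged illustration of the Level 3 point about discoverability, the sketch below shows a client that knows only an entry point URI and a few link-relation names; every other URI is discovered from the representations it receives. The entry point, field names and rel values are assumptions made for the example, not part of any of the cited articles:

```python
import requests

def follow(representation: dict, rel: str) -> dict:
    """Dereference the link with the given relation name, assuming the server offers it."""
    link = next(l for l in representation.get("links", []) if l["rel"] == rel)
    return requests.get(link["href"]).json()

# The client hard-codes one entry point and the rel names it understands;
# everything else (URIs, pagination, resource layout) comes from the server.
root = requests.get("https://api.example.com/").json()
orders = follow(root, "orders")   # the server says where orders live
first = follow(orders, "first")   # ...and how to page through them
```

If the server later moves or renames these resources, only the link targets change; the client's knowledge of the protocol (the rel names it understands) still holds, which is the self-documenting quality the third bullet describes.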
Although this model appears to have some support, there are differences of opinion within the REST community. For instance, in this article, which also refers to Roy's discussion of what makes a RESTful system, the author states:
So, by Roy’s strict criteria, hypermedia is a *precondition* of REST. Anything else should not call itself REST. So the maturity model actually looks like this:
- Level One: not REST
- Level Two: not REST
- Level Three: REST
However, one of the commenters points out that:
The thing is, if you miss out *any* of the [maturity] levels, you end up with something that isn’t REST (although I’d replace “HTTP verbs” with “a small set of predefined, globally agreed verbs”). In terms of implementation, you can’t be at level 3 without having gone through level 1, so I think the order makes sense.
And now Subbu Allamaraju, author of the RESTful Web Services Cookbook, enters the debate with a recent article on using the Richardson Maturity Model. In fact he states up front that the model should not be used to determine the RESTful-ness of an application. As he states:
Judging an app based on what REST constraints it supports and not based on whether it chose the right constraints to meet the desired quality attributes is a pointless exercise. It is like criticizing an application because it chose to use an RDBMS and not a NoSQL store without looking at the qualities that lead to that choice. It is equally silly to conclude that your RESTful app achieved the "glory of REST" with its choice to use the hypertext constraint – what matters is whether it met any ilities that matter for that app.
This created an interesting back-and-forth on the comments section of the article. For instance, Mike Amundsen states:
While I would agree that it is a mistake to assume that an implementation that adheres to the Fielding’s REST constraints is automatically the right implementation for a given task, I do not accept the assertion that the very act of assessing an implementation’s adherence to a set of constraints (per REST, C2, etc) is “a pointless exercise” or “silly.” [...] what is the message you want to convey here? IOW, why _not_ assess compliance? - what do you _not_ get from assessment that you think is important? - what do you get from assessment that is _not_ helpful? is there something misleading that comes from assessment? are there some assumptions buried in the act of assessment that are dangerous? misleading? unhelpful?
And he follows up with his own interesting entry on the merits of determining the RESTful-ness of an application versus the usefulness of using REST for the application in the first place.
Subbu answers Mike's original comment with:
Assessment needs to have a context, and quality attributes provide that context. Assessment just around REST’s constraints may lead to poor/questionable decision making. [...] there is no universal goodness criteria.
To which Mike responds:
i think i see your POV. You are talking about early stage implementation behavior: “I will build a Web app today; these are the constraints I will use (because Fielding sez so, etc.).” In the case above, focusing on meeting some set of “constraints” is improper. As you would say, early work should focus on the “ilities” you wish to support. I assume then that you would still agree that _after_ identifying the qualities, it makes sense to select constraints that promote those qualities in the implementation (as Fielding does in his diss).
Later on in the comments, Ian Robinson enters the debate, agreeing that it may be unwise to use Richardson's model blindly:
[...] Leonard originally created his heuristic to help developers understand REST – that’s all. He does so by drawing analogies between some general and familiar software development practices (e.g. divide-and-conquer; do-the-same-old-same-old-things-in-the-same-old-same-old-way) and the application of Web technologies (give everything its own address, use HTTP the way it was intended to coordinate the transfer of representations).
And elsewhere Guilherme Silveira, the author of Restfulie, is looking to build on and extend Richardson's model to produce 5 steps towards REST Architecture Maturity which, unlike Richardson's model, are not tied to HTTP; a rough sketch of what some of these steps might look like in practice follows the list.
- Step 1: determine and use uniform interfaces.
- Step 2: use linked data to allow a client to navigate through a resource's state and relations.
- Step 3: add semantic "value" to the links.
- Step 4: create clients "in a way that decisions are based only in a resource representation relations, plus its media type understanding".
- Step 5: "code on demand teach [sic] clients how to behave in specific situations that were not foreseen, i.e. a new media type definition."
So is this 5-step approach better? Does it address the concerns of Subbu and others who state that the original model should not be used slavishly? Or is there a better approach out there?