In his new post, Slow REST, Tim Bray tries to answer the question:
In a RESTful context, how do you handle state-changing operations (POST, PUT, DELETE) which have substantial and unpredictable latency?
Tim describes three different approaches for this situation, developed as part of Project Kenai in the form of a proposal, Handling Asynchronous Operation Requests. These approaches include:
- Resource-based approach, which introduces
A new "Status" resource model..., with the following fields:
- "uri" - URI upon which the client may perform GET operations to poll for completion. Each accepted asynchronous operation will receive a unique status URI, so that multiple operations may be initiated and tracked at once.
- "status" - Integer code describing the completion status (0=success, nonzero=error code), returned only when "progress" returns 100.
- "message" - Message suitable for reporting completion status to a human user, returned only when "progress" returns 100.
- "progress" - Integer percent completed indicator, which MUST return 100 only when the operation has been completed (either successfully or unsuccessfully).
This resource object can be used as follows:
For any and all PUT/POST/DELETE operations, we return "202 In progress" and a "Status" resource, ... designed to give a hook that implementers can make cheap to poll.
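To make the field descriptions concrete, here is an illustrative sketch of the two representations of the proposed "Status" resource as JSON. The field names come from the proposal; the URI and all values are made-up examples, not part of it.

```python
import json

# Intermediate representation: only "uri" and "progress" are meaningful yet.
status_in_progress = {
    "uri": "https://example.com/status/42",  # hypothetical polling URI
    "progress": 40,                          # percent complete; 100 only when done
}

# Final representation: "status" and "message" appear once "progress" is 100.
status_final = {
    "uri": "https://example.com/status/42",
    "progress": 100,  # operation finished (successfully or unsuccessfully)
    "status": 0,      # 0 = success, nonzero = error code
    "message": "Operation completed successfully.",
}

print(json.dumps(status_final, indent=2))
```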
- Comet-style implementation - keeping the HTTP channel open for the duration of a long-running request.
- Initial response MUST have HTTP status 202 ("Accepted"),... and entity body containing the initial Status resource for this operation. In the status resource, the "uri" and "progress" fields MUST be populated, and the "progress" field MUST contain a value of 0 indicating that the operation is beginning.
- The URI value returned in the initial response MUST respond to GET requests by returning an updated version of the Status resource. Typically, the "progress" field will be increased towards 100, but MUST NOT be set to 100 until the operation completes.
- When the operation has completed (either successfully or unsuccessfully), a "final" representation of the Status resource MUST be returned, with a "progress" field set to 100, and a "status" field set to 0 (for successful completion) or a non-zero value for unsuccessful completion.
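The rules above boil down to a simple client-side loop: take the initial Status resource from the 202 response, then GET its "uri" until "progress" reaches 100. A minimal sketch, with the HTTP fetch injected as a callable so the loop itself is transport-agnostic (that separation is an assumption made here for clarity, not part of the proposal):

```python
import time

def poll_until_done(fetch, status, interval=2.0, sleep=time.sleep):
    """Poll a Status resource until its final representation arrives.

    fetch(uri) must return the parsed Status resource (a dict) for a GET
    on the given URI; status is the initial resource from the 202 response.
    """
    while status["progress"] < 100:     # 100 only when the operation is done
        sleep(interval)                 # be "cheap to poll": don't hammer the server
        status = fetch(status["uri"])   # GET an updated Status resource
    return status                       # check "status": 0 means success
```

In real use, `fetch` would wrap an HTTP GET (e.g. via `urllib.request`) and the caller would inspect the returned "status" and "message" fields.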
- "Web hooks" - using two independent "one-way" invocations: one to start a long-running operation and the other to notify the requester when it has completed.
- The inbound representation of the operation request MAY contain a "webhook" field, whose value is a URI where the client expects a callback. If this field is not present, no webhook callback will be performed.
- When the operation has completed (either successfully or unsuccessfully), the server will perform a POST request to the webhook URI, with... an entity body containing the final Status resource for this operation.
- Client can match a completion report back to the original request by comparing the "uri" field value to the one returned in the initial Status response, or by providing unique webhook URIs for each asynchronous request.
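The matching step in the last bullet can be sketched as a small in-memory registry keyed by the Status "uri" field. Everything here (the registry, the function names) is an illustrative assumption; only the matching rule comes from the proposal:

```python
# Map: status URI -> description of the original asynchronous request.
pending = {}

def register(initial_status, request_info):
    """Record an operation after receiving the initial 202 + Status response."""
    pending[initial_status["uri"]] = request_info

def on_webhook(final_status):
    """Handle the server's POST of the final Status resource to our webhook URI.

    Returns the matched original request (or None) and a success flag.
    """
    request_info = pending.pop(final_status["uri"], None)  # match by "uri"
    succeeded = final_status["status"] == 0                # 0 = success
    return request_info, succeeded
```

A real client would call `register` right after initiating the operation and wire `on_webhook` into whatever HTTP handler serves the webhook URI; using a unique webhook URI per request, as the proposal suggests, would make the registry unnecessary.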
Tim finishes his post by asking whether the whole:
... "Slow REST" thingie is a pattern that’s going to pop up again often enough in the future that we should be thinking of a standardized recipe for approaching it.
Tim’s post drew quite a few responses, including an interesting one from William Vambenepe, who compares the problems and solutions described in Tim’s post with those defined by WS-* standards, specifically WSRF and WS-Notification. According to William:
WSRF doesn't quite cover this use case (slow create), but between WS-ResourceLifetime and WS-Notification you see somewhat similar use cases at work (which BTW you may run into next). Add to this WS-MakeConnection (part of WS-RX) and your idea of "web hooks" becomes a lot more practical... I always had the intuition that "REST" and "WS-(Death)Star" would come a lot closer to one another.
In spite of many differences (some real, some religious) between REST and WS-*, both camps aim to solve real-life problems and consequently face the same challenges. Learning from each other's experiences and implementations will definitely benefit both.