The case for ROA
ROA applications offer clear advantages over other web service architectures. They are built on a simple, extremely scalable and highly standardised protocol: HTTP, the protocol that carries the entire web. They offer a way out of the SOAP/RPC swamp that impressive-on-paper web services so often get mired in when implemented. Best of all, the constraints imposed by REST ensure a clarity of design that is hard to achieve otherwise. As a consequence, the community loves REST: the Ruby-based Rails framework already supports it out of the box, and the popular Java framework Spring will do so soon.
Enterprises can benefit immensely from REST by breaking up their monolithic applications into small, specialised services which are cleanly architected, decoupled, easily integrated and which adhere to international standards. Yes, I know - SOAP 'enterprise web services' have promised this for a long time - but REST actually delivers. I could go on in this vein for hours, but several of my current and former colleagues at ThoughtWorks have already covered the topic in greater detail than I could hope to; see Jim Webber's blog and Duncan Cragg's 'The REST Dialogues', a nine-part dialogue with an imaginary eBay architect.
Yet despite all this, there remains the nagging problem of having the same application serve both usable HTML for human consumption and CRUD-only JSON or XML for other applications, without the code-base degenerating into spaghetti.
Two-stage ROA
One solution to this problem is to split the application into two stages: a REST 'engine' which exposes pure CRUD actions on resources and has no UI, and a separate UI application which consumes these services and provides a user-friendly interface that is free to deviate from CRUD. Case studies of such implementations are hard to come by, though, and this was a setup I had always wondered about.
Happily for me, I've spent the last nine months developing and integrating several such two-stage ROA applications, and I can say that while TANSTAAFL ('there ain't no such thing as a free lunch') applies, as it usually does, this approach works quite well. The best part was the ease of integration - it becomes almost trivial, even when the service and the consumer are on completely different platforms. The degree of flexibility is illustrated by the possible variations of this setup:
- single engine fronted by different UI stages to serve different categories of users
- single UI stage integrating with more than one engine to set up complex workflows
- multiple engines being integrated
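To make the split concrete, here is a minimal in-process sketch of the two stages. The names (`TaskEngine`, `TaskUI`, the 'task' resource) are hypothetical illustrations, not from any real system; in a real deployment the engine would sit behind HTTP, with each method mapped to the verb shown in its comment.

```python
import itertools

class TaskEngine:
    """The REST 'engine' stage: pure CRUD on one resource type, no UI."""
    def __init__(self):
        self._tasks = {}
        self._ids = itertools.count(1)

    def create(self, attrs):           # POST /tasks
        task_id = next(self._ids)
        self._tasks[task_id] = dict(attrs, id=task_id)
        return self._tasks[task_id]

    def read(self, task_id):           # GET /tasks/{id}
        return self._tasks[task_id]

    def update(self, task_id, attrs):  # PUT /tasks/{id}
        self._tasks[task_id].update(attrs)
        return self._tasks[task_id]

    def delete(self, task_id):         # DELETE /tasks/{id}
        del self._tasks[task_id]

class TaskUI:
    """The UI stage: consumes the engine and is free to deviate from CRUD."""
    def __init__(self, engine):
        self.engine = engine

    def complete_task(self, task_id):
        # A user-facing 'complete' action need not be CRUD-shaped;
        # underneath it is just an update on the resource.
        return self.engine.update(task_id, {"done": True})

ui = TaskUI(TaskEngine())
task = ui.engine.create({"title": "write post", "done": False})
ui.complete_task(task["id"])
```

Because the UI stage talks to the engine only through these four operations, swapping in a different UI (or pointing one UI at several engines) changes nothing on the engine side.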
Performance is something one must pay attention to (this is where TANSTAAFL kicks in), because the price of that highly flexible and decoupled architecture is a substantial increase in the number of HTTP requests made per user action. The approaches one can take are broadly similar to those used to ensure that database interactions don't become too chatty. After all, a REST engine supports the same four basic operations as a database.
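The database analogy can be sketched directly. The following toy engine (a hypothetical example, counting the round trips it would make over HTTP) shows how fetching N resources one GET at a time is the REST equivalent of an N+1 query, while a coarser list representation collapses it to a single request:

```python
class CountingEngine:
    """Toy REST engine that counts the HTTP round trips it would make."""
    def __init__(self, items):
        self.items = items
        self.requests = 0

    def get_ids(self):            # GET /items (ids only)
        self.requests += 1
        return list(self.items)

    def get_item(self, item_id):  # GET /items/{id}
        self.requests += 1
        return self.items[item_id]

    def get_all(self):            # GET /items (full representations)
        self.requests += 1
        return list(self.items.values())

engine = CountingEngine({i: {"id": i} for i in range(10)})

# Chatty: one request per resource - an N+1 query in HTTP clothing.
chatty = [engine.get_item(i) for i in engine.get_ids()]
print(engine.requests)  # 11

engine.requests = 0
# Coarse-grained: one request returning full representations.
batched = engine.get_all()
print(engine.requests)  # 1
```

The same remedies that tame chatty database code apply here: coarser representations, batching, and caching of GETs.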
Integration testing across service boundaries is another immature area at the moment, but work is already in progress to fill this gap.
Two-stage ROA architectures work, and quite well too. They integrate easily across application, platform and language boundaries. They encourage extremely clean designs through the constraints imposed by REST principles. Chatty workflows must be avoided or minimised, though, or performance will start to suffer.