Give REST a Rest with RSocket


Key Takeaways

  • Representational State Transfer (REST) has become the de facto standard for communicating between microservices. The author argues this is not a good thing — in fact, it’s a very bad thing, particularly for microservice communication.
  • REST was implemented as a hack on top of HTTP. An often-cited reason to use RESTful web services is that they’re easy to debug because they’re “human readable”. Not being easy to read, however, is a tooling issue.
  • Some of the things we would want in a protocol designed for microservice communication include binary serialization, bi-directional communication, multiplexing, and the ability to exchange metadata.
  • Engineers want the ability to process data as it arrives; they want to be able to stream data. Data that is sent via streams needs application-level flow control.
  • We need a modern material to replace HTTP for creating modern services. Open source RSocket is designed for services. It is a connection-oriented, message-driven protocol with built-in flow control at the application level.

Representational State Transfer (REST) has become the de facto standard for communicating between microservices. That is not a good thing — in fact, it’s a very bad thing. How did this come to pass? Well, at the time REST emerged, there were even worse options. When Roy Fielding proposed REST in 2000, it was a kale sandwich in a field of much worse-tasting sandwiches.

People were using SOAP, RMI, CORBA, and EJBs. JSON was a welcome respite from XML, and it was easy to use URLs to spit out some text. Plus, JavaScript was starting to take off in browsers, and it was much easier to deal with REST than with SOAP. Unlike the recent microservice trend, most applications were traditional monolithic three-tier applications. Most of their external traffic came from browsers, so when they had to expose something, REST was an easy choice. Many people also began to move from bigger commercial offerings like WebSphere to Jetty and Tomcat, which didn’t even have the facilities to deal with EJBs, so REST was a convenient choice.

What does this have to do with microservices? Early microservice pioneers moved to microservices for a different reason than people do today. They moved because they had to deal with massive scale: they had so many users that they couldn’t serve everything from a single monolith. And unlike many enterprises today, cost wasn’t the motivating factor — time was. They needed to get their services out yesterday. As they gained more and more users, their monolith wasn’t cutting it, so they cut their applications up into smaller pieces that they could deploy across thousands of servers, and eventually virtual machines.

Furthermore, they could deploy their applications very quickly. Companies that adopted this model were able to survive. During this race though, there wasn’t much time to consider what they were doing. These early pioneers had to deal with exponential user growth and competition, so it makes sense they would opt for tactical solutions. One of these was using REST to communicate between services.

Why REST Is Bad for Microservices

When you program an application, your programming language eventually ends up as machine code. This is obvious. Even “interpreted” languages like Java or JavaScript end up there as well: instead of compiling directly to machine code, they use a just-in-time (JIT) compiler. In some cases, JIT’ed code can be faster than what an engineer can write and tune by hand — VMs are truly a miracle of modern computer science.

Why then do we waste this miracle? Instead of sending binary messages optimized for machines, over a protocol optimized for services, we send messages optimized for humans. We send around things like JSON and XML using a protocol that was designed for sending books. Think how ridiculous this is: a binary program turns a binary structure into text, sends that text over the network to a machine that parses it and turns it back into a binary structure, which an application then processes.
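
To make that round trip concrete, here is a minimal sketch using the Jackson JSON library (the Order class is hypothetical, invented only for illustration):

```java
import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonRoundTrip {
    // Hypothetical payload type, for illustration only.
    public static class Order {
        public long id;
        public double amount;
    }

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();

        Order order = new Order();      // a binary structure, in memory
        order.id = 42L;
        order.amount = 99.95;

        String text = mapper.writeValueAsString(order);   // binary -> text
        // ...the text crosses the network as JSON bytes...
        Order back = mapper.readValue(text, Order.class); // text -> binary again

        System.out.println(text + " -> id=" + back.id);
    }
}
```

Both machines pay for formatting and parsing that neither of them needs.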

Avoiding cache misses on a modern CPU is critical. Unfortunately, parsing tons of JSON and Strings is going to cause cache misses!

An often-cited reason to use REST is that it’s easy to debug because it’s “human readable”. But not being easy to read is a tooling issue. JSON text is only human readable because there are tools that render it for you; otherwise it’s just bytes on a wire. Furthermore, half the time the data being sent around is compressed or encrypted, neither of which is human readable. Besides, how much of this can a person “debug” by reading? If a service averages a modest 10 requests per second, each carrying 1 kilobyte of JSON, that works out to roughly 860 megabytes of data a day, or 250 copies of War and Peace, every day. No one can read that, so you’re just wasting money.

Then there is the case where you need to send binary data around, or want to use a binary format instead of JSON. To do that, you must Base64-encode the data, which means you essentially serialize it twice — again, not an efficient way to use modern hardware.
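
A rough sketch of the double serialization: the raw bytes are first expanded into Base64 text, and that text is then serialized again inside a JSON document.

```java
import java.util.Base64;

public class DoubleSerialization {
    public static void main(String[] args) {
        byte[] image = new byte[3_000]; // stand-in for real binary data

        // First pass: raw bytes -> Base64 text, roughly 33% larger.
        String b64 = Base64.getEncoder().encodeToString(image);

        // Second pass: the Base64 text is embedded in a JSON document.
        String json = "{\"image\":\"" + b64 + "\"}";

        // Prints: 3000 raw bytes became 4012 JSON characters
        System.out.println(image.length + " raw bytes became "
                + json.length() + " JSON characters");
    }
}
```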

At the end of the day, REST was implemented as a hack on top of HTTP, and HTTP in turn is being used as a hack of a transport for moving data between services. HTTP was designed to schlep books around the Internet; it shouldn’t be used for services to communicate with one another. Instead, use a format that is optimized for your application — the thing that is processing all the data.

What Is Good Microservice Communication?

If we suppose for a moment that REST isn’t the best choice for service-to-service communication, then what is? Let’s look at some of the things we would want in a protocol designed for microservice communication.

For starters, we want things to be bi-directional. That’s a huge problem with REST — clients can only call servers. When both sides have equal ability to call each other, you can create interactions between applications in a natural manner. Otherwise you are forced to devise clunky workarounds such as long-polling to simulate server-initiated calls. You can partially get around it with HTTP/2, but the call still needs to be initiated by the client. What you want is the ability for clients and servers to be free to call each other as necessary.
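
RSocket, covered later in this article, is built around exactly this symmetry. As a hedged sketch using the rsocket-java library (API names as of its 1.1.x line; host and port are arbitrary): a client attaches its own responder when it connects, so the server can issue requests back over the same connection.

```java
import io.rsocket.Payload;
import io.rsocket.RSocket;
import io.rsocket.SocketAcceptor;
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Mono;

public class BiDirectionalClient {
    public static void main(String[] args) {
        RSocket rSocket = RSocketConnector.create()
            // The client is also a responder: the server may call it at any time.
            .acceptor(SocketAcceptor.forRequestResponse(payload ->
                Mono.just(DefaultPayload.create("client answers: " + payload.getDataUtf8()))))
            .connect(TcpClientTransport.create("localhost", 7000))
            .block();

        // The same connection still carries ordinary client-initiated requests.
        String reply = rSocket.requestResponse(DefaultPayload.create("ping"))
            .map(Payload::getDataUtf8)
            .block();
        System.out.println(reply);
    }
}
```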

Another requirement is that the connection between services must support multiple requests on the same connection, at the same time. This is called multiplexing. With a single connection, there needs to be some way to distinguish one request from another; this is unlike HTTP, where one request starts when another ends. With multiplexing, you need to keep track of the different requests. A good way to do this is to represent each request as a binary frame. Each frame can hold the request, as well as metadata about the request, which can then be used to route the frame to the correct location.
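
As a toy illustration of framing (real protocols define their own binary layouts; the field sizes here are arbitrary): every frame carries a stream id, so frames belonging to different requests can be told apart when they interleave on one connection.

```java
import java.nio.ByteBuffer;

// Toy frame layout, for illustration only:
// [streamId][metadata length][metadata][data]
public class Frame {
    public static ByteBuffer encode(int streamId, byte[] metadata, byte[] data) {
        ByteBuffer buf = ByteBuffer.allocate(4 + 4 + metadata.length + data.length);
        buf.putInt(streamId);        // which logical request this frame belongs to
        buf.putInt(metadata.length); // lets the receiver split metadata from data
        buf.put(metadata);           // metadata about the request (routing, tracing, ...)
        buf.put(data);               // the request payload itself
        buf.flip();                  // ready to be written to the wire
        return buf;
    }
}
```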

When sending data over a single connection, you also need the ability to fragment requests. A large request would otherwise block all the other requests behind it, a problem known as head-of-line blocking. What is needed instead is to fragment requests into smaller pieces and send those over the network. Since the data is framed, it can be broken into smaller frame fragments and reassembled on the other side. This way, requests can interleave with each other, and a large request can no longer block a smaller one. This makes for a much more responsive system.
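
Continuing the toy example above, a fragmenter might cut a large payload into fixed-size pieces (the 16 KB limit is an arbitrary choice for the sketch) so that frames from other streams can be interleaved between them:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class Fragmenter {
    static final int MAX_FRAGMENT = 16 * 1024; // arbitrary fragment size

    public static List<byte[]> fragment(byte[] payload) {
        List<byte[]> fragments = new ArrayList<>();
        for (int offset = 0; offset < payload.length; offset += MAX_FRAGMENT) {
            int end = Math.min(offset + MAX_FRAGMENT, payload.length);
            fragments.add(Arrays.copyOfRange(payload, offset, end));
        }
        // The receiver reassembles the fragments, in order, per stream id.
        return fragments;
    }
}
```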

Also, the ability to exchange metadata about a connection is useful. Sometimes there is data to send that isn’t necessarily part of a business transaction — things like configuring the overall tracing level or exchanging information for dictionary-based compression. These are things that don’t have to do with business logic but could be controlled at a connection level. The ability to exchange metadata would provide for that.

Often in application code, a function or method will be called that takes a list, returns a list, or both. This happens in microservices all the time, as well. REST doesn’t deal with these situations well and this leads to all sorts of hacks and complexity.

What’s needed is a protocol that can deal with iterative data easily and naturally — like you do in your application. It doesn’t make sense to read an entire list of data, process it and then return a list of data once everything is processed. What you want is the ability to process data as it comes. You want to be able to stream data. If there is a long list of data, you don’t want to wait for that data to be processed — you want to send the data off as it becomes available and get the responses back as they occur.

This will create a much more responsive system. It can be used for all sorts of things, from reading bytes from a file and streaming them over the network, to returning results from a database query, to feeding browser click-stream data to a back-end. If first-class streaming support is present in the protocol, it’s not necessary to bolt on another system like Spark to do stream processing, nor to include something like Kafka unless you want to store data.
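
In Java, this style of processing is what the Reactive Streams libraries express. A minimal sketch with Project Reactor (the library rsocket-java itself builds on): each element is handled as it arrives, instead of waiting for a complete list.

```java
import reactor.core.publisher.Flux;

public class StreamingSketch {
    public static void main(String[] args) {
        // Instead of List<String> process(List<String> in),
        // model the exchange as a stream.
        Flux.just("click-1", "click-2", "click-3") // stand-in for rows, bytes, events...
            .map(String::toUpperCase)              // each item is processed as it arrives
            .subscribe(System.out::println);       // results flow out as they are produced
    }
}
```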

For data that is sent via streams, the next thing needed is application flow control. Byte-level flow control works for something like TCP because, from the perspective of the network card, every byte is the same size and costs about the same to process. In an application, however, not every message has the same cost: one message of 10 kilobytes might take 10 milliseconds to process, while another of 10 bytes might take 10 seconds.

Another scenario found in microservices is that a downstream service processes data more slowly than it can be delivered. Because the service keeps reading from its sockets, the TCP buffers are never full and TCP’s flow control never engages. There needs to be some other way to control the flow of traffic to keep from overwhelming downstream services and to keep them responsive.

The application must be able to control the rate at which messages flow, independent of the underlying network bytes. For an application developer, it is difficult to reason about how many bytes a message occupies, especially across languages; it is simple to reason about how many messages are being sent. This way, the service can arbitrate between network flow control and application flow control. Sometimes an application can process data faster than the network can deliver it, and other times the network can deliver data faster than the application can process it. Having application flow control ensures that tail latency stays stable — again creating a responsive application. It also prevents the need for unbounded queues, a dangerous hack found in other systems.
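
This is the Reactive Streams model, in which a consumer signals demand in messages rather than bytes; it is also the flow-control model RSocket adopts. A small sketch using Project Reactor:

```java
import reactor.core.publisher.Flux;

public class MessageFlowControl {
    public static void main(String[] args) {
        Flux.range(1, 1_000)  // a fast producer
            .limitRate(16)    // demand is expressed as 16 *messages* at a time
            .subscribe(MessageFlowControl::slowProcess);
    }

    static void slowProcess(int message) {
        // However slow this is, the producer never runs ahead of the
        // requested demand, regardless of how many bytes each message is.
    }
}
```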

As mentioned above, a huge drawback of RESTful web services is that they are, de facto, text-based. To send any binary data, you must Base64-encode it and serialize everything twice. What you really want is something that is binary, because binary can represent anything, including text. It is also significantly more efficient for your application to process binary data than text, especially numbers, and binary encodings are naturally more compact: they don’t carry extra braces, curly brackets, or angle brackets. Finally, if your data is binary, there is also the possibility of zero-copy serialization and deserialization, depending on the format. That is a little out of the scope of this article, but check out things like Simple Binary Encoding (SBE) and FlatBuffers. They are significantly faster than using JSON.
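
A toy comparison of the same value as JSON text versus a fixed-width binary field (formats like SBE and FlatBuffers are industrial-strength versions of this idea):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class BinaryVsText {
    public static void main(String[] args) {
        long timestamp = 1_702_934_400_123L;

        // Text: the number is spelled out inside a JSON field and must be parsed.
        byte[] asJson = ("{\"ts\":" + timestamp + "}").getBytes(StandardCharsets.UTF_8);

        // Binary: a fixed-width field at a known offset, read directly into a long.
        byte[] asBinary = ByteBuffer.allocate(Long.BYTES).putLong(timestamp).array();

        // Prints: 20 bytes of JSON vs 8 bytes of binary
        System.out.println(asJson.length + " bytes of JSON vs "
                + asBinary.length + " bytes of binary");
    }
}
```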

Finally, you want to be able to send your requests over different transports. RESTful web services typically use HTTP, which runs only over TCP. What you really want is a way to abstract the networking away, so that you program to a specification and don’t have to worry about the transport. At the same time, if your application is talking to browsers, it should be able to run over WebSocket. You should not have to switch to a new networking toolkit every time you change where your application is deployed; it should be easy to swap out transports without any application changes.
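
As a hedged sketch of what that looks like in practice: rsocket-java (discussed below) exposes a ClientTransport abstraction, so moving from TCP to WebSocket changes one argument rather than the application.

```java
import io.rsocket.core.RSocketConnector;
import io.rsocket.transport.ClientTransport;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.transport.netty.client.WebsocketClientTransport;

public class TransportSwap {
    public static void main(String[] args) {
        // The application programs against the ClientTransport abstraction...
        ClientTransport tcp = TcpClientTransport.create("localhost", 7000);
        ClientTransport ws  = WebsocketClientTransport.create("localhost", 8080);

        // ...so swapping transports is a one-line change, not a rewrite.
        RSocketConnector.create()
            .connect(tcp /* or ws */)
            .block();
    }
}
```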

Which Protocol Fits the Bill?

Some would suggest that REST over HTTP/2 is a better fit. HTTP/2 is better than HTTP/1, but if you read the spec, its sole purpose is to be a better web browser protocol. That is what it should be used for — serving HTML to web browsers. It was never designed or intended for microservice communication. Furthermore, you still must deal with URLs and with matching the different HTTP methods to your application; those methods were never really intended for server-to-server communication.

HTTP/2 does provide streaming, but only in the form of server push. So, using REST over HTTP/2 still requires the client to initiate a request before the server can push data back. And HTTP/2’s flow control is byte-based, which is good for a web browser but not for an application: there is still no way to control the flow of messages based on the work the application is actually doing.

There has been a lot of noise lately about using gRPC. gRPC is very similar in concept to SOAP: instead of using XML to define services, it uses Protobuf. Like SOAP, it’s a hodge-podge of URL and header magic, this time using HTTP/2. This means gRPC is explicitly tied to HTTP/2, a protocol designed for web browsers. And what is worse, it isn’t supported in web browsers.

Instead you must use a proxy to turn your gRPC calls into REST calls, thus defeating the purpose of using it. This highlights how poorly designed gRPC is: why build on HTTP/2 and then not make sure it works in a browser? You are forever limited by HTTP/2’s original purpose, yet unable to use gRPC where that purpose applies. This leads to my next point: the biggest limitation of REST is the fact that it’s tied to HTTP.

What you want is a protocol that is designed for service-to-service communication. Using a protocol specifically designed for services to talk to each other will create significantly simpler and more reliable applications. There will not be any hacks, workarounds, or impedance mismatches.

Construction materials are a good analogy. Wood is great for building small bridges: you can span a small stream or creek without a problem. When engineers started using wood to span wider distances, things got complicated. Such wooden bridges worked, but they had a very high failure rate compared to modern bridges made of better materials, and they were very complicated and took much, much longer to build. This is why we now use steel and concrete: they are easier to maintain, cheaper to build, last longer, and can span far greater distances.

We need a modern material to replace HTTP for building modern services, and the open-source RSocket protocol is designed to be exactly that. It is a connection-oriented, message-driven protocol with built-in flow control at the application level. It works equally well in a browser and on a server; in fact, a web browser can even serve traffic to backend microservices. It is also binary, so it works equally well with text and binary data, and payloads can be fragmented. Finally, it models all the interactions you perform in your application as network primitives, which means you can stream data or do Pub/Sub without having to set up an application queue.
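
To make this concrete, here is a hedged sketch of a request-stream interaction using the rsocket-java library (API names as of its 1.1.x line; the host, port, and payloads are arbitrary):

```java
import io.rsocket.Payload;
import io.rsocket.SocketAcceptor;
import io.rsocket.core.RSocketConnector;
import io.rsocket.core.RSocketServer;
import io.rsocket.transport.netty.client.TcpClientTransport;
import io.rsocket.transport.netty.server.TcpServerTransport;
import io.rsocket.util.DefaultPayload;
import reactor.core.publisher.Flux;

public class RSocketStreamDemo {
    public static void main(String[] args) {
        // Server: answers each request with a stream of messages.
        RSocketServer.create(SocketAcceptor.forRequestStream(request ->
                Flux.range(1, 5).map(i -> DefaultPayload.create("tick " + i))))
            .bind(TcpServerTransport.create("localhost", 7000))
            .block();

        // Client: consumes the stream as it arrives, with message-level flow control.
        RSocketConnector.connectWith(TcpClientTransport.create("localhost", 7000))
            .flatMapMany(rSocket -> rSocket.requestStream(DefaultPayload.create("start")))
            .map(Payload::getDataUtf8)
            .doOnNext(System.out::println)
            .blockLast();
    }
}
```

The same connection could just as easily carry fire-and-forget messages, request/response calls, or bi-directional channels, since those interaction models are network primitives in the protocol.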

REST is a decent solution where it makes sense. One place it doesn’t make sense is microservices. Distributed systems are difficult enough on their own. The last thing that we need is to make them more complex by using something not designed for them.

About the Author

Robert Roeser is co-founder and CEO of Netifi. He is a 10-year veteran of distributed real-time systems who has led large-scale technical projects at Netflix and Nike.
