

By Geit Vinglas, Dec 28, 2020

The question is how we prefer computer systems to communicate with each other.

REST has been one of the main communication pathways between services for around two decades. The approach is simple and has become the de facto standard for communication, known and taught to probably every developer there is. But sometimes it’s good to ask yourself - could we do better? Our answer is yes, we could - with gRPC.


Communication in gRPC is a bit different than in REST. Clients can call server methods as if those methods existed in the client’s own code base, making it look like there isn’t any communication happening between external parties at all. This makes development a lot easier and keeps users from making mistakes when calling or receiving data. In gRPC, the server defines and implements the interface, while the client generates stubs from the protocol buffer definition.

Figure 1. gRPC communication diagram

Additionally, the client and server don’t need to be written in the same language. gRPC already has official support in 11 languages, including Java, Go, Kotlin, Ruby, PHP and other common languages. However, on this front we need to give the points to REST, because it is supported by far more languages - more than Google can even show on the first page of search results.

Protocol buffer vs JSON - which format to prefer?

One of the biggest differences between REST and gRPC is the data transfer format. REST primarily uses JSON, while gRPC uses protocol buffers. Protocol buffers are like XML, but actually usable. JSON has the upside of being human-readable, whereas protocol buffers are a binary format. It may come as a surprise, but human readability has a downside – it costs more in data packet size, which makes each request bulkier. So gRPC loses on readability but wins on speed, as it’s noticeably faster.

Using protocol buffers, gRPC brings formal format and structure validation. The formats are self-describing, and data is validated automatically by the generated methods. And it’s quite easy to implement.

If a new client wants to connect to the service, they take the .proto file and generate the methods for their own language.

Defining a request and response object with its fields is also relatively simple. You have to define each field’s type and its position. In Figure 2, we can see an EmailRequest whose string field body is the third field in the request object.

Figure 2. email request JSON
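Figure 2 isn’t reproduced here, but a message like the one described - with a string field body in the third position - would look roughly like this in a .proto file (the other field names and types are assumptions, not the article’s actual definitions):

```protobuf
syntax = "proto3";

// Hypothetical reconstruction of the request message from Figure 2.
message EmailRequest {
  string recipient = 1; // assumed first field
  string subject   = 2; // assumed second field
  string body      = 3; // "body" is the third field, as described in the text
}
```

The numbers are field positions, not values - they determine where each field sits in the binary wire format. A new client could then generate its methods with, for example, `protoc --java_out=. email.proto`.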

After creating request and response objects, we can create definitions for the methods.

We have defined a service with two methods in Figure 3. One is a regular request–response method; the other uses streams for both request and response. This allows requests and responses to be handled in parallel – the client won’t wait for a response from the server, as responses are handled in another thread.

Figure 3. gigantic messaging service
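The service from Figure 3 isn’t reproduced here either, but a gRPC service with one regular and one bidirectionally streamed method would be declared roughly like this (method and message names are assumptions):

```protobuf
syntax = "proto3";

service GiganticMessaging {
  // Regular request–response: the client sends one request and waits
  // for one response.
  rpc SendEmail (EmailRequest) returns (EmailResponse);

  // Bidirectional streaming: requests and responses flow independently,
  // so the client can keep sending while responses arrive in parallel.
  rpc SendEmailStream (stream EmailRequest) returns (stream EmailResponse);
}
```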

Which is faster?

gRPC boasts about its speed and about data compression that lowers transfer sizes. So, let’s put these claims to the test against good old REST.

For testing, we created two systems - gigantic-billing (written in Go) and gigantic-messaging (written in Java). We set up a hypothetical model in which gigantic-billing wants to send its clients invoices at the end of the month, so it sends messaging requests to the gigantic-messaging module. In this case, the messaging service is the server and billing is the client.

To put the two up against each other, we tested three different ways of communicating. First, REST – a POST request with a JSON payload. Second, gRPC in its most basic form – a normal request/response, most similar to REST. And third, gRPC bidirectional streaming – both requests and responses streamed between server and client. For data, 100 000 different requests were generated and handled asynchronously, with the exception of gRPC bidirectional streaming, where request handling was separate from response handling, so requests and responses ran in parallel. Both projects ran on the same machine.

Table 1. timing of requests

Request type           Average time per call    Total time
REST POST json         638.067µs                63.807s
gRPC stream request    171.027µs                17.103s
gRPC stream response   171.199µs                17.120s

Now, let’s take this data with a grain of salt. First, both REST and gRPC were running just as they come out of the box, with no tuning. Second, this is close to the best-case scenario for gRPC, which thrives on packets with more data. However, we see payloads of this size moving through our production environments most of the time, so it still reflects most of our use cases at Concise.

As we can see, gRPC beat REST in this race by a factor of two, which is quite significant. And with streaming, the speedup was even greater – almost four times faster than REST.

We should mention that when we add up the stream request and response times, we get about the same as a normal gRPC call – the extra speed comes purely from being able to process both in parallel. Still, this gives us a staggering total time of just over 17s for all calls in streaming mode.

Should we switch to gRPC?

In the end, we found that gRPC is pretty good at what it says it does – communication between microservices. I don’t see any reason not to use it internally. Externally, for communication with third parties, it is harder to justify: the probability that a third party already knows how to use gRPC is quite low. But when we can speed up communication between our internal services, that alone is a big step ahead.

One big drawback of gRPC is that the data is not human-readable while in transit between the applications. This makes debugging harder if you don’t have access to one side or the other of the communication. With REST, you have JSON that anyone understands, and debugging is not a problem.

However, such issues shouldn’t come up often, since every client generates its methods from the same .proto file. Network errors (timeouts, unreachable server, etc.) must be handled, but beyond that, no other surprises should come up. If they do, you debug the application the same way you would debug anomalies between two method calls within a single application.

So, if it’s a third party - use REST. If it’s an internal microservice, give gRPC a go. Hopefully this small test gave you the idea to try it out.

Both test services can be found on GitHub: