When REST API Feels Slow: A Taste of gRPC
Why I Started Learning gRPC
While we were transitioning to a microservice architecture, service-to-service communication became a bottleneck. We were using REST APIs, but JSON parsing overhead was high and HTTP/1.1's limitations held performance back. "Is there a faster way to communicate?"
The team advised, "Try gRPC," and after adopting it, communication speed noticeably improved. The difference was especially clear when services exchanged large amounts of data.
The Confusion
The most confusing part was Protocol Buffers: what are they, and how do they differ from JSON?
Another confusion was HTTP/2: what does it actually change compared to HTTP/1.1?
And finally, when should you use gRPC and when REST? Both are APIs, so what criteria determine the choice?
The 'Aha!' Moment
The decisive analogy was "letter vs phone call."
REST = Letter:
- Human-readable format (JSON)
- Postal system (HTTP/1.1)
- Process one at a time
gRPC = Phone call:
- Machine-readable compressed format (Protobuf)
- High-speed dedicated line (HTTP/2)
- Multiple conversations simultaneously (multiplexing)
This analogy helped me understand. gRPC is a protocol optimized for machine-to-machine communication. If humans aren't directly viewing it, there's no need to use an easy-to-read format like JSON.
What is gRPC?
gRPC is a high-performance RPC (Remote Procedure Call) framework created by Google. It allows calling functions on remote servers as if they were local functions.
Core Features
- Protocol Buffers: Binary serialization format (smaller and faster than JSON)
- HTTP/2: Multiplexing, header compression, server push
- Multi-language support: Define once, auto-generate for multiple languages
- Bidirectional streaming: Client and server can send data simultaneously
Protocol Buffers
Protocol Buffers (Protobuf) is a method for serializing structured data. It plays a similar role to JSON, but because it is a binary format, payloads are much smaller and faster to parse.
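To make the size difference concrete, here is a rough sketch comparing the JSON text for a small user object with the equivalent Protobuf wire bytes, hand-built according to the Protobuf encoding rules (field 1 encoded as a varint, field 2 as a length-delimited string):

```javascript
// Rough wire-size comparison: JSON text vs hand-built Protobuf bytes
// for the same { id, name } message.
const json = JSON.stringify({ id: 1, name: 'John Doe' });
const jsonBytes = Buffer.byteLength(json, 'utf8'); // 26 bytes of text

const proto = Buffer.concat([
  Buffer.from([0x08, 0x01]),        // field 1 (id): tag 0x08, varint 1
  Buffer.from([0x12, 0x08]),        // field 2 (name): tag 0x12, length 8
  Buffer.from('John Doe', 'utf8'),  // the 8 name bytes
]);

console.log(jsonBytes, proto.length); // 26 vs 12 bytes
```

Field names, quotes, braces, and colons all disappear on the wire; only field numbers and values remain, which is where the payload savings come from.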
.proto File Definition
syntax = "proto3";

package user;

message User {
  int32 id = 1;
  string name = 2;
  string email = 3;
  repeated string tags = 4;
}

message GetUserRequest {
  int32 id = 1;
}

message ListUsersRequest {
  int32 page = 1;
}

service UserService {
  rpc GetUser (GetUserRequest) returns (User);
  rpc ListUsers (ListUsersRequest) returns (stream User);
}
Node.js Server Implementation
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

const packageDefinition = protoLoader.loadSync('user.proto');
const userProto = grpc.loadPackageDefinition(packageDefinition).user;

function getUser(call, callback) {
  const user = {
    id: call.request.id,
    name: 'John Doe',
    email: 'john@example.com',
    tags: ['developer']
  };
  callback(null, user);
}

const server = new grpc.Server();
server.addService(userProto.UserService.service, { getUser });
server.bindAsync(
  '0.0.0.0:50051',
  grpc.ServerCredentials.createInsecure(),
  () => {
    console.log('gRPC server running on port 50051');
    // Note: in recent @grpc/grpc-js versions server.start() is deprecated
    // and unnecessary; the server accepts calls once bindAsync completes.
    server.start();
  }
);
Node.js Client Implementation
const client = new userProto.UserService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);

client.getUser({ id: 1 }, (err, user) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log('User:', user);
});
gRPC vs REST
| Feature | gRPC | REST |
|---|---|---|
| Protocol | HTTP/2 | HTTP/1.1 |
| Format | Protobuf (binary) | JSON (text) |
| Performance | Faster | Slower |
| Type safety | Strong | Weak |
| Streaming | Bidirectional | Limited |
| Browser support | Limited | Full |
| Human readable | Difficult | Easy |
| Use case | Microservices | Public APIs |
When to Use gRPC?
gRPC is Suitable
- Microservice communication: Fast exchange of large data between services
- Real-time communication: Chat, notifications, real-time data streaming
- Mobile apps: Battery and bandwidth-constrained environments
- Multi-language environment: Services written in different languages need to communicate
REST is Suitable
- Public API: API used by external developers
- Web browser clients: Direct calls from browsers
- Debugging important: Humans need to read and test directly
- Caching needed: Utilize HTTP caching mechanisms
Practical Tips
Error Handling
function getUser(call, callback) {
  if (call.request.id <= 0) {
    return callback({
      code: grpc.status.INVALID_ARGUMENT,
      message: 'User ID must be positive'
    });
  }
  // ...
}
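On the client side, the same status codes can drive user-facing error handling. A minimal sketch: the numeric values below are the standard gRPC status codes (what `grpc.status.*` resolves to), so the snippet runs without the library, and `describeGrpcError` is an illustrative helper, not part of gRPC:

```javascript
// Standard gRPC status code numbers (grpc.status.* resolves to these).
const GRPC_STATUS = {
  INVALID_ARGUMENT: 3,
  DEADLINE_EXCEEDED: 4,
  NOT_FOUND: 5
};

// Map a gRPC error ({ code, message }) to a user-facing string.
function describeGrpcError(err) {
  switch (err.code) {
    case GRPC_STATUS.INVALID_ARGUMENT:
      return `Bad request: ${err.message}`;
    case GRPC_STATUS.DEADLINE_EXCEEDED:
      return 'Request timed out';
    case GRPC_STATUS.NOT_FOUND:
      return 'User not found';
    default:
      return `Unexpected error (code ${err.code})`;
  }
}

console.log(describeGrpcError({ code: 3, message: 'User ID must be positive' }));
// Bad request: User ID must be positive
```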
Metadata (Headers)
// Client
const metadata = new grpc.Metadata();
metadata.add('authorization', 'Bearer token123');

client.getUser({ id: 1 }, metadata, (err, user) => {
  // ...
});
Timeout
client.getUser(
  { id: 1 },
  { deadline: Date.now() + 5000 },
  (err, user) => {
    // ...
  }
);
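One thing worth noting: in @grpc/grpc-js the deadline option is an absolute point in time (a Date or milliseconds since the epoch), not a duration. A small helper keeps that distinction explicit (`deadlineIn` is an illustrative name, not a library function):

```javascript
// A deadline is "when to give up", not "how long to wait": grpc-js
// expects an absolute Date (or epoch milliseconds), so convert durations.
function deadlineIn(ms) {
  return new Date(Date.now() + ms);
}

// Usage with the client above: { deadline: deadlineIn(5000) }
console.log(deadlineIn(5000) instanceof Date); // true
```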
The Power of Streaming
One of gRPC's most powerful features is streaming. There are 4 types.
1. Unary RPC
Similar to regular REST API.
service UserService {
  rpc GetUser (GetUserRequest) returns (User);
}
2. Server Streaming RPC
Client makes one request, server sends multiple responses as stream.
service UserService {
  rpc ListUsers (ListUsersRequest) returns (stream User);
}

// Server
function listUsers(call) {
  const users = getUsersFromDB();
  users.forEach(user => {
    call.write(user); // Send one by one
  });
  call.end();
}

// Client
const call = client.listUsers({ page: 1 });

call.on('data', (user) => {
  console.log('Received:', user);
});

call.on('end', () => {
  console.log('All users received');
});
Use cases: Large data queries, real-time log streaming
3. Client Streaming RPC
Client sends multiple requests as stream, server responds once.
service UserService {
  rpc CreateUsers (stream User) returns (CreateUsersResponse);
}

// Server
function createUsers(call, callback) {
  const users = [];

  call.on('data', (user) => {
    users.push(user);
  });

  call.on('end', () => {
    // Save all users to DB
    saveUsersToDB(users);
    callback(null, { count: users.length });
  });
}

// Client
const call = client.createUsers((err, response) => {
  console.log(`Created ${response.count} users`);
});

// users: the array of User objects to upload
users.forEach(user => {
  call.write(user);
});
call.end();
Use cases: Bulk data upload, file upload
4. Bidirectional Streaming RPC
Client and server simultaneously exchange data as streams.
service ChatService {
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}

// Server
function chat(call) {
  call.on('data', (message) => {
    console.log('Received:', message);
    // Broadcast to all clients
    broadcast(message);
  });

  call.on('end', () => {
    call.end();
  });
}

// Client
const call = client.chat();

call.on('data', (message) => {
  console.log('Received:', message);
});

// Send messages
call.write({ user: 'Alice', text: 'Hello!' });
call.write({ user: 'Alice', text: 'How are you?' });
Use cases: Chat, real-time collaboration tools, games
gRPC-Web: Browser Support
gRPC doesn't work natively in browsers because browser JavaScript can't control HTTP/2 frames directly. gRPC-Web bridges the gap: a proxy (typically Envoy) translates between gRPC-Web requests from the browser and regular gRPC on the backend.
Proxy Setup (Envoy)
# envoy.yaml
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 8080
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: AUTO
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match:
                            prefix: "/"
                          route:
                            cluster: grpc_service
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: grpc_service
      connect_timeout: 0.25s
      type: LOGICAL_DNS
      http2_protocol_options: {}
      lb_policy: ROUND_ROBIN
      load_assignment:
        cluster_name: grpc_service
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: localhost
                      port_value: 50051
Browser Client
// Browser
import { GetUserRequest } from './generated/user_pb';
import { UserServiceClient } from './generated/user_grpc_web_pb';

const client = new UserServiceClient('http://localhost:8080');

const request = new GetUserRequest();
request.setId(1);

client.getUser(request, {}, (err, response) => {
  if (err) {
    console.error(err);
    return;
  }
  console.log('User:', response.toObject());
});
Performance Comparison: gRPC vs REST
Results from my actual measurements.
Test Environment
- Server: Node.js (Express vs gRPC)
- Data: 1000 user objects
- Network: Local (minimized latency)
Results
| Item | REST (JSON) | gRPC (Protobuf) | Difference |
|---|---|---|---|
| Payload size | 125 KB | 45 KB | 64% reduction |
| Response time | 180 ms | 60 ms | 67% faster |
| CPU usage | 35% | 18% | 49% reduction |
| Memory usage | 85 MB | 52 MB | 39% reduction |
The difference was even greater in mobile environments. Smaller payload meant reduced network costs and less battery consumption.
Wrapping Up
gRPC is an essential tool in the microservices era. It guarantees type safety with Protocol Buffers, provides high-performance communication with HTTP/2, and supports real-time communication with bidirectional streaming.
After introducing gRPC, service-to-service communication in our system sped up by more than 3x. The effect was especially significant in batch jobs exchanging large amounts of data: the JSON parsing overhead disappeared, and concurrent request processing became much more efficient thanks to HTTP/2 multiplexing.
However, it's difficult to call directly from browsers, so we still use REST for public APIs. Our team's current strategy is to use gRPC for internal service communication and REST for external APIs.
Key Lessons:
- Use gRPC for microservice communication
- Use REST for public APIs or browser clients
- Utilize bidirectional streaming for real-time communication
- Use Protocol Buffers when type safety is important
When I Use gRPC vs REST
gRPC Use Cases in My Projects
- Order Processing Service: Handles 10,000+ orders/day between microservices. gRPC reduced latency by 60% compared to REST.
- Real-time Analytics Pipeline: Streaming data from multiple sources. Bidirectional streaming was perfect for this.
- Mobile App Backend: Smaller payloads meant less mobile data usage and faster responses.
REST Use Cases in My Projects
- Public API: External developers need easy-to-use, well-documented APIs. REST with Swagger is perfect.
- Admin Dashboard: Web-based admin panel calls APIs directly from the browser. REST is simpler here.
- Webhooks: Third-party services send webhooks to our endpoints. REST is the standard.
Migration Strategy
If you're considering migrating from REST to gRPC:
Step 1: Start with Internal Services
Don't migrate everything at once. Start with high-traffic internal service communication.
// Before: REST
const response = await fetch('http://user-service/api/users/123');
const user = await response.json();

// After: gRPC (grpc-js clients are callback-based; wrap once with
// util.promisify to keep the same async/await ergonomics)
const { promisify } = require('util');
const getUser = promisify(userClient.getUser.bind(userClient));
const user = await getUser({ id: 123 });
Step 2: Measure Performance
Track metrics before and after:
- Response time
- Payload size
- CPU usage
- Memory usage
Step 3: Gradual Rollout
Use feature flags to gradually roll out gRPC:
if (useGrpc) {
  return await grpcClient.getUser({ id });
} else {
  return await restClient.getUser(id);
}
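A boolean flag is all-or-nothing; percentage-based rollout is a common refinement. A sketch (`makeRouter` and the modulo bucketing are illustrative, not from the original setup):

```javascript
// Route a configurable fraction of traffic to gRPC; the rest stays on
// REST. Bucketing by user ID keeps each user on a stable path across
// requests, which makes problems easier to reproduce and debug.
function makeRouter(grpcPercent) {
  return function route(userId) {
    const bucket = userId % 100; // stable bucket in [0, 99]
    return bucket < grpcPercent ? 'grpc' : 'rest';
  };
}

const route = makeRouter(20); // ~20% of users go through gRPC
console.log(route(5));  // grpc (bucket 5 < 20)
console.log(route(55)); // rest (bucket 55 >= 20)
```

Ramp `grpcPercent` up as confidence grows, watching the metrics from Step 2 at each stage.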
Step 4: Keep REST for Public APIs
Maintain REST endpoints for external clients while using gRPC internally.
Final Thoughts
gRPC isn't a silver bullet. It's a powerful tool for specific use cases. The key is understanding when to use it and when to stick with REST. In my experience, the sweet spot is:
- Internal microservices: gRPC
- Public APIs: REST
- Real-time features: gRPC streaming
- Simple CRUD: REST
The performance gains are real, but they come with added complexity. Make sure your team is comfortable with Protocol Buffers and has good tooling in place before committing to gRPC.
For teams just starting with microservices, I recommend beginning with REST for simplicity, then gradually introducing gRPC for high-traffic internal services once you have solid monitoring and debugging infrastructure in place. This pragmatic approach minimizes risk while maximizing the benefits of both technologies.
Remember: the best technology choice isn't always the newest or fastest—it's the one that best fits your team's capabilities and your system's actual needs. Start simple, measure everything, and evolve your architecture based on real data, not hype. Choose wisely, implement carefully, and iterate continuously.