
gRPC vs REST vs GraphQL: How to Choose Your API Protocol
Why REST still dominates, what GraphQL actually solves, and when gRPC really shines. A practical breakdown of all three API protocols with real code and decision criteria.


I watched this happen. A team was designing an API for a new service, and three backend engineers each advocated for REST, GraphQL, and gRPC respectively. Forty-five minutes later, the conclusion was "let's just go with REST."
Two years later, as the service grew and mobile clients joined, over-fetching became a real problem. They brought in GraphQL. Then internal service calls multiplied and gRPC came in. They ended up using all three.
If that meeting had asked "what is each one for?" instead of "which one is best?", they'd have saved 45 minutes and a lot of back-and-forth.
REST (Representational State Transfer) was defined by Roy Fielding in his 2000 PhD dissertation. A quarter century later, industry surveys still put it behind 80%+ of public APIs. Why?
REST uses HTTP methods (GET, POST, PUT, DELETE, PATCH) and status codes (200, 404, 500) directly. Anyone who knows HTTP understands REST without extra learning.
GET /users → list users
GET /users/123 → get specific user
POST /users → create user
PUT /users/123 → replace user
PATCH /users/123 → update user partially
DELETE /users/123 → delete user
# Testable by anyone immediately
curl -X GET https://api.example.com/users/123 \
  -H "Authorization: Bearer token123"
# Response
{
  "id": 123,
  "name": "Dev Kim",
  "email": "dev@example.com",
  "role": "admin",
  "createdAt": "2025-01-01T00:00:00Z"
}
REST leverages HTTP cache headers out of the box. CDNs, proxies, browser caches — all work.
HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "abc123"
Last-Modified: Mon, 01 Jan 2025 00:00:00 GMT
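Those headers enable conditional requests: a client that already holds the ETag sends it back as If-None-Match, and the server answers 304 Not Modified with no body. A minimal sketch of that check in plain JavaScript (handler shape and the `version` field are hypothetical, not a real framework API):

```javascript
// Minimal conditional-GET sketch: compare the client's If-None-Match
// header against the resource's current ETag and short-circuit with 304.
function handleGet(requestHeaders, resource) {
  const etag = `"${resource.version}"`; // ETags are quoted strings
  if (requestHeaders["if-none-match"] === etag) {
    // Client's cached copy is still fresh: no body, just the status
    return { status: 304, headers: { ETag: etag }, body: null };
  }
  return {
    status: 200,
    headers: { ETag: etag, "Cache-Control": "max-age=3600" },
    body: JSON.stringify(resource.data),
  };
}

const user = { version: "abc123", data: { id: 123, name: "Dev Kim" } };
console.log(handleGet({}, user).status); // 200
console.log(handleGet({ "if-none-match": '"abc123"' }, user).status); // 304
```

This is the mechanism CDNs and browsers use for free with REST, and the one GraphQL gives up by tunneling everything through POST.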
Over-fetching: You get more data than you need.
// Need just name and avatar but you get:
GET /users/123
→ {
  "id": 123,
  "name": "Dev Kim",
  "email": "...",
  "phone": "...",
  "address": "...",
  "preferences": { ... },
  "createdAt": "..."
}
Under-fetching: Multiple round trips to build one screen.
To render a user profile page:
GET /users/123 → user info
GET /users/123/posts → posts
GET /users/123/followers → follower count
GET /users/123/following → following count
= 4 requests
Version management: /v1/, /v2/ prefixes clutter URLs and become a maintenance burden.
GraphQL was open-sourced by Facebook in 2015. Built to solve data fetching problems in mobile apps. Core idea: clients declare exactly what data they need.
query GetUserProfile($id: ID!) {
  user(id: $id) {
    name
    avatar
    posts(last: 5) {
      title
      createdAt
    }
    followerCount
  }
}
{
  "data": {
    "user": {
      "name": "Dev Kim",
      "avatar": "https://...",
      "posts": [
        { "title": "...", "createdAt": "..." }
      ],
      "followerCount": 1024
    }
  }
}
One request. Only what you asked for. Over/under-fetching solved.
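On the wire, that query is usually a single HTTP POST to one endpoint, with the query string and variables in a JSON body. A small sketch of building such a request (the shape matches what Apollo and most GraphQL servers accept; the endpoint it would target is hypothetical):

```javascript
// GraphQL transport: one POST, query text + variables in a JSON body.
function buildGraphQLRequest(query, variables) {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

const req = buildGraphQLRequest(
  `query GetUserProfile($id: ID!) { user(id: $id) { name avatar } }`,
  { id: "123" }
);
// Ready to hand to fetch("https://api.example.com/graphql", req)
console.log(JSON.parse(req.body).variables.id); // logs 123
```

Note that the URL no longer identifies the resource; the body does. That single design choice is what makes GraphQL flexible and what breaks URL-keyed HTTP caching.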
GraphQL has a strong type system. The schema is both documentation and contract.
type User {
  id: ID!
  name: String!
  email: String!
  posts: [Post!]!
  followerCount: Int!
}

type Query {
  user(id: ID!): User
  posts(limit: Int, offset: Int): [Post!]!
}

type Mutation {
  createPost(title: String!, content: String!): Post!
}

type Subscription {
  postCreated: Post!
}
// Server resolvers (Node.js + Apollo)
const resolvers = {
  Query: {
    user: async (_, { id }, context) => {
      return await context.db.users.findById(id);
    },
  },
  User: {
    // Solve N+1 with DataLoader
    posts: async (user, _, context) => {
      return await context.loaders.postsByUser.load(user.id);
    },
  },
};
N+1 problem: Nested resolvers can explode DB queries. DataLoader solves it, but requires extra setup.
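DataLoader batches asynchronously within one event-loop tick; the sketch below strips away that machinery and shows just the payoff: N per-user queries collapse into one `WHERE user_id IN (...)`-style query. The in-memory table, counter, and function names are all hypothetical:

```javascript
// In-memory stand-in for a posts table, plus a counter to expose N+1.
const postsTable = [
  { userId: "1", title: "a" },
  { userId: "2", title: "b" },
  { userId: "2", title: "c" },
];
let queryCount = 0;

// Naive resolver: one query per user → N queries for N users.
function postsPerUserNaive(userIds) {
  return userIds.map((id) => {
    queryCount++; // each user triggers its own query
    return postsTable.filter((p) => p.userId === id);
  });
}

// Batched resolver: collect keys, run ONE query, fan results back out.
function postsPerUserBatched(userIds) {
  queryCount++; // a single IN (...) query for the whole batch
  const rows = postsTable.filter((p) => userIds.includes(p.userId));
  return userIds.map((id) => rows.filter((p) => p.userId === id));
}

queryCount = 0;
postsPerUserNaive(["1", "2", "3"]);
console.log(queryCount); // 3 — one query per user

queryCount = 0;
postsPerUserBatched(["1", "2", "3"]);
console.log(queryCount); // 1 — one query for the whole batch
```

The real library adds the per-request cache and tick-level scheduling, but the contract is the same: load keys individually, resolve them in one batch.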
Caching is hard: Queries go via POST, so HTTP caching doesn't work out of the box.
Learning curve: Schema design, resolver implementation, DataLoader patterns — steep at first.
Malicious queries: Without depth and complexity limits, a deeply nested query can kill your server.
# This will destroy an unprotected server
query Evil {
  users {
    posts {
      comments {
        author {
          posts {
            comments { ... }
          }
        }
      }
    }
  }
}
Always implement depth limiting and query complexity scoring.
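Production servers do this on the parsed AST (e.g. the graphql-depth-limit package), but the guard itself is simple: measure nesting before executing, reject past a threshold. A sketch that approximates depth by tracking brace nesting in the raw query text (the limit value is arbitrary):

```javascript
// Reject queries whose selection sets nest deeper than maxDepth.
// Real implementations walk the parsed AST; counting braces on the
// raw query text is enough to illustrate the guard.
function queryDepth(query) {
  let depth = 0;
  let max = 0;
  for (const ch of query) {
    if (ch === "{") max = Math.max(max, ++depth);
    if (ch === "}") depth--;
  }
  return max;
}

function assertDepthLimit(query, maxDepth) {
  const depth = queryDepth(query);
  if (depth > maxDepth) {
    throw new Error(`query depth ${depth} exceeds limit ${maxDepth}`);
  }
}

console.log(queryDepth("{ users { posts { comments } } }")); // 3
```

Complexity scoring works the same way conceptually: assign each field a cost, sum over the query, reject above a budget.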
gRPC was open-sourced by Google in 2015, with a stable 1.0 release in 2016. It uses Protocol Buffers (protobuf) and runs on HTTP/2. Its strong suit is internal service communication, not external APIs.
Binary format instead of JSON. Define the schema first.
// user.proto
syntax = "proto3";

service UserService {
  rpc GetUser(GetUserRequest) returns (User);
  rpc CreateUser(CreateUserRequest) returns (User);
  rpc StreamUsers(StreamUsersRequest) returns (stream User);
}

message User {
  string id = 1;
  string name = 2;
  string email = 3;
  int64 created_at = 4;
}

message GetUserRequest {
  string id = 1;
}
Generate code from this .proto file:
protoc --go_out=. --go-grpc_out=. user.proto
protoc --ts_out=. --grpc-web_out=. user.proto
func (s *UserServer) GetUser(ctx context.Context, req *pb.GetUserRequest) (*pb.User, error) {
	user, err := s.db.FindUser(req.Id)
	if err != nil {
		return nil, status.Errorf(codes.NotFound, "user not found: %v", err)
	}
	return &pb.User{
		Id:        user.ID,
		Name:      user.Name,
		Email:     user.Email,
		CreatedAt: user.CreatedAt.Unix(),
	}, nil
}
// Server streaming
func (s *UserServer) StreamUsers(req *pb.StreamUsersRequest, stream pb.UserService_StreamUsersServer) error {
	users, err := s.db.FindUsers(int(req.Limit))
	if err != nil {
		return status.Errorf(codes.Internal, "listing users: %v", err)
	}
	for _, user := range users {
		if err := stream.Send(&pb.User{Id: user.ID, Name: user.Name}); err != nil {
			return err
		}
	}
	return nil
}
// Node client (stubs generated from user.proto)
const client = new proto.user.UserService(
  "user-service:50051",
  grpc.credentials.createInsecure()
);

// Unary call
client.GetUser({ id: "123" }, (err: Error, response: any) => {
  console.log(response.name);
});

// Server streaming
const stream = client.StreamUsers({ limit: 100 });
stream.on("data", (user: any) => console.log(user.name));
stream.on("end", () => console.log("done"));
Performance: Protobuf is 3-10x smaller than JSON. HTTP/2 multiplexing improves connection efficiency.
Same User message on the wire:
JSON: ~80 bytes
Protobuf: ~20-30 bytes (roughly 3x smaller for this payload)
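The gap is easy to demonstrate by hand-encoding one field. Protobuf strings use length-delimited wire type 2: a key byte of (field_number << 3) | 2, a length, then the raw bytes, so field names never appear on the wire. A sketch encoding the id and name fields of the User message above (a toy encoder, not the protobufjs API, and valid only while lengths fit in one byte):

```javascript
// Hand-encode length-delimited protobuf fields (wire type 2):
// key byte = (fieldNumber << 3) | 2, then length, then UTF-8 payload.
// (Lengths here fit in one byte, so varint encoding is trivial.)
function encodeStringField(fieldNumber, value) {
  const payload = Buffer.from(value, "utf8");
  return Buffer.concat([
    Buffer.from([(fieldNumber << 3) | 2, payload.length]),
    payload,
  ]);
}

const proto = Buffer.concat([
  encodeStringField(1, "123"),     // string id = 1
  encodeStringField(2, "Dev Kim"), // string name = 2
]);
const json = Buffer.from(JSON.stringify({ id: "123", name: "Dev Kim" }));

console.log(proto.length, json.length); // 14 29
```

Two bytes of overhead per field versus repeating `"name":` in every payload: that difference compounds across millions of internal RPCs.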
Four communication patterns:
Unary: 1 request → 1 response
Server Streaming: 1 request → n responses (stream)
Client Streaming: n requests → 1 response
Bidirectional: n requests ↔ n responses (full duplex)
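All four patterns map directly onto .proto syntax; the only difference is where the stream keyword appears. A sketch (service and message names are hypothetical):

```proto
service ChatService {
  rpc GetProfile(Req) returns (Resp);           // unary
  rpc WatchMessages(Req) returns (stream Resp); // server streaming
  rpc UploadHistory(stream Req) returns (Resp); // client streaming
  rpc Chat(stream Req) returns (stream Resp);   // bidirectional
}
```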
Strong typed contracts: proto file is the server-client contract. Code generation catches type mismatches at compile time.
No native browser support: HTTP/2 trailers aren't exposed to browsers. Needs grpc-web or an Envoy proxy.
Hard to debug: Binary format means no curl. Need grpcurl or Postman gRPC support.
grpcurl -plaintext -d '{"id":"123"}' \
  localhost:50051 user.UserService/GetUser
Schema changes need care: Reusing field numbers corrupts data.
// WRONG — reusing field number 2
message User {
string id = 1;
// name was field 2, deleted
string email = 2; // DO NOT do this
}
// RIGHT — reserve the old field number
message User {
string id = 1;
reserved 2;
string email = 3;
}
| Dimension | REST | GraphQL | gRPC |
|---|---|---|---|
| Wire format | JSON/XML | JSON | Protobuf (binary) |
| Protocol | HTTP/1.1 | HTTP/1.1 | HTTP/2 |
| Browser support | Native | Native | Needs grpc-web |
| Caching | HTTP cache native | Manual | None |
| Type safety | None (OpenAPI helps) | Schema types | Proto types |
| Learning curve | Low | Medium | High |
| Over-fetching | Yes | No | No |
| Streaming | Limited (SSE) | Subscriptions | Native |
| Performance | Medium | Medium | High |
| Ecosystem maturity | Very mature | Mature | Growing |
| Primary use case | Public APIs | Complex data fetch | Internal services |
Stripe, GitHub, Twitter APIs → all REST
Reason: developer friendliness, easy documentation, caching
GitHub GraphQL API, Shopify → GraphQL
Reason: partners have varying data requirements
Google internal services, Netflix internal comms → gRPC
Reason: performance, type safety, multi-language support
In practice, most mature services use a mix.
Architecture:
External clients (mobile/web)
↓ REST or GraphQL
API Gateway / BFF
↓ gRPC
┌──────────────────────────────┐
│ User Service Payment Svc │
│ (gRPC) (gRPC) │
└──────────────────────────────┘
BFF (Backend For Frontend) pattern: REST or GraphQL for clients, gRPC between internal services.
// BFF layer: GraphQL → gRPC translation
const resolvers = {
  Query: {
    user: async (_, { id }) => {
      return await grpcUserClient.getUser({ id });
    },
  },
};
"Which one is best?" is the wrong question. The right question is "what fits this situation?"
Combining all three is completely valid — and it's what most mature engineering orgs actually do.