gRPC 102: Creating Clients

Welcome to the next post in the gRPC series. So far we have covered Protocol Buffers, gRPC services, compilation, writing a Go gRPC server, and testing it using grpcui. If you want to explore these topics, have a look at the earlier posts in the series listed below.

  1. gRPC vs REST
  2. Introduction to Protocol Buffers
  3. Protocol Buffers Compilation & Serialization
  4. Introduction to gRPC
  5. gRPC 101: Creating Services
gRPC with its mascot

Today, we will complete the remaining RPCs, add in-memory storage, and build a client in a language other than Go.

Completing RPCs

In the last post we only implemented static logic for the CreateTodo RPC. Let's modify the server code and add an in-memory database layer for the service.

Updating Proto

We will also update the Todo.proto file to accommodate a uuid field, which is a string instead of an int64.

message Todo {
  int64 id = 1 [deprecated = true];

  string uuid = 7; // <-- new field with the updated (string) type

  string title = 2;
  optional string description = 3;
  bool done = 4;

  enum Priority {
    PRIORITY_UNSPECIFIED = 0;
    PRIORITY_LOW = 1;
    PRIORITY_MEDIUM = 2;
    PRIORITY_HIGH = 3;
  }
  Priority priority = 5; // <-- using the enum as a property

  google.protobuf.Timestamp created_at = 6;
}

Using UUIDs avoids ID collisions across services and makes the service safer to scale horizontally.

Note: Make sure you run the make command to regenerate the Go proto and gRPC definitions.

Ideal Evolution

In Protobuf, compatibility is based on field numbers and wire types, not field names. If we change int64 id = 1; to string id = 1;, old producers write a varint while new consumers expect a length-delimited value. The result is corruption: the field is either treated as unknown or causes data loss.

The ideal way to evolve a Protobuf message is to add a new field and deprecate the old one.

For systems that are already running in production, the ideal approach is to use oneof, which means at most one of the fields may be set at a time. The code looks like this:

oneof todo_id {
  int64 id = 1;
  string uuid = 7;
}

Advantages of using oneof (illustrated in the Go sketch after this list):

  • Only one of the fields is actually sent in the message (that is, survives on the wire).
  • If both are written, the last one wins.
  • Setting one field automatically clears the other.

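To make these semantics concrete, here is a minimal sketch of how a Go consumer could read whichever identifier was set, if we adopted the oneof version of the message. It assumes the classic protoc-gen-go output, where the todo_id oneof generates a GetTodoId() accessor together with Todo_Id and Todo_Uuid wrapper types (the exact names depend on your generated package):

package example

import (
	"fmt"

	pb "ashokdey.com/grpc-example/generated"
)

// todoIdentifier returns whichever identifier the producer set on the oneof.
// Assumes protoc-gen-go's classic (open struct) API for the todo_id oneof.
func todoIdentifier(todo *pb.Todo) string {
	switch v := todo.GetTodoId().(type) {
	case *pb.Todo_Uuid:
		return v.Uuid // new string identifier
	case *pb.Todo_Id:
		return fmt.Sprintf("%d", v.Id) // legacy numeric identifier
	default:
		return "" // neither field was set
	}
}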
Hence our message becomes:

message Todo {
  oneof todo_id {
    int64 id = 1;
    string uuid = 7;
  }

  // on complete eradication of the
  // `id` field from the Todo message,
  // reserve field number 1 and field name `id`:
  // reserved 1;
  // reserved "id";

  string title = 2;
  optional string description = 3;
  bool done = 4;

  enum Priority {
    PRIORITY_UNSPECIFIED = 0;
    PRIORITY_LOW = 1;
    PRIORITY_MEDIUM = 2;
    PRIORITY_HIGH = 3;
  }
  Priority priority = 5; // <-- using the enum as a property

  google.protobuf.Timestamp created_at = 6;
}

Once we're fully sure the old field is gone, we reserve the field number and the field name:

message Todo {
  reserved 1;
  reserved "id";

  string uuid = 7;

  // rest of the fields remain as they were
}

Since we are just developing, we will use uuid and reserve "id", because we know our clients will be using uuid only. In real systems, please use oneof.

Final Todo.proto will look like:

message Todo {
  reserved 1;
  reserved "id";

  string uuid = 7;

  string title = 2;
  optional string description = 3;
  bool done = 4;

  enum Priority {
    PRIORITY_UNSPECIFIED = 0;
    PRIORITY_LOW = 1;
    PRIORITY_MEDIUM = 2;
    PRIORITY_HIGH = 3;
  }
  Priority priority = 5; // <-- using the enum as a property

  google.protobuf.Timestamp created_at = 6;
}

Restructure Codebase

We will make a few changes to the codebase by adding new folders and moving some files to new locations.

  • Create the new folders with mkdir -p cmd/{internal/db,server} (brace expansion works in bash and zsh).
  • Move server.go into the cmd/server folder.
  • Create db.go inside cmd/internal/db/ (the resulting layout is sketched below).

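For reference, the repository layout after these moves looks roughly like this (assuming the repository root matches the ashokdey.com/grpc-example module, with the proto in protos/ and the generated Go code in generated/, as used in the imports later on):

grpc-example/
  protos/
    todo.proto
  generated/          (Go code generated from todo.proto)
  cmd/
    internal/
      db/
        db.go         (in-memory database layer)
    server/
      server.go       (gRPC server implementation)
  client/
    client.js         (Node.js client, added later in this post)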
Database Layer

We will add a simple database layer to hold the Todos in-memory. This will help us mimic the database calls.

package db

import (
	"errors"
	"sync"

	pb "ashokdey.com/grpc-example/generated"
	"github.com/google/uuid"
)

type DB struct {
	storage map[string]*pb.Todo
	lock    sync.RWMutex
}

func NewDB() *DB {
	return &DB{
		storage: make(map[string]*pb.Todo, 10),
	}
}

func (db *DB) Create(todo *pb.Todo) {
	db.lock.Lock()
	defer db.lock.Unlock()

	// assign a new id
	todo.Uuid = uuid.NewString()

	db.storage[todo.Uuid] = todo
}

func (db *DB) GetByID(id string) (*pb.Todo, error) {
	db.lock.RLock()
	defer db.lock.RUnlock()

	if id == "" {
		return nil, errors.New("invalid id")
	}
	val, got := db.storage[id]
	if !got {
		return nil, errors.New("data not found for id")
	}
	return val, nil
}

func (db *DB) GetAll() []*pb.Todo {
	list := []*pb.Todo{}

	db.lock.RLock()
	defer db.lock.RUnlock()

	for _, val := range db.storage {
		list = append(list, val)
	}

	return list
}

func (db *DB) UpdateByID(id string, todo *pb.Todo) error {
	if id == "" {
		return errors.New("invalid id")
	}

	db.lock.Lock()
	defer db.lock.Unlock()

	_, got := db.storage[id]
	if !got {
		return errors.New("data not found for id")
	}

	// update
	db.storage[id] = todo
	return nil
}

func (db *DB) DeleteByID(id string) error {
	if id == "" {
		return errors.New("invalid id")
	}

	db.lock.Lock()
	defer db.lock.Unlock()

	_, got := db.storage[id]
	if !got {
		return errors.New("data not found for id")
	}
	// delete
	delete(db.storage, id)

	return nil
}

We use a map to store each Todo against its string UUID. Instead of randomly generating an integer, we now use Google's uuid package. To tackle race conditions during load testing, we take a write lock (Mutex) while creating, updating, and deleting, and only read locks for get and list.

Note: This in-memory database is only suitable for demos and testing. Data will be lost when the server restarts, and it does not support persistence.
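Before wiring the layer into the server, here is a minimal, hypothetical sketch of how it can be exercised on its own; the field names follow the generated pb.Todo shown earlier:

package main

import (
	"fmt"

	db "ashokdey.com/grpc-example/cmd/internal/db"
	pb "ashokdey.com/grpc-example/generated"
)

func main() {
	store := db.NewDB()

	// Create assigns a fresh UUID before storing the todo
	todo := &pb.Todo{Title: "Learn gRPC", Priority: pb.Todo_PRIORITY_HIGH}
	store.Create(todo)

	// fetch it back using the UUID that Create assigned
	saved, err := store.GetByID(todo.GetUuid())
	if err != nil {
		panic(err)
	}
	fmt.Println(saved.GetTitle(), saved.GetUuid())
}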

Updating Server

We now need to use the DB as a part of the server so that we can access the database methods for CRUD operations.

package server

import (
	"context"
	"fmt"

	db "ashokdey.com/grpc-example/cmd/internal/db"
	pb "ashokdey.com/grpc-example/generated"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	"google.golang.org/protobuf/types/known/emptypb"
	"google.golang.org/protobuf/types/known/timestamppb"
)

// the interface of TodoService embedded in our server
type Server struct {
	pb.UnimplementedTodoServiceServer
	db *db.DB
}

func NewServer() *Server {
	return &Server{
		db: db.NewDB(),
	}
}

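For completeness, the server still has to be registered and served from an entry point. Here is a minimal sketch, assuming a main.go that listens on port 9001 (the port the Node.js client dials later) and uses the generated RegisterTodoServiceServer helper; your actual entry point from the previous post may differ:

package main

import (
	"log"
	"net"

	server "ashokdey.com/grpc-example/cmd/server"
	pb "ashokdey.com/grpc-example/generated"
	"google.golang.org/grpc"
)

func main() {
	// listen on the port the client will dial
	lis, err := net.Listen("tcp", ":9001")
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	grpcServer := grpc.NewServer()
	pb.RegisterTodoServiceServer(grpcServer, server.NewServer())

	log.Println("gRPC server listening on :9001")
	if err := grpcServer.Serve(lis); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}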
CreateTodo RPC

We will modify the CreateTodo RPC to look like this:

func (s *Server) CreateTodo(_ context.Context, req *pb.CreateTodoRequest) (*pb.CreateTodoResponse, error) {
	// create the todo
	todo := &pb.Todo{
		Title:     req.GetTitle(),
		Priority:  req.GetPriority(),
		Done:      false,
		CreatedAt: timestamppb.Now(),
	}

	if req.GetDescription() != "" {
		desc := req.GetDescription()
		todo.Description = &desc
	}

	// save in DB
	s.db.Create(todo)
	fmt.Println("created a todo")

	// return the response
	return &pb.CreateTodoResponse{
		Todo: todo,
	}, nil
}

We use db.Create() to save the todo to the database, taking the title, description, and priority received from the client request; the unique UUID is assigned inside db.Create(). Also note that the optional description is only set when the client actually provides one.

GetTodo RPC

func (s *Server) GetTodo(_ context.Context, req *pb.GetTodoRequest) (*pb.GetTodoResponse, error) {
	if req.GetUuid() == "" {
		return nil, status.Error(codes.InvalidArgument, "uuid should be provided")
	}

	// get from DB
	val, err := s.db.GetByID(req.GetUuid())
	if err != nil {
		return nil, status.Error(codes.NotFound, "todo not found")
	}

	// return response
	return &pb.GetTodoResponse{
		Todo: val,
	}, nil
}

ListTodos

func (s *Server) ListTodos(_ context.Context, req *pb.ListTodosRequest) (*pb.ListTodosResponse, error) {
	// get all from db
	todos := s.db.GetAll()

	// return response
	return &pb.ListTodosResponse{
		Todos: todos,
	}, nil
}

UpdateTodo

func (s *Server) UpdateTodo(_ context.Context, req *pb.UpdateTodoRequest) (*pb.UpdateTodoResponse, error) {
	if req.GetUuid() == "" {
		return nil, status.Error(codes.InvalidArgument, "uuid should be provided")
	}

	// get todo by id
	val, err := s.db.GetByID(req.GetUuid())
	if err != nil {
		return nil, status.Error(codes.Internal, fmt.Sprintf("failed to get Todo: %v", err))
	}

	// update the values from req
	val.Title = req.GetTitle()
	val.Done = req.GetDone()

	if req.GetDescription() != "" {
		desc := req.GetDescription()
		val.Description = &desc
	}

	// update the record
	err = s.db.UpdateByID(req.GetUuid(), val)
	if err != nil {
		return nil, status.Error(codes.Internal, fmt.Sprintf("failed to update Todo: %v", err))
	}

	// return response
	return &pb.UpdateTodoResponse{
		Todo: val,
	}, nil
}

DeleteTodo

func (s *Server) DeleteTodo(_ context.Context, req *pb.DeleteTodoRequest) (*emptypb.Empty, error) {
	if req.GetUuid() == "" {
		return nil, status.Error(codes.InvalidArgument, "uuid should be provided")
	}

	err := s.db.DeleteByID(req.GetUuid())
	if err != nil {
		return nil, status.Error(codes.Internal, fmt.Sprintf("failed to delete Todo: %v", err))
	}

	// return response
	return &emptypb.Empty{}, nil
}

At this stage, our Todo gRPC server supports the following RPCs:

  • CreateTodo: Create a new todo
  • GetTodo: Fetch a todo by UUID
  • ListTodos: List all todos
  • UpdateTodo: Modify an existing todo
  • DeleteTodo: Remove a todo

status.Error

You might have noticed this already but if you are not familiar with it, let me tell you about the gRPC status.Error function.

In gRPC, errors are communicated using structured status codes rather than arbitrary error messages. The status.Error function allows a server to return an error that includes both a gRPC status code and a human readable message. This enables clients to programmatically distinguish between different failure scenarios.
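On the other side of the wire, a Go caller can recover the structured code with status.FromError. Here is a minimal sketch, assuming the generated TodoServiceClient from our proto:

package example

import (
	"context"
	"log"

	pb "ashokdey.com/grpc-example/generated"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func getTodo(ctx context.Context, client pb.TodoServiceClient, uuid string) (*pb.Todo, error) {
	resp, err := client.GetTodo(ctx, &pb.GetTodoRequest{Uuid: uuid})
	if err != nil {
		// status.FromError recovers the structured gRPC status from the error
		if st, ok := status.FromError(err); ok {
			switch st.Code() {
			case codes.NotFound:
				log.Printf("todo %s does not exist: %s", uuid, st.Message())
			case codes.InvalidArgument:
				log.Printf("bad request: %s", st.Message())
			}
		}
		return nil, err
	}
	return resp.GetTodo(), nil
}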

So, what's next? In the last post we tested the RPCs using a GUI tool; in this post we will write a client and call the RPCs, much like we would in a real production system. Let's proceed.

gRPC Client

As we saw earlier, the client is responsible for establishing connections to a server and sending RPC requests to it. The client can be written in many programming languages, and gRPC supports languages like Go, Python, Java, C#, Ruby, and more…

A gRPC client is code that does the following:

  • Uses the same .proto file as the server
  • Knows the service methods and message types
  • Connects to the gRPC server
  • Calls RPC methods programmatically

In production, clients are mandatory; tools like Postman and grpcui are only for testing and debugging. Unlike REST clients, gRPC clients rely on generated service definitions, which provide strong typing and compile-time safety.

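To see what that strong typing looks like in practice, here is a minimal Go client sketch against the same generated package the server uses (newer grpc-go versions expose grpc.NewClient; on older versions grpc.Dial takes the same options):

package main

import (
	"context"
	"log"

	pb "ashokdey.com/grpc-example/generated"
	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
)

func main() {
	// connect to the local server without TLS (fine for local development only)
	conn, err := grpc.NewClient("localhost:9001",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("failed to connect: %v", err)
	}
	defer conn.Close()

	client := pb.NewTodoServiceClient(conn)

	// request and response types are generated from todo.proto,
	// so a typo in a field name fails at compile time, not at runtime
	resp, err := client.CreateTodo(context.Background(), &pb.CreateTodoRequest{
		Title:    "Learn gRPC",
		Priority: pb.Todo_PRIORITY_HIGH,
	})
	if err != nil {
		log.Fatalf("CreateTodo failed: %v", err)
	}
	log.Printf("created todo with uuid %s", resp.GetTodo().GetUuid())
}

The Node.js client below takes the opposite, dynamic route: it loads todo.proto at runtime instead of relying on generated code.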
Let’s build a Node.js client for our Go gRPC server.

Initiate NPM

We will do the following:

  • Create a new folder at the root called client using the command mkdir client.
  • Initiate the npm project for Node.js using npm init -y.
  • Install the required packages with npm install @grpc/grpc-js @grpc/proto-loader.

Client Code

Now we will create a JavaScript file, client.js, and add the following code:

const grpc = require("@grpc/grpc-js");
const protoLoader = require("@grpc/proto-loader");
const path = require("path");

// path to the proto file (note: we are not compiling to js files)
const PROTO_PATH = path.join(__dirname, "..", "protos", "todo.proto");

const packageDefinition = protoLoader.loadSync(PROTO_PATH, {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});

// define the todos and priorities
const priorities = ["PRIORITY_HIGH", "PRIORITY_LOW", "PRIORITY_MEDIUM"];
const todos = ["Learn gRPC", "Create gRPC Client", "Deploy gRPC"];

const proto = grpc.loadPackageDefinition(packageDefinition);
const TodoService = proto.grpc.todos.TodoService;

// create the client
const client = new TodoService(
  "localhost:9001",
  grpc.credentials.createInsecure()
);

// call the CreateTodo RPC
function CreateTodo(counter) {
  return new Promise((resolve, reject) => {
    console.log("CLIENT: creating a todo");
    client.CreateTodo({
      title: todos[counter],
      description: "Dummy Description",
      priority: priorities[counter],
    }, (err, response) => {
      if (err) {
        console.error(err);
        return reject(err);
      }
      console.log(response);
      resolve(response);
    });
  });
}

// call the ListTodos RPC
function ListTodos() {
  return new Promise((resolve, reject) => {
    console.log("CLIENT: listing todos");
    client.ListTodos({}, (err, response) => {
      if (err) {
        console.error(err);
        return reject(err);
      }
      console.log(response.todos);
      resolve(response);
    });
  });
}

// custom delay function
function delay(millis) {
  console.log(`Waiting for ${millis / 1000} seconds`);
  return new Promise((resolve) => {
    setTimeout(resolve, millis);
  });
}

// execute everything
(async () => {
  for (let i = 0; i < 3; i++) {
    await ListTodos();
    await delay(2000);
    await CreateTodo(i);
    await delay(2000);
    await ListTodos();
  }
})();

The above code follows the Node.js example on the official gRPC website.

Executing Client

To execute the client code, run it with Node.js: node client/client.js.

As soon as the client starts executing, we will see it creating new todos and returning the listings as well.

Conclusion

Finally, we now have a fully working gRPC server and client. Congratulations on reaching this milestone. I hope you enjoyed reading the post as much as I did while writing it, and I'd love to hear your feedback or suggestions so I can continue improving future posts.

I’d also like to note that we intentionally skipped the proper approach to implementing a listing RPC. Since we’re using a hash table as an in-memory database, I chose to keep things simple for now. I’ll hopefully cover this topic in a future post, so stay tuned.

Stay healthy and stay blessed!