Welcome to the next post in the gRPC series. Until now we have covered Protocol Buffers, gRPC services, compilation, writing the Go gRPC server, and testing it using grpcui. If you want to explore these topics, have a look at the earlier posts in the series listed below.
- gRPC vs REST
- Introduction to Protocol Buffers
- Protocol Buffers Compilation & Serialization
- Introduction to gRPC
- gRPC 101: Creating Services
Today, we will complete the remaining RPCs, add in-memory storage, and build a client in a language other than Go.
Completing RPCs
We only implemented static logic for the CreateTodo RPC in the last post. Let's modify the server code and add an in-memory database layer for the service.
Updating Proto
We will also update the Todo.proto file to replace the int64 id with a string uuid.
```proto
message Todo {
  string uuid = 1; // previously: int64 id = 1;
  // ... remaining fields (title, description, priority) unchanged
}
```
Using UUIDs avoids ID collisions across services and makes the service safer to scale horizontally.
Note: Make sure you run the make command to regenerate the Go proto and gRPC definitions.
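Under the hood, a v4 UUID is just 122 random bits plus version and variant markers, which is why independently running services can mint IDs without coordinating. Here is a dependency-free sketch of what google/uuid's generator produces (the real package also handles errors, pooling, and other UUID versions):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// newUUIDv4 sketches what github.com/google/uuid returns from NewString():
// 16 random bytes with the version (4) and variant bits set, formatted 8-4-4-4-12.
func newUUIDv4() string {
	var b [16]byte
	rand.Read(b[:])             // crypto-quality randomness
	b[6] = (b[6] & 0x0f) | 0x40 // set version 4
	b[8] = (b[8] & 0x3f) | 0x80 // set RFC 4122 variant
	return fmt.Sprintf("%x-%x-%x-%x-%x", b[0:4], b[4:6], b[6:8], b[8:10], b[10:16])
}

func main() {
	fmt.Println(newUUIDv4()) // e.g. 9f1c2b5e-... (random on every run)
}
```

In the actual service we simply call the library rather than hand-rolling this.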
Ideal Evolution
In Protobuf, compatibility is based on field numbers and wire types, not field names. If we change int64 id = 1; to string id = 1;, old producers still write a varint while new consumers expect a length-delimited value. The result is corruption: the field is treated as unknown, or data is lost.
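The mismatch is visible in the very first byte of the encoded field (the tag), which packs the field number and the wire type together. A minimal, dependency-free Go sketch of that arithmetic:

```go
package main

import "fmt"

func main() {
	// A Protobuf tag byte packs the field number and wire type:
	// tag = (field_number << 3) | wire_type.
	const fieldNum = 1
	const varintType = 0 // wire type used by int64
	const bytesType = 2  // length-delimited wire type used by string

	oldTag := fieldNum<<3 | varintType
	newTag := fieldNum<<3 | bytesType
	// An old producer still emits tag 0x08, but a new consumer now expects 0x0a,
	// so the payload following the tag is parsed with the wrong rules.
	fmt.Printf("old tag: %#04x, new tag: %#04x\n", oldTag, newTag)
}
```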
The ideal way to evolve a Protobuf message is to add a new field and deprecate the old one.
In production systems that are already running, the ideal approach is to use oneof, which means at most one of these fields may be set at a time. The code looks like:
```proto
oneof todo_id {
  int64 id = 1;    // old field, still readable by existing producers
  string uuid = 6; // new field under a fresh, previously unused number
}
```
Advantages of using oneof:
- Only one of the fields is actually sent in the message (survives on the wire).
- If both are written, last one wins.
- Setting one automatically clears the other.
Hence our message becomes:
```proto
message Todo {
  oneof todo_id {
    int64 id = 1;
    string uuid = 6;
  }
  // ... remaining fields unchanged
}
```
Also, once we're fully sure the old field is gone, we reserve the field number and the field name like:
```proto
message Todo {
  reserved 1;    // the old id field number can never be reused
  reserved "id"; // nor can the field name
  string uuid = 6;
  // ... remaining fields unchanged
}
```
Since we are just developing, we will use uuid and reserve id, because we know that our clients will be using uuid only. But in real systems, please use oneof.
Final Todo.proto will look like:
```proto
message Todo {
  reserved 1;
  reserved "id";
  string uuid = 6;
  // ... title, description, priority fields unchanged
}
```
Restructure Codebase
We will make a few changes to the codebase by adding new folders and moving some files to new locations.
- Create the new folders: `mkdir -p cmd/{internal/db,server}` (the brace expansion works only with zsh or bash).
- Move `server.go` inside the `cmd/server` folder.
- Create `db.go` inside `cmd/internal/db/`.
Database Layer
We will add a simple database layer to hold the Todos in-memory. This will help us mimic the database calls.
```go
package db

// ... (full listing: a map of todos keyed by UUID, plus CRUD methods guarded by a sync.RWMutex)
```
We use a map to store each Todo against its string UUID. Instead of randomly generating an integer, we now use the uuid package by Google. To tackle race conditions during load testing, we use a sync.RWMutex: a write lock while creating, updating, and deleting, and only a read lock for get and list.
Note: This in-memory database is only suitable for demos and testing. Data will be lost when the server restarts, and it does not support persistence.
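To make the locking discipline concrete, here is a compact, self-contained sketch of the pattern the db layer uses (type and method names here are illustrative, not the post's exact code): a map keyed by UUID behind a sync.RWMutex, with a write lock for mutations and a read lock for reads.

```go
package main

import (
	"fmt"
	"sync"
)

// Todo carries a subset of the proto message's fields for this sketch.
type Todo struct {
	UUID, Title string
}

// Store is a minimal in-memory layer: a map keyed by UUID guarded by an RWMutex.
type Store struct {
	mu    sync.RWMutex
	todos map[string]*Todo
}

func NewStore() *Store { return &Store{todos: make(map[string]*Todo)} }

func (s *Store) Create(t *Todo) {
	s.mu.Lock() // full write lock: we mutate the map
	defer s.mu.Unlock()
	s.todos[t.UUID] = t
}

func (s *Store) Get(uuid string) (*Todo, bool) {
	s.mu.RLock() // read lock: concurrent readers proceed in parallel
	defer s.mu.RUnlock()
	t, ok := s.todos[uuid]
	return t, ok
}

func main() {
	s := NewStore()
	// The post generates IDs with github.com/google/uuid; a fixed string keeps this sketch dependency-free.
	s.Create(&Todo{UUID: "todo-1", Title: "write the client"})
	if t, ok := s.Get("todo-1"); ok {
		fmt.Println(t.Title) // prints: write the client
	}
}
```

Update and Delete follow the same shape as Create (write lock), while List follows Get (read lock).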
Updating Server
We now need to use the DB as a part of the server so that we can access the database methods for CRUD operations.
```go
package server

// ... (the Server struct now embeds the generated gRPC server and holds a reference to the db layer)
```
CreateTodo RPC
We will modify the CreateTodo RPC to look like this:
```go
func (s *Server) CreateTodo(_ context.Context, req *pb.CreateTodoRequest) (*pb.CreateTodoResponse, error) {
	// ... validate the incoming request, assign a freshly generated UUID,
	// and persist the todo through the db layer
}
```
We use db.Create() to save a todo to the database. We take the title, description, and priority received from the client (request), and assign a newly generated UUID as the ID. Also note the validations.
GetTodo RPC
```go
func (s *Server) GetTodo(_ context.Context, req *pb.GetTodoRequest) (*pb.GetTodoResponse, error) {
	// ... look up the todo by UUID; return a NotFound status error if it is absent
}
```
ListTodos
```go
func (s *Server) ListTodos(_ context.Context, req *pb.ListTodosRequest) (*pb.ListTodosResponse, error) {
	// ... collect all todos from the in-memory map under a read lock
}
```
UpdateTodo
```go
func (s *Server) UpdateTodo(_ context.Context, req *pb.UpdateTodoRequest) (*pb.UpdateTodoResponse, error) {
	// ... validate, then replace the stored todo under a write lock
}
```
DeleteTodo
```go
func (s *Server) DeleteTodo(_ context.Context, req *pb.DeleteTodoRequest) (*emptypb.Empty, error) {
	// ... remove the todo from the map; return a NotFound status error if the UUID does not exist
}
```
At this stage, our Todo gRPC server supports the following RPCs:
- CreateTodo: Create a new todo
- GetTodo: Fetch a todo by ID
- ListTodos: List all todos
- UpdateTodo: Modify an existing todo
- DeleteTodo: Remove a todo
status.Error
You might have noticed this already, but if you are not familiar with it, let me tell you about the gRPC status.Error function.
In gRPC, errors are communicated using structured status codes rather than arbitrary error messages. The status.Error function allows a server to return an error that includes both a gRPC status code and a human readable message. This enables clients to programmatically distinguish between different failure scenarios.
So, what's next? In the last post we tested the RPCs using a GUI tool, but in this post we will see how to write a client and call the RPCs (more like how we do it in real production systems). Let's proceed.
gRPC Client
As we saw earlier, the client is responsible for establishing connections to a server and sending RPC requests to it. The client can be written in many programming languages, and gRPC supports languages like Go, Python, Java, C#, Ruby, and more…
A gRPC client is code that does the following:
- Uses the same .proto file as the server
- Knows the service methods and message types
- Connects to the gRPC server
- Calls RPC methods programmatically
In production, clients are mandatory; tools like Postman and grpcui are only for testing and debugging. Unlike REST clients, gRPC clients rely on generated service definitions, which provide strong typing and compile-time safety.
Let’s build a Node.js client for our Go gRPC server.
Initialize npm
We will do the following:
- Create a new folder at the root called `client` using the command `mkdir client`.
- Initialize the `npm` project for Node.js using `npm init -y`.
- Install the required packages for Node.js: `npm install @grpc/grpc-js @grpc/proto-loader`.
Client Code
Now we will create the JavaScript file client.js and add the following code:
```javascript
const grpc = require("@grpc/grpc-js");
const protoLoader = require("@grpc/proto-loader");

// Load the same Todo.proto the server uses (adjust the path to your repo layout).
const packageDefinition = protoLoader.loadSync("Todo.proto", {
  keepCase: true,
  longs: String,
  enums: String,
  defaults: true,
  oneofs: true,
});
// ... build the client stub with grpc.loadPackageDefinition, connect to the
// server with grpc.credentials.createInsecure(), and call CreateTodo,
// ListTodos, and the other RPCs as in the official gRPC Node.js example.
```
In the above code we follow the example from the official gRPC website for Node.js.
Executing Client
To execute the client code, we run it via Node.js: `node client/client.js`.
As soon as the client starts executing, it will begin creating new todos and will return the listings as well.
Conclusion
Finally, we now have a fully working gRPC server and client. Congratulations on reaching this milestone! I hope you enjoyed reading the post as much as I enjoyed writing it. I'd love to hear your feedback or suggestions so I can continue improving future posts.
I’d also like to note that we intentionally skipped the proper approach to implementing a listing RPC. Since we’re using a hash table as an in-memory database, I chose to keep things simple for now. I’ll hopefully cover this topic in a future post, so stay tuned.
Stay healthy and stay blessed!