I'm doing load tests between two services implemented in Node.js, both running on the same machine and connected through localhost.
There are REST and gRPC client & server files. The main goal is to prove that a gRPC call is faster than a plain HTTP call, thanks to HTTP/2 and to Protocol Buffers being more efficient to encode/decode than JSON...
But in my tests (sending an integer array) gRPC is much slower.
The code is very simple for both implementations. I have an auxiliary class that generates objects of the following sizes (in MB): 0.125, 0.25, 0.5, 1, 2, 5 and 20. Both the REST and the gRPC server use this auxiliary class, so the object to send is identical.
The object sent in the payload looks like this:
```js
{
  message: "Hello world",
  array: []
}
```
where the array is filled with numbers until the desired size is reached.
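In case it helps, the generator works roughly like this (a sketch only; the real class may differ, and the batch size is arbitrary):

```js
// Hypothetical sketch of the auxiliary generator: pushes numbers into the
// array until the JSON-serialized payload reaches the target size in MB.
function generateObject(sizeMB) {
  const targetBytes = sizeMB * 1024 * 1024;
  const obj = { message: 'Hello world', array: [] };
  let i = 0;
  while (JSON.stringify(obj).length < targetBytes) {
    // push in batches to avoid re-serializing after every single element
    for (let j = 0; j < 10000; j++) obj.array.push(i++);
  }
  return obj;
}

const objects = {};
for (const size of [0.125, 0.25, 0.5, 1, 2, 5, 20]) {
  objects[size] = generateObject(size);
}
```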
And my .proto file looks like this:
```proto
syntax = "proto3";

service ExampleService {
  rpc GetExample (Size) returns (Example) {}
}

message Size {
  int32 size = 1;
}

message Example {
  string message = 1;
  repeated int32 array = 2;
}
```
Also, each run of the application measures only one call, so I don't have to loop and average inside the process or juggle the timing code with callbacks. Instead I run the application 10 times and calculate the average externally.
REST server:
```js
const express = require('express');
const app = express();
// Return the pre-generated object for the requested size as JSON
app.get('/:size', (req, res) => {
  res.status(200).send(objects[req.params.size]);
});
app.listen(8080);
```
REST client:
```js
const { performance } = require('perf_hooks');
const axios = require('axios');

const start = performance.now();
const response = await axios.get(`http://localhost:8080/${size}`);
const end = performance.now();
```
gRPC server:
```js
// Handler for GetExample: reply immediately with the pre-generated object
getExample: (call, callback) => {
  callback(null, objects.objects[call.request.size]);
},
```
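(For completeness, roughly how that handler might be wired into a server, reusing `proto` from the loader sketch above; the port is arbitrary:)

```js
const server = new grpc.Server();
server.addService(proto.ExampleService.service, {
  getExample: (call, callback) => callback(null, objects.objects[call.request.size]),
});
server.bindAsync('0.0.0.0:50051', grpc.ServerCredentials.createInsecure(), (err) => {
  if (err) throw err;
  server.start();
});
```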
And the gRPC client:
```js
const start = performance.now();
client.getExample({ size: size }, (error, response) => {
  // end must be taken inside the callback, once the response has arrived
  const end = performance.now();
});
```
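(The client stub could be created roughly like this, again reusing `proto` and the same arbitrary port:)

```js
const client = new proto.ExampleService(
  'localhost:50051',
  grpc.credentials.createInsecure()
);
```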
To make it more efficient I have tried:
- Compressing the data like this:
```js
let server = new grpc.Server({
  'grpc.default_compression_level': 3, // (1 = low, 3 = high)
});
```
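From what I can tell, the level alone may not do anything unless a compression algorithm is also selected; a hedged variant:

```js
let server = new grpc.Server({
  // 2 = gzip in grpc-js's CompressionAlgorithms enum (my assumption from the docs)
  'grpc.default_compression_algorithm': 2,
  'grpc.default_compression_level': 3,
});
```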
I know I could use streaming to fetch the data and iterate over the array, but I want to compare the "same call" in both approaches.
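(Just for reference, the streaming variant I'm deliberately not using could look roughly like this; `GetExampleStream` and the chunk size are made-up names/values:)

```js
// In the .proto: rpc GetExampleStream (Size) returns (stream Example) {}
getExampleStream: (call) => {
  const example = objects.objects[call.request.size];
  const chunk = 100000; // arbitrary slice size
  for (let i = 0; i < example.array.length; i += chunk) {
    call.write({ message: example.message, array: example.array.slice(i, i + chunk) });
  }
  call.end();
},
```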
And the difference is still huge.
Another thing I've noticed is that the REST times are more linear (the difference between runs is small), while with gRPC one call sending 2 MB can take 220 ms and the next one 500 ms.
Here is the final comparison; as you can see, the difference is considerable.
Data:
Size (MB) | REST (ms) | gRPC (ms)
---|---|---
0.125 | 37.98976998329162 | 35.5489800453186
0.25 | 40.03781998157501 | 46.077759981155396
0.5 | 51.35283002853394 | 59.37109994888306
1 | 63.4725800037384 | 166.7616500457128
2 | 95.76031665007274 | 394.2442199707031
5 | 261.9365399837494 | 804.1371199131012
20 | 713.1867599964141 | 5492.330539941788
But then I thought: maybe the array field can't be decoded efficiently, or maybe integers are simply not heavy for JSON... I didn't know, so I decided to try sending a string instead, a very large string.
So my proto file now looks like this:
```proto
syntax = "proto3";

service ExampleService {
  rpc GetExample (Size) returns (Example) {}
}

message Size {
  int32 size = 1;
}

message Example {
  string message = 1;
  string array = 2;
}
```
Now the object sent looks like this:
```js
{
  message: "Hello world",
  array: "text to reach the desired MB"
}
```
And the results are completely different: now gRPC is much more efficient.
Data:
Size (MB) | REST (ms) | gRPC (ms)
---|---|---
0.125 | 30.672580003738403 | 25.028959941864013
0.25 | 33.568540048599246 | 25.366739988327026
0.5 | 37.19938006401062 | 27.539460039138795
1 | 46.4020166794459 | 28.798949996630352
2 | 57.50188330809275 | 35.45066670576731
5 | 107.39933327833812 | 48.90079998970032
20 | 313.4138665994008 | 136.4138500293096
And so, the question: why is sending an integer array not as efficient as sending a string? Is it the way protobuf encodes/decodes arrays? Is it inefficient to send repeated values? Is it related to the language (JS)?
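If anyone wants to reproduce the serialization cost in isolation (without the network), something like this should work with protobufjs (a sketch assuming the first `example.proto` above; the array length is arbitrary):

```js
const protobuf = require('protobufjs');
const { performance } = require('perf_hooks');

protobuf.load('example.proto').then((root) => {
  const Example = root.lookupType('Example');
  const payload = { message: 'Hello world', array: Array.from({ length: 500000 }, (_, i) => i) };

  // Time encoding to a protobuf buffer
  let t = performance.now();
  const buffer = Example.encode(Example.create(payload)).finish();
  console.log(`encode: ${(performance.now() - t).toFixed(2)} ms, ${buffer.length} bytes`);

  // Time decoding back into a message
  t = performance.now();
  Example.decode(buffer);
  console.log(`decode: ${(performance.now() - t).toFixed(2)} ms`);
});
```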