
I'm creating a system using a microservices architecture. There are two microservices, A and B, each living in its own repository.

There is a user.proto file containing the protobuf message definitions and the gRPC method signatures. A uses the generated user.pb.go as a server; B uses user.pb.go as a client (of A).

One way to structure this is to keep the proto definition in A, with B taking a code dependency on A:

 A
 ├── pb
 │   ├── user.proto
 │   └── user.pb.go
 └── service.go
 B
 └── service.go

B-->A
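
For concreteness (module paths and the service name below are made up for illustration), B would import A's generated package directly:

    // b/service.go (sketch of option 1: B depends on A's repository)
    package main

    import (
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"

        pb "github.com/example/a/pb" // generated code lives in A's repository
    )

    func main() {
        conn, err := grpc.Dial("a:50051", grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatalf("dial: %v", err)
        }
        defer conn.Close()

        _ = pb.NewUserServiceClient(conn) // assumes user.proto defines a UserService
    }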

Another way is to have another repo P containing the proto definitions, with A and B depending on the new repo:

 A
 └── service.go
 B
 └── service.go
 P
 ├── user.proto
 └── user.pb.go

A-->P
B-->P
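
With this layout both services import the same generated package from P. A rough sketch of A's server side (module path and service name are hypothetical; B's client side imports the identical package):

    // a/service.go (sketch of option 2: A and B both depend on repo P)
    package main

    import (
        "log"
        "net"

        "google.golang.org/grpc"

        pb "github.com/example/userpb" // the same import appears in B's client code
    )

    type userServer struct {
        pb.UnimplementedUserServiceServer // assumes user.proto defines a UserService
    }

    func main() {
        lis, err := net.Listen("tcp", ":50051")
        if err != nil {
            log.Fatalf("listen: %v", err)
        }
        s := grpc.NewServer()
        pb.RegisterUserServiceServer(s, &userServer{})
        log.Fatal(s.Serve(lis))
    }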

Or the new repo could contain the proto file only, with generated code in both A & B:

 A
 ├── service.go
 └── pb
     └── user.pb.go
 B
 ├── service.go
 └── pb
     └── user.pb.go
 P
 └── user.proto

What's the better approach here?

Vitaly Isaev
kkyr

3 Answers


If your team does not prefer a monorepo, I think the third option is the most suitable: one repo for the proto files. It can then be included as a git submodule in A and B (if you are using git). A and B can each have their own protoc script, and the generated protobuf files depend on the programming languages they use.
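
A minimal sketch of that setup, assuming the proto repo is checked out as a submodule at proto/ inside each service repo (paths and plugin flags are illustrative):

    // pb/gen.go, committed in both A and B
    // The shared proto repo is vendored via: git submodule add <proto-repo-url> proto
    //go:generate protoc -I ../proto --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative user.proto
    package pb

Running go generate ./... in each repository then regenerates user.pb.go against whatever commit of the submodule is currently checked out.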

hutabalian
  • I agree, the generated code will get out of sync if it is stored in a repo, plus it needs to be generated based on the language, so generating ahead of time already presumes one language, unless you generate for all languages needed up front. – sharkySharks Jan 31 '20 at 14:43

The second way is better.

Why? Because when you add a service C, it drops naturally in line with A & B, with no import-path changes needed in any of the existing components.

Yes, the third way can claim this too, but it duplicates the generated user.pb.go across each microservice. It is better to consolidate and avoid the duplication.

You can also use Go's internal packages feature at the top level of your repo (i.e. at the A & B level). Renaming P to internal ensures this subpackage can be used by A, B (or C, etc.) but is not an invitation for third-party packages to reference this "internal" package.
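
A rough sketch of what that gives you, assuming A and B share one module root as described (all paths and names are illustrative):

    // Layout (single module github.com/example/services, hypothetical path):
    //   go.mod
    //   internal/pb/user.pb.go   (the former P)
    //   a/service.go
    //   b/service.go

    // a/service.go
    package main

    import (
        "fmt"

        // compiles here, but the Go toolchain rejects this import from any
        // package outside the github.com/example/services subtree
        pb "github.com/example/services/internal/pb"
    )

    func main() {
        fmt.Printf("%T\n", &pb.User{}) // assumes user.proto defines a User message
    }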

colm.anseo

I think that the correct answer depends on the scale of your project.

For small projects, or projects that may be used outside of your team (e.g. open-source, or even proprietary products shipped with an SDK to other teams), I would prefer the first approach. If you don't provide the generated code (pb.go files) within the main repository, your users won't be happy, because protoc is far from being a user-friendly tool.

As a variation of the first option, I recall projects with a single repository for all of the microservices (a so-called "monorepo"). They put the generated pb.go files directly into the vendor directory.

For large web services with a rich microservice backend, the second option would be better. You get a single repository with the full definition of your API. This improves the consistency of your API and helps you set up a release cycle. You may or may not put the pb.go files alongside the proto files, depending on your needs, but we usually don't.
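
If you do decide to commit the pb.go files to that API repository and publish it as a Go module (module path and version numbers below are hypothetical), the release cycle shows up as ordinary version pins in each service:

    // go.mod of service A (B would look the same); github.com/example/userapi stands in for P
    module github.com/example/a

    go 1.21

    require (
        github.com/example/userapi v1.4.0 // bumped deliberately whenever the API repo tags a release
        google.golang.org/grpc v1.60.0
        google.golang.org/protobuf v1.32.0
    )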

Vitaly Isaev