This is a complex problem, and I'm not sure there is a single best answer.
The problem you're facing is one of coupling - you have an artefact in your solution that is a dependency for several other artefacts. One of the oldest architectural principles in software is to reduce coupling, because tight coupling leads to bugs, slower development, and code that's harder to modify.
There are a number of ways you can reduce coupling in software design. The classical answer is to introduce interfaces and dependency injection; that's obviously not practical with a database.
The "cleanest" way is for your database to be accessible via an API. You can use versioning to allow multiple applications to manage that dependency - Application A was coded against version 1.0 of the API, and you promise not to change that version (or at least not without deprecation notices). This does introduce a significant amount of additional work, as well as a potential performance challenge.
A more pragmatic solution is to insist on an automated deployment mechanism for all your environments, have every application retrieve the database schema from a central location, and require each application to ship a set of integration tests that exercise the database functionality it depends on. You might distribute your dev and test databases as VM images, for instance, with "read-only" status for your schema.
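The integration tests are what catch a breaking schema change at build time rather than in production. A hedged sketch of what one might look like, using pytest with sqlite3 as a stand-in for the real database; the table and column names are made up for illustration:

```python
# Sketch of an integration test an application would ship against the shared schema.
import sqlite3
import pytest

SCHEMA = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT NOT NULL);"

@pytest.fixture
def db():
    # In practice this would connect to the centrally distributed dev/test database.
    conn = sqlite3.connect(":memory:")
    conn.executescript(SCHEMA)
    yield conn
    conn.close()

def test_customer_roundtrip(db):
    # Exercises exactly the functionality this application relies on; if the
    # central schema changes incompatibly, this fails in CI, not in production.
    db.execute("INSERT INTO customers (id, name) VALUES (?, ?)", (1, "Acme"))
    name = db.execute("SELECT name FROM customers WHERE id = ?", (1,)).fetchone()[0]
    assert name == "Acme"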
I have done this using a variety of techniques. The first step is to create a process for converting your database into text files that can be managed in a source code repository, and played back to create a working database. Here's an SO question with lots of info on how to do this.
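The "play back" step can be as simple as a script that applies every schema file in order to a fresh database. A minimal sketch, assuming per-object `.sql` files checked into a `schema/` directory and using sqlite3 purely for illustration:

```python
# Sketch: rebuild a working database from schema files held in source control.
import sqlite3
from pathlib import Path

def build_database(schema_dir: str, db_path: str) -> None:
    """Apply every .sql file under schema_dir, in name order, to a fresh database."""
    conn = sqlite3.connect(db_path)
    try:
        for script in sorted(Path(schema_dir).glob("*.sql")):
            conn.executescript(script.read_text())
        conn.commit()
    finally:
        conn.close()

if __name__ == "__main__":
    # e.g. schema/001_tables.sql, schema/002_views.sql, schema/003_reference_data.sql
    build_database("schema", "dev.db")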
Once you have your database under version control, you can decide how to distribute it - either by asking projects to check out the schema and build the database themselves, or by shipping it as Docker images, VMs, or whatever suits your environments.
This approach depends on your applications doing the right thing - nothing enforces it technically - but you don't have to introduce significant new layers into your solution.