222

After this comment to one of my questions, I'm wondering whether it is better to use one database with X schemas, or X databases with one schema each.

I'm developing a web application where, when people register, I currently create a database for each of them (no, it's not a social network: everyone must have access to their own data and never see any other user's data). That's how the previous version of my application (which is still running on MySQL) works: through the Plesk API, for every registration, I:

  1. Create a database user with limited privileges;
  2. Create a database that can be accessed just by the previously created user and the superuser (for maintenance);
  3. Populate the database.

Now I'll need to do the same with PostgreSQL (the project is getting mature and MySQL doesn't fulfil all the needs). I need all the database/schema backups to be independent: pg_dump works perfectly either way, and the same goes for the users, which can be configured to access just one schema or one database.
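Something like this minimal sketch is what I have in mind for the schema variant, using the PyGreSQL pg module (the superuser credentials and all names here are placeholders, not my real setup; note that identifiers cannot be passed as query parameters, so a customer-derived name must be validated strictly before being interpolated):

import pg

# Placeholder values; adjust to your own cluster and naming scheme
db_name = 'myapp'
customer = 'customer_foo'          # derived from the registration data
customer_pass = 'generated_pwd'    # random password generated per customer

admin = pg.connect(dbname=db_name, host='localhost', user='postgres', passwd='admin_pass')

# 1. Create a login role with limited privileges
admin.query("CREATE ROLE %s LOGIN PASSWORD '%s'" % (customer, customer_pass))

# 2. Create a schema owned by that role; other customers' schemas stay
#    inaccessible to it unless explicitly granted
admin.query("CREATE SCHEMA %s AUTHORIZATION %s" % (customer, customer))

# 3. Populating the schema (CREATE TABLE statements etc.) would go here

admin.close()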

So, assuming you are more experienced PostgreSQL users than I am, what do you think is the best solution for my situation, and why? Will there be performance differences using $x databases instead of $x schemas? And which solution will be easier to maintain in the future (reliability)? All of my databases/schemas will always have the same structure!

As for the backup issue (using pg_dump), it may be better to use one database and many schemas and dump all the schemas at once: recovery would then be quite simple, loading the main dump into a development machine and then dumping and restoring just the schema needed. There is one additional step, but dumping all the schemas seems faster than dumping them one by one.
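A rough sketch of that backup/recovery flow (all paths, database names and schema names here are just examples; pg_dump's -n switch selects a single schema):

import os

# Nightly: one dump of the whole database, all schemas included
os.system("pg_dump -U postgres -f /backups/myapp.sql myapp")

# Recovering one customer: load the full dump on a development machine,
# then extract and restore only the schema that is needed
os.system("psql -U postgres -d myapp_scratch -f /backups/myapp.sql")
os.system("pg_dump -U postgres -n my_customer_foo_schema -f /tmp/foo.sql myapp_scratch")
os.system("psql -U postgres -d myapp -f /tmp/foo.sql")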

UPDATE 2012

Well, the application structure and design changed a lot during the last two years. I'm still using the one-db-with-many-schemas approach, but now I have one database for each version of my application:

Db myapp_01
    \_ my_customer_foo_schema
    \_ my_customer_bar_schema
Db myapp_02
    \_ my_customer_foo_schema
    \_ my_customer_bar_schema

For backups, I'm dumping each database regularly, and then moving the backups to the development server. I'm also using PITR/WAL backup but, as I said before, it's not likely I'll ever have to restore the whole database at once, so I'll probably drop it this year (in my situation it's not the best approach).

The one-db-many-schemas approach has worked very well for me so far, even though the application structure has totally changed. I almost forgot: I said all of my databases/schemas would always have the same structure; now, every schema has its own structure that changes dynamically in reaction to the users' data flow.

Strae
  • "all of my databases/schemas will ever have the same structure!" do you mean they all have the same structure? Or never? – Osama Al-Maadeed Jul 20 '09 at 13:15
  • Sorry, yes, they all have the same structure forever: if I change one, I'll change all of them ;) – Strae Jul 20 '09 at 13:52
  • If you have 1000 customers, does that mean you have to update 1000 schemas? – Joshua Partogi May 07 '10 at 05:10
  • @jpartogi: yes, but I have to update just the table structure, not the data. – Strae May 07 '10 at 14:48
  • So, what did you finally go with? One question, though: although query performance can be controlled by tablespaces, making multi-db and multi-schema roughly equivalent performance-wise, is there any impact on the WAL logs? – Kapil Jan 27 '12 at 08:04
  • @Kapil: well, the design of the application has changed radically over time... let me update my question with a few details – Strae Jan 27 '12 at 08:35
  • How do you ensure security between two schemas? One DB connection can update multiple schemas - this is a boon and a bane as well – deepg May 02 '19 at 16:48
  • I have the same issue for my SaaS application! I have one MySQL db for each customer, but I wonder how, in the future, I can change the schema for all users when each user has their own database. Please help me – Vahid Alvandi May 20 '21 at 04:43

8 Answers

184

A PostgreSQL "schema" is roughly the same as a MySQL "database". Having many databases on a PostgreSQL installation can get problematic; having many schemas will work with no trouble. So you definitely want to go with one database and multiple schemas within that database.
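For example (a sketch with made-up schema and credential names), once each customer has a schema in the same database, the application code stays identical for every customer; only the search_path changes:

import pg

# Connect as the customer's own limited role (placeholder credentials)
conn = pg.connect(dbname='myapp', host='localhost', user='customer_foo', passwd='secret')

# Unqualified table names now resolve inside this customer's schema,
# so the same queries work unchanged for every customer
conn.query("SET search_path TO customer_foo_schema")
conn.query("SELECT * FROM orders")   # actually customer_foo_schema.orders
conn.close()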

kquinn
  • 110
    "Having many databases on a PostgreSQL installation can get problematic" -- please clarify; is it problematic generally or in this specific case, and why? – akaihola Dec 20 '09 at 12:57
  • 46
    "The most common use case for using multiple schemas in a database is building a software-as-a-service application wherein each customer has their own schema. While this technique seems compelling, we strongly recommend against it as it has caused numerous cases of operational problems. For instance, even a moderate number of schemas (> 50) can severely impact the performance of Heroku’s database snapshots tool" https://devcenter.heroku.com/articles/heroku-postgresql – Neil McGuigan Oct 30 '13 at 21:15
  • 20
    @NeilMcGuigan: Interestingly, that seems to be the opposite conclusion from kquinn's (accepted) answer. – carbocation Mar 17 '15 at 17:54
  • 10
    For those reading it in the end of 2015. There is a `dblink` Postgres extension for querying across databases now (that's a reply to @mattb comment). – Kamil Gosciminski Dec 03 '15 at 08:38
  • 4
    @mattb For those reading it after 2014, Pg has foreign data wrappers starting at v9.3, and in particular the `postgres_fdw` allows querying across Pg databases (IMO better than `dblink`). – Thalis K. Jan 04 '16 at 11:08
  • 9
    Having one database with many schemas will make it virtually impossible to dump a single schema of those, though. I'm running a single postgres database with more than 3000 schemas and pg_dump just fails with an out of memory error if you try to dump a single schema. I wonder if this would be any different had I 3000 databases instead. – Machisuji Feb 06 '17 at 13:41
  • Why can it get problematic? I can see how multiple PostgreSQL clusters on one server can get problematic, but multiple databases (with one schema each, usually just putting stuff into `public`) in one cluster should be fine. – mirabilos Jan 18 '19 at 17:39
  • 6
    Digging into this, this is a rather interesting article about the matter https://influitive.io/our-multi-tenancy-journey-with-postgres-schemas-and-apartment-6ecda151a21f and also https://rob.conery.io/2014/05/28/a-better-id-generator-for-postgresql/ addresses some of the issues you may run into. The first article also has a comment about many-schemas-many-tables issues from Josh Berkus (https://medium.com/@jberkus/you-dont-say-above-what-version-s-of-postgresql-you-used-e3d84e5ad33) – Paul Feb 13 '19 at 08:55
39

Definitely, I'll go for the one-db-many-schemas approach. This allows me to dump the whole database but restore just one schema very easily, in two ways:

  1. Dump the whole db (all the schemas), load the dump into a new db, dump just the schema I need, and restore it back into the main db.
  2. Dump the schemas separately, one by one (but I think the machine will suffer more this way - and I'm expecting around 500 schemas!)

Also, googling around, I've seen that there is no built-in procedure to duplicate a schema (using one as a template), but many people suggest this approach:

  1. Create a template schema.
  2. When you need to duplicate it, rename it with the new name.
  3. Dump it.
  4. Rename it back.
  5. Restore the dump.
  6. The magic is done.

I've written a few lines of Python to do that; I hope they can help someone (written in two seconds, don't use it in production):

import os
import sys
import pg

# Take the new schema name from the first command-line argument
# (sys.argv[0] is the script name)
newSchema = sys.argv[1]

# Temporary folder for the dumps
dumpFile = '/test/dumps/' + str(newSchema) + '.sql'

# Settings
db_name = 'db_name'
db_user = 'db_user'
db_pass = 'db_pass'
schema_as_template = 'schema_name'

# Connection
pgConnect = pg.connect(dbname=db_name, host='localhost', user=db_user, passwd=db_pass)

# Rename the template schema to the new name
# (the name is interpolated straight into the SQL, so validate it strictly:
# as written this is vulnerable to SQL injection)
pgConnect.query("ALTER SCHEMA " + schema_as_template + " RENAME TO " + str(newSchema))

# Dump it
command = 'export PGPASSWORD="' + db_pass + '" && pg_dump -U ' + db_user + ' -n ' + str(newSchema) + ' ' + db_name + ' > ' + dumpFile
os.system(command)

# Rename it back to its default name
pgConnect.query("ALTER SCHEMA " + str(newSchema) + " RENAME TO " + schema_as_template)

# Restore the dump just taken, which recreates the schema under the new name
restore = 'export PGPASSWORD="' + db_pass + '" && psql -U ' + db_user + ' -d ' + db_name + ' < ' + dumpFile
os.system(restore)

# Delete the dump file
os.remove(dumpFile)

# Close the connection
pgConnect.close()
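To run it (assuming the script is saved as, say, clone_schema.py - the filename is arbitrary):

python clone_schema.py my_new_customer_schema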
Strae
39

I would recommend against the accepted answer - use multiple databases instead of multiple schemas - for this set of reasons:

  1. If you are running microservices, you want to enforce the inability to join between your "schemas", so that the data is not entangled and developers don't end up joining another microservice's schema and wondering why, when the other team makes a change, their stuff no longer works.
  2. You can later migrate to a separate database machine with ease if your load requires it.
  3. If you need a high-availability and/or replication setup, it's better to have separate databases that are completely independent of each other. You cannot replicate just one schema, only the whole database.
Alan Sereb
  • 2
    Totally depends on the service. Please note this is a pretty old question; but the service ended up needing to run queries between two "microservices" (that wasn't in the initial project). Using schemas made it kind of easy; if I remember correctly, it was just a matter of configuring the database users' permissions better. If we had chosen the "N databases" way, that would have been a bit harder (but definitely possible) – Strae Jan 12 '21 at 03:15
  • 2
    Nowadays the approach would be different though, probably exposing some kind of API and keeping the databases/schemas totally separate. – Strae Jan 12 '21 at 03:16
  • @Strae, you are right, it's an old question, however, I just want to bring it back up and was hoping to get some insight into the same question. I did some research and decided to put in my 10 cents. – Alan Sereb Jan 12 '21 at 15:04
  • 1
    Yep, and you're welcome to do so! My experience is that (well, for my situation) the difference wasn't much; using 1 db with multiple schemas helped with backups and cross-schema queries – Strae Jan 14 '21 at 05:50
  • 7
    My favourite answer. We shouldn't assume that allowing cross-schema queries is a good thing, in fact we should begin with the opposite assumption! – Ronnie Apr 21 '22 at 13:23
22

I would say, go with multiple databases AND multiple schemas :)

Schemas in PostgreSQL are a lot like packages in Oracle, in case you are familiar with those. Databases are meant to differentiate between entire sets of data, while schemas are more like data entities.

For instance, you could have one database for an entire application with the schemas "UserManagement", "LongTermStorage" and so on. "UserManagement" would then contain the "User" table, as well as all stored procedures, triggers, sequences, etc. that are needed for the user management.

Databases are entire programs, schemas are components.
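A sketch of that layout (all names invented for illustration, using the same PyGreSQL module as elsewhere on this page):

import pg

conn = pg.connect(dbname='myapp', host='localhost', user='postgres', passwd='secret')

# One database for the whole application, one schema per component
conn.query("CREATE SCHEMA user_management")
conn.query("CREATE TABLE user_management.users (id serial PRIMARY KEY, name text NOT NULL)")
conn.query("CREATE SCHEMA long_term_storage")
conn.close()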

Peter Mortensen
  • 4
    ... and so I'll have 1 database, with the schemas inside: $customer1_user_schema, $customer2_user_schema, $customer3_user_schema, $customer1_documents_schema, $customer2_documents_schema, $customer3_documents_schema? Hmm... doesn't seem a reliable way... and what about performance? And what about my application code (it will be PHP and Python)? So many schemas... – Strae Jul 20 '09 at 15:22
  • 7
    @Strae: I'm reading this as: each customer has its own database (customer1_database, customer2_database) and within those databases you have user_schema and documents_schema. – frankhommers Dec 12 '16 at 22:02
10

In a PostgreSQL context I recommend using one db with multiple schemas, as you can (for example) UNION ALL across schemas, but not across databases. For that reason, a database is really completely isolated from other databases, while schemas are not isolated from other schemas within the same database.

If you, for some reason, have to consolidate data across schemas in the future, it will be easy to do over multiple schemas. With multiple databases you would need multiple db connections and would have to collect and merge the data from each database "manually" in application logic.

The latter has advantages in some cases, but for the most part I think the one-database-multiple-schemas approach is more useful.
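A sketch of such a consolidation across two customer schemas (the schema names are taken from the question's example; the orders table, database name and credentials are assumed):

import pg

conn = pg.connect(dbname='myapp_01', host='localhost', user='postgres', passwd='secret')

# Possible within one database, impossible across two databases
result = conn.query("""
    SELECT 'foo' AS customer, count(*) AS n FROM my_customer_foo_schema.orders
    UNION ALL
    SELECT 'bar', count(*) FROM my_customer_bar_schema.orders
""")
print(result)
conn.close()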

emax
6

A number of schemas should be more lightweight than a number of databases, although I cannot find a reference which confirms this.

But if you really want to keep things very separate (instead of refactoring the web application so that a "customer" column is added to your tables), you may still want to use separate databases: I assert that you can more easily restore a particular customer's database this way -- without disturbing the other customers.

Troels Arvin
0

It depends on how the availability and connectivity of your system are designed, and on what data is stored in these databases. If the data is tightly linked, it can be kept on a single DB instance; but if it is only partially linked, and parts of the system should keep running when one part is down, it must be on different instances.

Detailed explanation:

1) When you use one DB instance with multiple databases on it, you run into the issue that if the instance goes down (due to a system crash, or the database server being down), all the databases on that instance go down with it, so all your applications are impacted.

2) When you use a separate DB instance for each database, then if any one database system goes down, your other applications are not impacted; only the application that depends on the downed DB suffers.

Also, in both cases I think you should use a replication mechanism so that load balancing can be done on the replica databases.

Naruto
0

Working with a single database with multiple schemas is a good practice in PostgreSQL because:

  1. No data is shared across databases in PostgreSQL.
  2. Any given connection to the server can access only the data in a single database, the one specified in the connection request.

Using multiple schemas lets you:

  1. Allow many users to use one database without interfering with each other.
  2. Organize database objects into logical groups to make them more manageable.
  3. Put third-party applications into separate schemas so they cannot collide with the names of other objects (a small sketch follows this list).
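As a sketch of point 3 (the addons schema name is arbitrary; hstore is a standard contrib extension that must be available on the server):

import pg

conn = pg.connect(dbname='myapp', host='localhost', user='postgres', passwd='secret')

# Install a contrib extension into its own schema, so its functions and
# types cannot collide with application object names
conn.query("CREATE SCHEMA addons")
conn.query("CREATE EXTENSION hstore SCHEMA addons")

# Reach it with a qualified name (or by adding 'addons' to search_path)
conn.query("SELECT addons.hstore('key', 'value')")
conn.close()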
samzna