
I have a web app with thousands of schemas (one per user).

I have seen this post:

Hibernate and Multi-Tenant Database using Schemas in PostgreSQL

which mentions the search_path approach for querying a particular table in a given schema.

I am wondering how to query all the schemas at the same time in Postgres.

Do I need to write a Python script (for example) that runs the query against each schema one by one and then gathers the results, or can I do it all directly in SQL?

For example, if I want a result with userid and number_of_posts for every user, where each user has a separate schema, how would I write that in SQL? I don't see how to change the search_path automatically within a single query.
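
Something like this dynamic-SQL sketch is what I have in mind (assuming, purely for illustration, that every user schema contains a posts table with a userid column), but I don't know whether it is the right approach:

    -- Hypothetical example: loop over every user schema and collect
    -- userid plus post count from an assumed "posts" table.
    CREATE OR REPLACE FUNCTION posts_per_user()
    RETURNS TABLE (schema_name text, userid int, number_of_posts bigint)
    LANGUAGE plpgsql AS $$
    DECLARE
        s text;
    BEGIN
        FOR s IN
            SELECT nspname FROM pg_catalog.pg_namespace
            WHERE nspname NOT IN ('public', 'information_schema')
              AND nspname NOT LIKE 'pg_%'
        LOOP
            RETURN QUERY EXECUTE format(
                'SELECT %L::text, userid, count(*) FROM %I.posts GROUP BY userid',
                s, s);
        END LOOP;
    END;
    $$;

    -- Then: SELECT * FROM posts_per_user();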

Is there a command in Postgres to list all the existing schemas?

Thanks a lot

  • SELECT * FROM information_schema.schemata; However, I cannot imagine why you would need one schema per user. A data model like that creates, IMHO, a huge number of problems, and this is just one of them. – Frank Heikens Apr 26 '22 at 08:34
  • Thank you. From what I've read on the web, it's a common database architecture for managing user data when you need data isolation... –  Apr 26 '22 at 10:21
  • Different source, different opinion... We handle millions of users, and by law they are not allowed to see each other's data. But different schemas would be the worst possible solution. A simple "WHERE user = ?" fixes the problem. – Frank Heikens Apr 26 '22 at 10:42
  • You seem experienced, from what I read in your profile... what would you suggest? Your WHERE user = ? only deals with queries, right? –  Apr 26 '22 at 11:07
  • Yes, access to the data is controlled in the SQL you use for all your CRUD operations. And you don't need DDL anymore when a new customer comes in: you don't have to create a schema, tables, indexes, or other objects for that user (see the sketch after these comments). – Frank Heikens Apr 26 '22 at 13:15
  • Aren't some isolation regulations stricter than others, and wouldn't that be the reason to use schemas at a large scale? Do you think putting all the users in one big table is good enough? What about scale? Are normalization, indexing, partitioning, and fine-tuning the Postgres setup (PgBouncer, buffers, cache, max connections) enough? –  Apr 26 '22 at 17:49
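
For comparison, the shared-table design suggested in the comments above might look roughly like this (table and column names are made up for the example):

    -- One set of tables for all users; isolation comes from the WHERE
    -- clause (and, if needed, row-level security), not from schemas.
    CREATE TABLE posts (
        post_id bigserial PRIMARY KEY,
        userid  int NOT NULL,
        body    text NOT NULL
    );
    CREATE INDEX posts_userid_idx ON posts (userid);

    -- Post count per user: one plain query instead of one query per schema.
    SELECT userid, count(*) AS number_of_posts
    FROM posts
    GROUP BY userid;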

0 Answers