20

In one of the Stack Overflow podcasts (#18, I think) Jeff and Joel were talking about multi- vs. single-tenant databases. Joel mentioned that "FogBugz on Demand" used a database-per-customer architecture, and I was wondering if there is a point beyond which you'll need multiple database servers to distribute the load?

Darren
  • I found this discussion over on [dba.stackexchange.com](https://dba.stackexchange.com/questions/184617/is-there-any-limit-to-the-number-of-databases-you-can-put-on-one-sql-server) seemingly relevant and very interesting – Wayne Phipps Dec 11 '18 at 13:47

5 Answers

14

Technically the limit is 32,767 databases per SQL Server instance, but I doubt you could actually use an instance with more than about 2,000 active databases; at that point the server would probably become unresponsive.

You may be able to get close to 30,000 databases if they were all auto-closed and not in use. You can find more information about capacity limits here:

Maximum Capacity Specifications for SQL Server
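As a rough sketch, you can check how many databases an instance currently holds, and whether rarely-used ones are set to auto-close, with queries against `sys.databases` (available in SQL Server 2005 and later; `CustomerDb` below is a hypothetical per-customer database name):

```sql
-- Count databases on the instance
SELECT COUNT(*) AS database_count
FROM sys.databases;

-- List user databases with their AUTO_CLOSE setting
-- (database_id 1-4 are master, tempdb, model, and msdb)
SELECT name, is_auto_close_on
FROM sys.databases
WHERE database_id > 4;

-- Enable AUTO_CLOSE on one per-customer database
ALTER DATABASE CustomerDb SET AUTO_CLOSE ON;
```

Note that AUTO_CLOSE trades idle-database overhead for open/close churn, so it helps most when only a small fraction of databases are active at once.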

Christian C. Salvadó
  • 807,428
  • 183
  • 922
  • 838
  • We have 2,300 on ours and it works like a charm, although we never have more than about 100 open at any one time. – Beep beep Mar 14 '14 at 16:48
  • Sorry to comment on old discussion. In the attached link it said "User databases per instance of SQL Server, including data-tier applications = 50". It seems very limited. Am I misreading it? – Mark Nov 19 '15 at 02:55
  • That's under the heading "The following table specifies the maximum sizes and numbers of various objects that were tested in the SQL Server Utility." Under "Database Engine Objects", "Databases per instance of SQL Server" is 32,767. The table you're looking at represents what was tested with SQL Server Utility, not the maximum limit. – AaronHS Feb 05 '16 at 05:24
  • Commenting too on an old post, we have 12,000+ running in a SQL Server 2014 Standard. Works great, load is pretty light on individual databases. However, some tools have a hard time working with such amount of databases (We use SQL Multi-Script to deploy database changes and it's kinda slow building the distribution list) – MaxiWheat Jun 04 '19 at 18:17
  • Would love to know where the 2000 number comes from and if it's still reasonable in 2023. – Emperor Eto Feb 16 '23 at 11:24
4

Joel has talked about this elsewhere (sorry, no reference handy) and said that before switching to MS SQL 2005, the management console (and the backend) had problems attaching more than 1,000 or 2,000 databases. It seems that 2005, and probably 2008 again, improved on these numbers.

As with all performance questions, the answer depends on your actual hardware and workload, and can only be settled definitively by local benchmarking and system monitoring.

David Schmitt
3

I'd think mostly it depends on the memory limitations of the machine. SQL Server likes to keep as much cached in memory as possible, and as you add databases you reduce the amount of memory available.
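To see how that plays out in practice, you can ask the buffer pool how much cache each database is currently using. This is a common diagnostic query over `sys.dm_os_buffer_descriptors` (SQL Server 2005+); each buffer page is 8 KB:

```sql
-- Approximate buffer pool usage per database, in MB
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```

If a handful of databases dominate the cache while the rest sit near zero, adding more mostly-idle databases costs little memory; if usage is spread evenly, every new database squeezes the others.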

Joel Coehoorn
  • That should not be a factor. You should always be developing and optimizing against a SQL instance with artificially limited memory for this exact reason. – Emperor Eto Feb 16 '23 at 11:29
3

You might also want to consider the number of connections to the SQL Server instance. Beyond 500-1,000 concurrent connections it gets very clogged and slow, so that is a limitation as well.
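A rough way to see where you stand is to compare the current connection count against the configured maximum (both queries assume SQL Server 2005 or later):

```sql
-- Total number of current connections to the instance
SELECT COUNT(*) AS current_connections
FROM sys.dm_exec_connections;

-- Configured connection limit (32,767 by default)
SELECT @@MAX_CONNECTIONS AS max_connections;
```

With a database-per-customer design, connection pooling matters too: pools are keyed by connection string, so each customer database gets its own pool and idle pooled connections can add up quickly.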

Jonas Lincoln
1

I think it is more a question of the load on the databases. As was said above, with no load the answer is 32,767. Under a heavy load the practical number comes down, eventually to one, or even less than one.

Craig