This appears to be impossible in practical terms (i.e. without a very expensive brute-force approach along the lines of dan04's rather brilliant answer).
The source (1, 2) for the sqlite3 module contains no reference either to SQLITE_MAX_COLUMN or to compile-time limits in general; neither does there appear to be any way to access them from within the SQL interface.
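For completeness: the one compile-time detail that is reachable from Python is SQLite's list of build options via PRAGMA compile_options, but on most builds MAX_COLUMN is only listed there if it was explicitly overridden when the library was compiled, so this usually tells you nothing. A minimal sketch (the loop and filter are just illustrative):

import sqlite3

db = sqlite3.connect(':memory:')
# PRAGMA compile_options lists options that were set when this SQLite
# library was built; MAX_COLUMN typically only appears if it was
# explicitly defined, so on a stock build this prints nothing.
for (option,) in db.execute('PRAGMA compile_options'):
    if option.startswith('MAX_COLUMN'):
        print(option)  # e.g. 'MAX_COLUMN=2000'

On a stock build the loop prints nothing, which is why the brute-force probe below is still needed.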
UPDATE:
A simple modification of dan04's solution to use a binary search speeds things up considerably:
import sqlite3

def max_columns():
    db = sqlite3.connect(':memory:')
    low = 1
    high = 32767  # hard limit <http://www.sqlite.org/limits.html>
    while low < high - 1:
        guess = (low + high) // 2
        try:
            db.execute('CREATE TABLE T%d (%s)' % (
                guess, ','.join('C%d' % i for i in range(guess))
            ))
        except sqlite3.DatabaseError as ex:
            if 'too many columns' in str(ex):
                high = guess
            else:
                raise
        else:
            low = guess
    return low
Running the code above through timeit.repeat():
>>> max_columns()
2000
>>> import timeit
>>> timeit.repeat(
... "max_columns()",
... setup="from __main__ import max_columns",
... number=50
... )
[10.347190856933594, 10.0917809009552, 10.320987939834595]
... which comes to an average of 30.76 / 150 ≈ 0.205 seconds per call (three repeats of 50 calls each, on a 2.6 GHz quad-core machine) - not exactly fast, but likely more usable than the 15-20 seconds of the "kick it 'til it breaks" counting-from-one method.
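Since the limit is a compile-time constant of the SQLite library in use, the probe only ever needs to run once per process, so repeated callers can simply cache the result; a minimal sketch using functools.lru_cache on top of the max_columns() above (the wrapper name is just illustrative):

import functools

@functools.lru_cache(maxsize=None)
def max_columns_cached():
    # The column limit is fixed for a given SQLite build, so probing once
    # and memoizing the result is safe.
    return max_columns()

After the first call, max_columns_cached() returns immediately without touching the database again.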