
I am creating a master database using SQLite. This single file contains a dozen tables. I want everybody in my group to have access to it (either through Python or through the SQLite command line), so I was thinking of simply putting the file in a group-readable directory and calling it 'master.db'.

Now, the buzzword 'ODBC' comes to mind.

I remember my old Windows days (thank God they're gone), when I could "register" an Excel spreadsheet as one of my "data sources"; it would then be visible from within any application.

Can I take advantage of this in the Linux world? Does this make any sense?

Many thanks.

Escualo
  • Why SQLite? Why not MySQL or PostgreSQL or some database engine designed specifically for multiple users? – S.Lott May 21 '10 at 21:02
  • @S.Lott: because it does not require a server process and the files can be backed up effortlessly. Given our needs, we wanted a zero-configuration database; the simpler the better. – Escualo May 21 '10 at 21:54
  • I wouldn't use SQLite for a multi-user database, as it only allows one process at a time to write to a table. At the very least I would use something like SQLAlchemy to access the database, so you can easily switch to another backend. AFAIK SQLAlchemy supports ODBC, too. – Florian Diesch May 21 '10 at 22:11
  • MySQL can be backed up effortlessly and is often pre-installed on Linux. "Zero-configuration" doesn't really make sense, since you have to configure ODBC. If you're going to configure ODBC over a public, shared file, why not configure one central MySQL DB? – S.Lott May 22 '10 at 11:03

2 Answers


If it's only a small number of people accessing your db then you should be OK. However, SQLite sucks at handling large numbers of concurrent accesses - I switched from SQLite to MySQL when I had lots of concurrent access, because the db kept getting locked.

For many concurrent accesses, you need something in between that takes the brunt of the requests, like a (small) webserver or even a Python program that accepts all those requests and regulates access to the SQLite db. Use an ORM like SQLAlchemy or SQLObject that can access SQLite in a thread-safe way.
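For example, a minimal sketch with SQLAlchemy - the file path is just a placeholder for wherever your master.db lives, and the query simply lists the tables so it doesn't assume anything about your schema:

    from sqlalchemy import create_engine, text

    # Open the shared, group-readable SQLite file (placeholder path).
    # To move to MySQL/PostgreSQL later, only this URL has to change.
    engine = create_engine("sqlite:////shared/group/master.db")

    # SQLAlchemy hands out connections and manages transactions for you,
    # which makes coordinating several readers/writers less painful.
    with engine.connect() as conn:
        rows = conn.execute(
            text("SELECT name FROM sqlite_master WHERE type='table'"))
        for (name,) in rows:
            print(name)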

As for using ODBC on Linux to make SQLite accessible as a data source to your Linux users: ODBC certainly works on Linux, and an SQLite ODBC driver exists ( http://www.ch-werner.de/sqliteodbc/ ), but I don't think you can 'register' your SQLite file via ODBC into a central list of data sources the way you remember from Windows - that's pure Windows...
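If you do want to try the ODBC route from Python anyway, a rough sketch with pyodbc could look like the following. Note that pyodbc itself, the driver name "SQLite3", and the file path are all assumptions on my part - the driver name in particular depends on how the sqliteodbc driver is registered in your odbcinst.ini:

    import pyodbc

    # DSN-less connection string; "SQLite3" is the name the ch-werner
    # driver is commonly registered under -- adjust for your system.
    conn = pyodbc.connect("DRIVER={SQLite3};DATABASE=/shared/group/master.db")

    cursor = conn.cursor()
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    for row in cursor.fetchall():
        print(row[0])   # first column: the table name

    conn.close()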

Alex Boschmans
  • Please define "lots" of concurrent access. Since SQLite locks the whole database for a write, a few writers may not interfere much with each other. But how many became a problem for you? – S.Lott May 22 '10 at 11:05
  • At that time I was using threads to query a webservice and store the results in the SQLite database. I had 4 threads running and ran into "database is locked" errors when I had more than 10 keywords queueing up (which is strange, methinks, as I did not change the number of threads, just the number of keywords to look for). I found a post similar to http://stackoverflow.com/questions/524797/python-sqlite-and-threading that discussed this and recommended switching to MySQL. I did that and have had no more problems. – Alex Boschmans May 25 '10 at 21:37

First off, here's the Wikipedia article on ODBC: https://en.wikipedia.org/wiki/Open_Database_Connectivity . It's a standard software API for interacting with databases. However, I doubt you need it if you just want to use SQLite and Python together: Python has a built-in module called sqlite3 that should get you most of the way. Good luck.
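For instance, a minimal sketch with the built-in module - the path to master.db is a placeholder, and the query just lists the tables in the file:

    import sqlite3

    # Connect to the shared, group-readable database file.
    # The path is a placeholder -- use wherever master.db actually lives.
    conn = sqlite3.connect("/shared/group/master.db")

    # List the tables in the database.
    for (name,) in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
        print(name)

    conn.close()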

Ben Hayden
  • 1,349
  • 9
  • 15