4

A question from a SQL and database newbie:

I have read some articles about database concurrency, but they mostly describe simultaneous updates.

However, I'm only interested in fetching information from the database.

So, here are the questions:

Is it possible to run many SELECTs against a database simultaneously? Will these SELECTs interfere with each other? Is case 2 below possible?

Usual case (1):

  • [Time elapsed: 0 seconds]
  • SELECT something1 FROM table1
  • Get result for the first SELECT
  • [Time elapsed: 1.5 seconds]
  • SELECT something2 FROM table1
  • Get result for the second SELECT
  • [Time elapsed: 3 seconds]

A case with simultaneous queries (2):

  • [Time elapsed: 0 seconds]
  • SELECT something1 FROM table1
  • [Time elapsed: 0.001 seconds]
  • SELECT something2 FROM table1
  • Get result for the first SELECT
  • [Time elapsed: 1.5 seconds]
  • Get result for the second SELECT
  • [Time elapsed: 1.5001 seconds]
Kirill

4 Answers

6

Yup, it is possible, as long as you have two separate connections open and you run your two SELECT statements through those separate connections.
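
A minimal sketch of what that looks like, assuming two separate sessions (for example, two psql windows) connected to the same database; the table and column names are the placeholders from the question:

-- Connection 1:
SELECT something1 FROM table1;

-- Connection 2, opened separately and started before connection 1 returns:
SELECT something2 FROM table1;

Plain SELECTs in PostgreSQL only take ACCESS SHARE locks, so the two statements run concurrently without blocking each other.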

bpgergo
  • Great! As I have read in this article (http://en.wikipedia.org/wiki/Database_connection), opening a connection is quite an expensive operation, isn't it? And as I understand it, this problem can be solved with connection pooling. Am I right? – Kirill Jun 23 '11 at 15:47
  • Yes, opening a new connection is relatively expensive. And yes, pooling _does_ solve this problem, as your application can reuse already opened connections. – bpgergo Jun 24 '11 at 09:23
  • Normally connection overhead doesn't become significant enough that adding a connection pooler is worthwhile until you have hundreds of active connections at once. – Greg Smith Jun 25 '11 at 01:24
0

Now, with PostgreSQL 9.6+, parts of a SQL query can be parallelized with nearly zero effort from the user (no DBLink, no specialized query tuning).

Read more here.
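
A rough sketch of how to see this in action (big_table is a placeholder name; requires PostgreSQL 9.6 or later):

-- Allow up to 4 parallel workers per query in this session
SET max_parallel_workers_per_gather = 4;

-- If the planner chooses a parallel plan, the output contains a Gather node
-- with a "Workers Planned: ..." line
EXPLAIN SELECT count(*) FROM big_table;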

Robins Tharakan
0

PostgreSQL coordinates multiple statements executing at the same time using an approach called MVCC (Multi-Version Concurrency Control). For the most common cases, multiple readers or writers co-exist without interfering with one another. It is possible, though, for two sessions doing a mix of read and write operations to get in each other's way. For example, a second session trying to UPDATE a row already updated by a still-running transaction will block, waiting for the first session to either commit or abort.
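
A quick sketch of that blocking case, using the question's placeholder table plus a made-up column and values:

-- Session 1: updates a row and leaves the transaction open
BEGIN;
UPDATE table1 SET something1 = 'new' WHERE id = 1;

-- Session 2: touches the same row and blocks until session 1
-- issues COMMIT or ROLLBACK
UPDATE table1 SET something1 = 'other' WHERE id = 1;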

There are some other situations where two sessions doing something similar will interact, but those interactions exist to improve performance. For example, if one session is doing a giant sequential scan of an entire table, and a second session then starts to do the same thing, the second one will join in and share the work already being done by the first. This is all transparent to you, but can dramatically speed up how fast each query finishes.
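
That synchronized-scan behavior is controlled by a setting that is on by default; you can check it with:

SHOW synchronize_seqscans;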

The main thing to be concerned about when you have multiple connections going at once is when you start running UPDATE and DELETE statements, which take locks on items in the database. These can easily interfere with one another, and what usually happens is that the second statement ends up waiting for the first to finish before it proceeds. This is also covered in the documentation's Concurrency Control chapter, which describes the MVCC behavior mentioned above.
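
If you want to watch that kind of interference happening, one way (a sketch) is to look at the lock table while the statements are running; rows with granted = false are statements waiting on a lock held by someone else:

SELECT locktype, relation::regclass, pid, mode, granted FROM pg_locks;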

Greg Smith
-1

Yes, you can, by adding this hint (note that stock PostgreSQL ignores planner hints, so this needs a hint extension such as pg_hint_plan):

SELECT /*+ PARALLEL(table) */ * FROM table;

Or

SELECT /*+ PARALLEL(table 4) */ * FROM table;

where 4 is the number of parallel workers.

You can show the maximum number of allowed worker processes with:

SHOW max_worker_processes;