
The PostgreSQL documentation page http://www.postgresql.org/docs/8.3/interactive/populate.html says that to disable autocommit in PostgreSQL you can simply place all insert statements between BEGIN; and COMMIT;

However, I have difficulty capturing any exceptions that may happen between the BEGIN; and COMMIT;, and if an error occurs (like trying to insert a duplicate PK) I have no way to explicitly call ROLLBACK or COMMIT. Although all insert statements are automatically rolled back, PostgreSQL still expects an explicit call to either COMMIT or ROLLBACK before it considers the transaction terminated. Otherwise, the script has to wait for the transaction to time out, and any statements executed thereafter will raise an error.

In a stored procedure you can use the EXCEPTION clause to handle this, but the same does not apply in my situation of performing bulk inserts. I have tried it, and the exception block did not work for me: every statement executed after the error occurs fails with the error:

ERROR:  current transaction is aborted, commands ignored until end of transaction block

The transaction remains open because it has not been explicitly finalised with a call to COMMIT or ROLLBACK.
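
For reference, this is the kind of handler I mean, and it is only available inside a PL/pgSQL body (the function name here is just illustrative):

CREATE OR REPLACE FUNCTION testing.load_friends() RETURNS void AS $$
BEGIN
    INSERT INTO testing.friends (id, name) VALUES (1, 'asd');
    INSERT INTO testing.friends (id, name) VALUES (2, 'abcd');
EXCEPTION
    WHEN unique_violation THEN
        -- everything inserted in this block is rolled back automatically
        RAISE NOTICE 'duplicate key, inserts in this block rolled back';
END;
$$ LANGUAGE plpgsql;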

Here is a sample of the code I used to test this:

BEGIN;
  SET search_path TO testing;
  INSERT INTO friends (id, name) VALUES (1, 'asd');
  INSERT INTO friends (id, name) VALUES (2, 'abcd');
  INSERT INTO friends (id, nsame) VALUES (2, 'abcd'); /* note the deliberate mistake in attribute name and also the deliberately repeated pk value number 2 */
EXCEPTION /* this part does not work for me */
    WHEN OTHERS THEN
        ROLLBACK;   
COMMIT;

When using such a technique, do I really have to guarantee that all statements will succeed? Why is this so? Isn't there a way to trap errors and explicitly call a rollback?

Thank you

prince

2 Answers


If you do it between BEGIN and COMMIT, then everything is automatically rolled back in case of an exception. Excerpt from the URL you posted: "An additional benefit of doing all insertions in one transaction is that if the insertion of one row were to fail then the insertion of all rows inserted up to that point would be rolled back, so you won't be stuck with partially loaded data."
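
A minimal sketch of what that means in practice, reusing the friends table from your question (the whole batch either commits or vanishes):

BEGIN;
  INSERT INTO friends (id, name) VALUES (1, 'asd');
  INSERT INTO friends (id, name) VALUES (2, 'abcd');
  INSERT INTO friends (id, name) VALUES (2, 'dupe'); -- fails and aborts the transaction
COMMIT; -- reported as ROLLBACK because the transaction is already aborted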

FooLman
  • All statements are rolled back, and that is right; however, the transaction is not considered finalised, and any statements meant to execute after that are not executed, because the above-mentioned error is raised for every new statement issued. – prince Mar 30 '12 at 15:24
  • What do you plan to accomplish? If there is other data that should be inserted regardless of the previous failures, you should add it in a different begin-commit block (see the sketch after these comments). – FooLman Mar 30 '12 at 15:34
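
A rough sketch of the split FooLman suggests, again assuming the friends table: each block commits or rolls back on its own, so a failure in one does not take the other down with it (in psql without ON_ERROR_STOP, processing simply continues with the next block):

BEGIN;
  INSERT INTO friends (id, name) VALUES (1, 'asd');
COMMIT;

BEGIN;
  INSERT INTO friends (id, name) VALUES (2, 'abcd');
COMMIT;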

When I initialize databases, i.e. create a series of tables/views/functions/triggers/etc. and/or load the initial data, I always use psql and its variables to control the flow. I always add:

\set ON_ERROR_STOP

to the top of my scripts, so whenever I hit any exception, psql will abort. It looks like this might help in your case too.
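
For example, a load script for your case might look like this (the file name is just an example); with ON_ERROR_STOP set, psql exits at the first failed statement instead of throwing "current transaction is aborted" for everything that follows:

-- load_friends.sql (hypothetical file name)
\set ON_ERROR_STOP on
BEGIN;
INSERT INTO testing.friends (id, name) VALUES (1, 'asd');
INSERT INTO testing.friends (id, name) VALUES (2, 'abcd');
COMMIT;

Run it with something like psql -f load_friends.sql yourdb; a failed INSERT makes psql stop with a non-zero exit code, and the still-open transaction is rolled back when the session ends.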

And when I need to do some exception handling, I use anonymous code blocks like this:

DO $$
DECLARE
    _rec record;
BEGIN
    FOR _rec IN SELECT schema_name FROM information_schema.schemata WHERE schema_name != 'master' LOOP
        EXECUTE 'DROP SCHEMA '||_rec.schema_name||' CASCADE';
    END LOOP;
EXCEPTION WHEN others THEN
    NULL;  -- swallow the error; everything done inside this block is rolled back
END;$$;
DROP SCHEMA master CASCADE;
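
Applied to your bulk-insert case, a rough sketch (assuming the testing.friends table from your question) could be:

DO $$
BEGIN
    INSERT INTO testing.friends (id, name) VALUES (1, 'asd');
    INSERT INTO testing.friends (id, name) VALUES (2, 'abcd');
    INSERT INTO testing.friends (id, name) VALUES (2, 'abcd');  -- duplicate PK
EXCEPTION WHEN unique_violation THEN
    -- the error is trapped here; all inserts in this block are rolled back
    -- and the surrounding session is left in a clean state
    RAISE NOTICE 'duplicate key, block rolled back';
END;$$;

Note that DO blocks need PostgreSQL 9.0 or later; on 8.3 you would have to wrap the same body in a throw-away plpgsql function instead.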
vyegorov