
I need a little help here because I'm struggling a bit to find the best solution for my problem. I've googled and haven't found any enlightening answer.

So, first of all, I'll explain the idea.

1 - I have a Java application that inserts data into my database (Oracle DB) using JDBC.

2 - My database is logically split in two: one part contains tables with exported information (from another application), and the other part contains tables that represent some reports.

3 - My Java app only inserts information into the export tables.

4 - I've developed some packages that transform the data from the export tables into the report tables (generating some reports).

5 - These packages are scheduled to execute 2 or 3 times a day.

So, my problem is that when the transformation task starts, I want to prevent new DML operations. Then, when the transformation stops, all new data that was supposed to be inserted/updated during that time should then be inserted into the export tables.

I thought of two approaches:

1 - during the transformation, divert the DML operations to a temporary table

2 - lock the tables, but I don't have much experience with this. My main question is: can I force DML operations in JDBC to wait until the lock is released? I haven't tried it yet, but I've read here and there that after some time a lock wait timeout exception (or something like that) is thrown.

Can anyone more experienced give me some advice?

If you have any doubts about what I'm trying to do, just ask.

andrealmeida

1 Answer


Do not try locking tables as a solution. Sadly, that approach is common, but it is rarely necessary. Just a few ideas:

  • at the start of the transformation, copy the data from the export table (SELECT *) into a global temporary table, then execute your transformation packages against that temp table (see the first sketch after this list)
  • create a materialized view defined as SELECT * FROM the export table. Investigate the option to refresh on commit, but it sounds like you need to refresh it just before your transformation runs (see the materialized view sketch below)
  • analyze your exported data. If it is like many other cases, most of the data will never change once imported; only new data needs to be analyzed. To aid in processing, add a timestamp column called date_last_modified and a trigger on the table: when a row is inserted or updated, set date_last_modified. This lets you work with the smallest possible data set of "only changed records" (see the trigger sketch below)
  • you should also investigate using BULK COLLECT to optimize your cursor. This lets you fetch a group of records all at once, sort of a snapshot of the data at a point in time (see the last sketch below)
  • I believe you are overthinking this. If you fetch records one at a time, Oracle's read consistency gives you the state of each record as of the moment your query began. If you BULK COLLECT a group of records, they go into memory and will, again, represent the state as of a point in time.
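For the global temporary table idea, here is a minimal sketch; export_data and export_data_gtt are hypothetical names, so substitute your own:

    -- One-time setup: a session-private staging table with the same
    -- structure as the export table (WHERE 1 = 0 copies no rows).
    CREATE GLOBAL TEMPORARY TABLE export_data_gtt
      ON COMMIT PRESERVE ROWS
      AS SELECT * FROM export_data WHERE 1 = 0;

    -- At the start of each transformation run, take a snapshot:
    INSERT INTO export_data_gtt SELECT * FROM export_data;

    -- Then run the transformation packages against export_data_gtt;
    -- new DML on export_data no longer affects this run.

ON COMMIT PRESERVE ROWS keeps the snapshot alive for the whole session instead of discarding it at the first commit.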
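The materialized view variant could look like this (same hypothetical names). Since you only need the data to be current just before the transformation, an ON DEMAND complete refresh is enough:

    -- One-time setup: a point-in-time copy of the export table.
    CREATE MATERIALIZED VIEW export_data_mv
      BUILD IMMEDIATE
      REFRESH COMPLETE ON DEMAND
      AS SELECT * FROM export_data;

    -- Just before the transformation runs, refresh the snapshot:
    BEGIN
      DBMS_MVIEW.REFRESH('EXPORT_DATA_MV', method => 'C');  -- 'C' = complete
    END;
    /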
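For the date_last_modified idea, one possible shape (column and trigger names are illustrative):

    ALTER TABLE export_data
      ADD (date_last_modified TIMESTAMP DEFAULT SYSTIMESTAMP);

    CREATE OR REPLACE TRIGGER trg_export_data_dlm
      BEFORE INSERT OR UPDATE ON export_data
      FOR EACH ROW
    BEGIN
      :NEW.date_last_modified := SYSTIMESTAMP;
    END;
    /

    -- The transformation then only needs rows changed since the last run:
    --   SELECT * FROM export_data
    --    WHERE date_last_modified > :last_run_timestamp;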
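And a BULK COLLECT sketch; the LIMIT clause bounds memory while still fetching each batch as an in-memory snapshot:

    DECLARE
      CURSOR c_export IS SELECT * FROM export_data;
      TYPE t_export_tab IS TABLE OF export_data%ROWTYPE;
      l_rows t_export_tab;
    BEGIN
      OPEN c_export;
      LOOP
        FETCH c_export BULK COLLECT INTO l_rows LIMIT 500;  -- 500 rows at a time
        EXIT WHEN l_rows.COUNT = 0;
        FOR i IN 1 .. l_rows.COUNT LOOP
          NULL;  -- transform l_rows(i) into the report tables here
        END LOOP;
      END LOOP;
      CLOSE c_export;
    END;
    /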

The best way to feel more comfortable about this is to set up a test case. Set up a cursor that sleeps during every processing cycle, open another session, and change the data that is being processed. See what happens. A sketch of such a test follows.
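A minimal version of that test, assuming the export table has an ID column (purely illustrative); on releases before 18c use DBMS_LOCK.SLEEP instead of DBMS_SESSION.SLEEP:

    -- Session 1: crawl the table slowly, printing what the cursor sees.
    SET SERVEROUTPUT ON
    BEGIN
      FOR r IN (SELECT id FROM export_data) LOOP
        DBMS_OUTPUT.PUT_LINE('Processing id ' || r.id);
        DBMS_SESSION.SLEEP(5);  -- pause so you have time to interfere
      END LOOP;
    END;
    /

    -- Session 2, while session 1 runs: UPDATE/INSERT and COMMIT on
    -- export_data, then watch whether session 1's output changes.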

kevinskio