
I think I am running into a deadlock-like situation while trying to persist objects concurrently using the @Async annotation.

I used a component like

@Component
public class AsyncInserter {

    @Autowired
    private PersonRepository repository;

    @Async
    @Transactional
    public CompletableFuture<Boolean> insert(List<Person> persons) {
        repository.saveAll(persons);
        repository.flush();
        return CompletableFuture.completedFuture(Boolean.TRUE);
    }
}

I'm calling this from a service layer, which is defined as:

@Component
public class PersonServiceImpl {

    @Autowired
    private AsyncInserter asyncInserter;

    @Transactional
    public void performDBOperation(List<Person> persons) {
        List<CompletableFuture<Boolean>> statuses = new ArrayList<>();
        deletePersons(/** some criteria */);
        List<List<Person>> subPersonList = Lists.partition(persons, 100);
        subPersonList.forEach(list -> statuses.add(asyncInserter.insert(list)));
    }
}

As you can see, I have a delete followed by concurrent inserts, and I want the whole operation to be completely atomic.

But what I observe is that the delete and a bunch of inserts are submitted but never committed.

I think some lock is preventing the commit, and I can only reproduce this when running in concurrent threads. Per https://dzone.com/articles/spring-and-threads-transactions, when a new thread is created, the outer transaction is not propagated to it; instead, a new transaction is created.

This design seems to have a flaw. Do I need to commit the delete first before submitting the inserts? But then how do I keep the operation atomic if one of the inserts fails?
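In case it matters: my caller also never waits on the futures it collects. A minimal, pure-Java sketch of joining them (the completed futures here are just stand-ins for what AsyncInserter would return) would be something like the following, though as I understand it each insert would still run in its own transaction, so this alone would not make the operation atomic:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class JoinFutures {
    public static void main(String[] args) {
        // Stand-ins for the insert futures returned by AsyncInserter.insert(...)
        List<CompletableFuture<Boolean>> statuses = List.of(
                CompletableFuture.completedFuture(Boolean.TRUE),
                CompletableFuture.completedFuture(Boolean.TRUE));

        // Block until every insert future finishes; a failed future would
        // surface its exception here
        CompletableFuture.allOf(statuses.toArray(new CompletableFuture[0])).join();

        // Check that every insert reported success
        boolean allOk = statuses.stream().allMatch(CompletableFuture::join);
        System.out.println(allOk); // prints "true"
    }
}
```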

I modified the code to run both the delete and the inserts in a single thread, and it works, which is expected anyway. But what I noticed is that after inserting the records, it takes much longer to commit: usually 12 seconds more to commit 10K records, which is a huge overhead. Is there any way to improve this?
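For what it's worth, I wondered whether enabling Hibernate's JDBC batching (which I understand is off by default) would help with the commit overhead; something like this in application.properties:

```properties
# Flush inserts to the driver in batches instead of one statement at a time
spring.jpa.properties.hibernate.jdbc.batch_size=100
# Group statements by entity so batches are not broken up
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
```

I'm not sure this applies to my setup, though; I've read that Hibernate disables JDBC batching for entities with IDENTITY-generated ids, so whether this helps presumably depends on the Person id generation strategy.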

By the way, I'm using Hikari Connection pool.

Thanks

  • This might seem unrelated but performing your db actions asynchronously here is unlikely to be faster. In any case it is not safe to share connections across threads https://docs.oracle.com/javadb/10.8.3.0/devguide/cdevconcepts89498.html If you want to speed things up you should examine the sql that is outputted and depending on the DB write explicit batch insert sql. – Deadron Nov 02 '18 at 17:34
  • Maybe I don't understand this clearly. At what point is the same connection being used across different db operations? I thought a separate thread opens a new transaction on a new connection – DBreaker Nov 02 '18 at 17:47
  • Oh, I thought you were saying you wanted to have the transaction span threads. – Deadron Nov 02 '18 at 17:52
  • https://stackoverflow.com/questions/24916104/how-do-i-properly-do-a-background-thread-when-using-spring-data-and-hibernate/24917195#24917195 – Deadron Nov 02 '18 at 17:53
  • Well, initially I thought so, that a transaction can span threads. But evidently it's not the case, as I experienced and also read in the dzone article I referred to. My intention is to achieve a faster DB operation (both delete and insert) on huge data, 10K records or more. So I thought concurrent insertion would help. But I don't know how to achieve atomicity of the delete and the concurrent inserts together. – DBreaker Nov 02 '18 at 17:55
  • Batching is usually the answer to faster database interactions. Many databases have batch insert syntax. Any indexes/triggers on the table can also add to insertion times. You should benchmark inserts against your db independently of your code to get a good idea of the maximum possible performance. Barring any major mistakes in your app code, the database configuration will determine your maximum insertion performance. – Deadron Nov 02 '18 at 18:10
