
I'm performing an update via a method using Hibernate and the EntityManager.

This update method is called multiple times (within a loop).

It seems like when I execute it the first time, it locks the table and does not free it.

When I try to update the table via SQL Developer after closing the application, the update hangs, so the table is apparently still locked.

What do you see as a solution to this problem? If you need more information, let me know.

Class

@Repository
@Transactional(propagation = REQUIRES_NEW)
public class YirInfoRepository {

    @Autowired
    EntityManager entityManager;

    @Transactional(propagation = REQUIRES_NEW)
    public void setSent(String id) {
        String query = "UPDATE_QUERY";
        Query nativeQuery = entityManager.createNativeQuery(String.format(query, id));
        nativeQuery.executeUpdate();
    }
}

UPDATE

After waiting more than an hour, I launched the application again; it worked fine once, but now it hangs again.


UPDATE 2 -- I'll give a maximum bounty to whoever helps me solve this

In another place I use an application-managed entity manager, and it still gives me the same type of error.

public void fillYirInfo() {
    File inputFile = new File("path");
    try (InputStream inputStream = new FileInputStream(inputFile);
         BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(inputStream))) {
        bufferedReader.lines().skip(1).limit(20).forEach(line -> {
            String[] data = line.split(",");
            String rnr = data[0];
            String linked = data[1];
            String email = data.length > 2 ? data[2] : "";

            String insuredId = insuredPeopleRepository.getInsuredIdFromNationalId(rnr);
            int modifiedCounter = 0;

            if (!isNullOrEmpty(insuredId)) {
                EntityManager entityManager = emf.createEntityManager();
                EntityTransaction transaction = entityManager.getTransaction();
                Query nativeQuery = entityManager.createNativeQuery(
                        "QUERY"
                );
                transaction.begin();
                nativeQuery.executeUpdate();
                entityManager.flush();
                transaction.commit();
                entityManager.close();
            }

            System.out.println(modifiedCounter + " rows modified");
        });
    } catch (IOException e) {
        e.printStackTrace();
    }
}
Yassin Hajaj
  • Looks like you are not committing your transactions. No idea how to do that in Hibernate. Locking in Oracle is done at row level, unless you lock the entire object by doing DDL or taking an exclusive lock. – Fábio Galera Mar 20 '18 at 16:13
  • @FábioGalera This is a container managed transaction as I understand it so it should be committed by the container automatically when the method is finished. – Yassin Hajaj Mar 20 '18 at 16:15
  • @FábioGalera What's strange is that the updates are made on different rows. – Yassin Hajaj Mar 20 '18 at 16:16
  • Why are you using @Transactional at both Class and method Level ? https://stackoverflow.com/questions/23132822/what-is-the-difference-between-defining-transactional-on-class-vs-method – MyTwoCents Mar 20 '18 at 16:28
  • @Ashish451 Because I was out of options. I first tried it on the method. – Yassin Hajaj Mar 20 '18 at 16:37
  • Could you share your UPDATE statement ? Also, when you see the locks, could you share the v$lock information ? – Fábio Galera Mar 20 '18 at 16:51
  • Why are you marking repository methods as `@Transactional`? CRUD methods on repository instances are transactional by default, meaning they have propagation REQUIRED. – Amit Naik Mar 20 '18 at 23:45
  • In your second update, it looks like you are lacking some error handling, if `executeTransaction` throws an exception the transaction is not rolled back. Shouldn't be the cause of the behaviour you are seeing, but might lead to connection leaks.. – Tobb Mar 22 '18 at 16:38
  • You could try to enable debug/trace logging from both spring-tx and Hibernate, might be some clues there.. At least you will be able to see the last attempted thing before things stop, if that's a query you could try to run it directly in the database. – Tobb Mar 22 '18 at 16:39
  • 1
    Also, running update queries natively is not the preferred way of using Hibernate/JPA. You should load the entities from the database, mutate them in your Java code, as long as this is done within a transaction, Hibernate will make sure the changes are written to the database when the transaction commits. Doing updates directly might mess with Hibernate's caches, which might lead to strange behaviour. – Tobb Mar 22 '18 at 16:42
  • Not sure why the transactions aren't releasing for you, but I've encountered Oracle table-locking before due to the `INITRANS` value on the table. This value controls the number of locks allowed per page, and once they run out Oracle may lock the table instead. Changing the `INITRANS` value can mean restructuring table pages, so it may take a while for a large table. – df778899 Mar 22 '18 at 20:35
  • Are you sure you need REQUIRES_NEW propagation on each call for an update? What is your transaction configuration? Also, do you have any create/update strategy within your application/entity-manager instantiation (an import.sql in persistence.xml, Liquibase, or something like that)? A table is locked only during DDL queries, not specific updates, so there might be a problem with your initialization. Can you share your configuration? – Ilya Dyoshin Mar 23 '18 at 19:13
  • For me, I faced a similar case where a job locked a record. – Amir Azizkhani Oct 06 '21 at 11:23
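The v$lock information asked about in the comments can be inspected with a diagnostic query along these lines (Oracle system views; run it in SQL Developer while the update hangs):

```sql
-- Sessions that are currently blocked, and which session blocks them:
SELECT sid, serial#, blocking_session, event, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;

-- The v$lock view mentioned in the comments; block = 1 marks the holder:
SELECT * FROM v$lock WHERE block = 1;
```

If a row shows up here after the application has been closed, the blocking session's connection was never released, which points to an uncommitted transaction holding its row locks.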

3 Answers


Try without an update-query:

@Repository
@Transactional(propagation = REQUIRES_NEW)
public class YirInfoRepository {

    @Autowired
    EntityManager entityManager;

    @Transactional(propagation = REQUIRES_NEW)
    public void setSent(String id) {
        // guessing your class name and method
        final YirInfo yirInfo = entityManager.find(YirInfo.class, id);
        yirInfo.setSent();
    }
}

This might not be as fast as a single update query, but it's possible to get it reasonably fast unless the amount of data is huge. It is also the preferred way of using Hibernate/JPA: instead of thinking in terms of single values and SQL queries, you work with entities/objects and (sometimes) HQL/JPQL queries.
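For reference, this approach assumes an entity mapped to the table, roughly like the sketch below (the class name, id type and `sent` field are guesses, not taken from the question):

```java
import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical mapping; names are guesses based on the repository code.
@Entity
public class YirInfo {

    @Id
    private String id;

    private boolean sent;

    public void setSent() {
        // Mutating a managed entity inside an open transaction is enough:
        // Hibernate's dirty checking issues the UPDATE at commit time.
        this.sent = true;
    }
}
```

No explicit save or flush call is needed; the commit at the end of the transactional method writes the change.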

Tobb

You are using the @Transactional annotation, which means you are using Spring-managed transactions. But in your UPDATE 2 you are managing the transaction yourself instead of letting Spring do it (I guess it's another project or class not managed by Spring).

In any case, what I would do is update your records in a single Spring transaction, and I would put @Transactional in the service layer rather than the DAO layer. Something like this:

Service layer:

@Service
public class YirInfoService {
    @Autowired
    YirInfoRepository dao;

    @Transactional(propagation = REQUIRES_NEW)
    public void setSent(List<String> ids) {
        dao.setSents(ids);
    }
}

DAO layer:

@Repository
public class YirInfoRepository {

    @Autowired
    EntityManager entityManager;

    // Here you can update by using an IN statement or by looping.
    // Let's suppose a bulk operation:
    public void setSents(List<String> ids) {
        String query = "UPDATE_QUERY";
        for (int i = 0; i < ids.size(); i++) {
            String id = ids.get(i);
            Query nativeQuery = entityManager.createNativeQuery(String.format(query, id));
            nativeQuery.executeUpdate();
            if (i % 20 == 0) {
                entityManager.flush();
                entityManager.clear();
            }
        }
    }
}
Angelo Immediata
  • Hello, are the calls to `entityManager.flush()` and `entityManager.clear()` necessary? Will the lock happen if we do not do that? – Bằng Feb 26 '20 at 13:14
  • `i % 20 == 0` means that once 20 records have been processed and are ready for update, the changes are "flushed" to the DB; `clear` then releases those objects from Java memory. In that sense flush and clear are necessary because you are essentially batching 20 records at a time, lest you hold thousands of objects in memory ready to be updated and your memory chokes before the batched update command is sent. It doesn't have anything to do with locks per se, as it hasn't yet reached the database level. – veritas Jun 09 '21 at 03:27

The first thing you have to understand is that in your first example you are using a native query to update rows in the DB. In this case you are completely bypassing Hibernate.

In your second example you have the same thing: you are updating via an update query. You don't need to flush the entity manager there, since flushing only transfers pending changes made to entity objects managed by that entity manager.

Also, I'm not sure how your example works, as you are autowiring the entity manager instead of injecting it with the @PersistenceContext annotation. Make sure you use that one properly, because you might have misconfigured the application. There is also no need to manually create the entity manager when using Spring, as the second example does: just use @PersistenceContext to get an entity manager in your app.

You are also mixing up transaction management. In the first example, it's enough to put the @Transactional annotation on either the method or the class.

In the other example you are doing manual transaction management, which makes no sense in this case. If you are using Spring, you can simply rely on declarative transaction management.
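As a sketch of what that could look like for the update in UPDATE 2 (assuming the bean is Spring-managed; the class and method names are made up, and the query stays redacted as in the question):

```java
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class YirInfoUpdater {

    // Container-managed EntityManager instead of emf.createEntityManager()
    @PersistenceContext
    private EntityManager entityManager;

    // Spring begins the transaction, commits on success and rolls back on
    // an exception; no EntityTransaction or close() handling is needed.
    @Transactional
    public void markSent(String insuredId) {
        entityManager.createNativeQuery("QUERY") // redacted as in the question
                     .executeUpdate();
    }
}
```

This also fixes the leak the comments pointed out: if the update throws, Spring rolls the transaction back instead of leaving it open holding its locks.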

The first thing I'd do here is integrate datasource-proxy into your connection management and log how your statements are executed. With this information you can make sure the query is actually sent to the DB, and see whether the DB is executing it very slowly or you have a network issue between your app and the DB.
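A minimal setup, assuming the net.ttddyy datasource-proxy library is on the classpath and `actualDataSource` is your existing DataSource:

```java
import javax.sql.DataSource;

import net.ttddyy.dsproxy.listener.logging.SLF4JLogLevel;
import net.ttddyy.dsproxy.support.ProxyDataSourceBuilder;

// Wrap the real DataSource; every statement and its execution time is
// logged through SLF4J, so you can see whether the UPDATE reaches the DB.
DataSource dataSource = ProxyDataSourceBuilder
        .create(actualDataSource)
        .name("app-datasource")
        .logQueryBySlf4j(SLF4JLogLevel.INFO)
        .build();
```

Hand this proxy to your EntityManagerFactory (or Spring datasource bean) in place of the original, and the query log will show where execution stops.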

If you find that the query is sent properly to the DB, you'll want to analyze the query itself, because most probably it's just executing very slowly and needs some optimization. For this you can use the explain plan feature to find out what your execution plan looks like, and then make it faster.
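In Oracle that can be done directly from SQL Developer; the UPDATE below is only a placeholder for the redacted query:

```sql
-- Capture the plan for the suspect statement, then display it.
EXPLAIN PLAN FOR
UPDATE your_table SET sent = 1 WHERE id = '...';  -- placeholder statement

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

A full-table scan on a large table here would explain an update that appears to hang.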

Arnold Galovics