
I'm seeing something odd when I run a query in an application deployed on Oracle Application Server 10.1.3, running against an Oracle 10g database.

When I run a statement against the database directly (e.g. from a standalone app that calls a DAO implemented with Hibernate), I see the following:

select 
    documentco0_.CONTENT_ID as CONTENT1_63_0_, 
    documentco0_.TSTAMP as TSTAMP63_0_, 
    documentco0_.CONTENT as CONTENT63_0_ 
from 
    MySchema.MyTable documentco0_ 
where 
    documentco0_.CONTENT_ID=? 

[main] TRACE org.hibernate.type.LongType - binding '1768334' to parameter: 1 
[main] TRACE org.hibernate.type.TimestampType - returning '2013-08-05 17:31:32' as     column: TSTAMP63_0_ 
[main] TRACE org.hibernate.type.BinaryType - returning '7f587f608090cac6c9c68081818180b380b380807f5b80c3807f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f7f40808b8880918091818191807f44809f8080818581818181818180808080808080808182838485868788898a8b7f44803590808281838382848385858484808081fd8182838084918592a1b1c18693d1e187a2f194b201112188a3c2314195d25170a4b3e2f202898a969798999aa5a6a7a8a9aab4b5b6b7b8b9bac3c4c5c6c7c8c9cad3d4d5d6d7d8d9dae3e4e5e6e7e8e9eaf3f4f5f6f7f8f9fa030405060708090a12131415161718191a22232425262728292a32333435363738393a42434445464748494a52535455565758595a6162636465666768696a7172737475767778797a7f5a808881818080bf80fef947bf520c730eff25ada7bd007c7f807a460efd87677f805625220aab7f59' as column: CONTENT63_0_ 

When the same DAO operation is run within the application server, however, it returns the following:

select 
    documentco0_.CONTENT_ID as CONTENT1_63_0_, 
    documentco0_.TSTAMP as TSTAMP63_0_, 
    documentco0_.CONTENT as CONTENT63_0_ 
from 
    MySchema.MyTable documentco0_ 
where 
    documentco0_.CONTENT_ID=? 

2013-08-06 12:49:46,484 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:133 nullSafeSet()) - binding '1768334' to parameter: 1 
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '2013-08-05 17:31:32' as column: TSTAMP63_0_ 
2013-08-06 12:49:46,500 TRACE [AJPRequestHandler-RMICallHandler-12] myuser:4 (NullableType.java:172 nullSafeGet()) - returning '80d48081818c808080818080808180808099ff0c809a5c9d809a5c9c80828082808080817f587f608090cac6c9c68081808080804818f7ef8081808080808080808080808080808080808080809a5c9c83408c508081' as column: CONTENT63_0_ 

You can see that the identifier and timestamp are the same in both cases, but the content blob is different: 360 bytes in the first case and 86 bytes in the second case.

The stand-alone application uses a BasicDataSource, while the application on the server uses a JNDI data source. I have verified that the BasicDataSource uses the same JDBC URL as the JNDI data source, and both data sources use the same credentials.
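For reference, here is roughly how the two data sources are set up (a sketch only; the URL, credentials and JNDI name below are placeholders, not the real values):

import javax.naming.InitialContext;
import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceSetup {

    // Stand-alone app: plain commons-dbcp BasicDataSource (placeholder values).
    static DataSource standaloneDataSource() {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName("oracle.jdbc.driver.OracleDriver");
        ds.setUrl("jdbc:oracle:thin:@dbhost:1521:ORCL"); // same URL as the JNDI data source
        ds.setUsername("myuser");
        ds.setPassword("secret");
        return ds;
    }

    // On OAS: the managed data source is obtained via a JNDI lookup (hypothetical JNDI name).
    static DataSource containerDataSource() throws Exception {
        InitialContext ctx = new InitialContext();
        return (DataSource) ctx.lookup("jdbc/MyManagedDS");
    }
}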

The trace output from the application server is formatted differently: it shows the logging location (NullableType.java, nullSafeGet()) rather than the org.hibernate.type logger name. I'm not sure whether that is relevant.

Is there something obvious that I am overlooking here? I can't see why I am getting different results when running the same query on the same database.

edit: on OAS I have configured a JDBC connection pool that uses the connection factory class oracle.jdbc.pool.OracleDataSource, and the JDBC data source is a managed data source pointing to that connection pool.

I'm wondering whether the problem is a difference in Oracle JDBC drivers. The BasicDataSource for the stand-alone app uses the JDBC driver oracle.jdbc.driver.OracleDriver and the Hibernate dialect org.hibernate.dialect.Oracle10gDialect. I can't find anywhere in the OAS administration console that shows the equivalent values.
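For completeness, these are the standard Hibernate properties those two values correspond to in a stand-alone configuration (a sketch only; in my app the driver is actually set on the BasicDataSource, and the URL and credentials below are placeholders):

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class StandaloneHibernateSetup {

    // Sketch of the driver/dialect settings used by the stand-alone app.
    static SessionFactory buildSessionFactory() {
        Configuration cfg = new Configuration()
            .setProperty("hibernate.dialect", "org.hibernate.dialect.Oracle10gDialect")
            .setProperty("hibernate.connection.driver_class", "oracle.jdbc.driver.OracleDriver")
            .setProperty("hibernate.connection.url", "jdbc:oracle:thin:@dbhost:1521:ORCL") // placeholder
            .setProperty("hibernate.connection.username", "myuser")
            .setProperty("hibernate.connection.password", "secret");
        return cfg.buildSessionFactory();
    }
}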

John Q Citizen
  • From what I understand, you are relying on the log tracing to see the difference. But if you process the Blob field and examine its contents, are they different too? Maybe it's just a difference in their "toString" representations – Cristian Meneses Aug 06 '13 at 03:29
  • 1
  • Thanks for the reply. In the source code, when the entity is loaded by Hibernate, the blob is a byte array of 86 bytes, exactly as per the logging. I verified the values of this byte array using a debugger as well, so it's not a logging issue. The same query yields different results. – John Q Citizen Aug 06 '13 at 03:55
  • I've made some findings... I'll post it as an answer – Cristian Meneses Aug 06 '13 at 04:07

1 Answer


Please have a look at this article

It looks like, for some reason, OAS returns only 86 bytes of the BLOB value unless you specify a LobHandler in your configuration.
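Something along these lines (a rough sketch of a typical Spring/Hibernate 3 setup; the bean wiring and the NativeJdbcExtractor are my assumptions, not taken from your configuration):

import javax.sql.DataSource;
import org.springframework.jdbc.support.lob.OracleLobHandler;
import org.springframework.jdbc.support.nativejdbc.SimpleNativeJdbcExtractor;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

public class SessionFactoryConfig {

    // Wire a LobHandler into the sessionFactory (usually done in Spring XML).
    static LocalSessionFactoryBean sessionFactoryBean(DataSource dataSource) {
        OracleLobHandler lobHandler = new OracleLobHandler();
        // Assumption: behind a pooled/managed connection, OracleLobHandler often also
        // needs a NativeJdbcExtractor to reach the underlying Oracle connection.
        lobHandler.setNativeJdbcExtractor(new SimpleNativeJdbcExtractor());

        LocalSessionFactoryBean sessionFactory = new LocalSessionFactoryBean();
        sessionFactory.setDataSource(dataSource);
        sessionFactory.setLobHandler(lobHandler);
        // In the Hibernate mapping, map the blob property with
        // type="org.springframework.orm.hibernate3.support.BlobByteArrayType"
        // instead of type="binary".
        return sessionFactory;
    }
}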

You can also find more info in this CodeRanch thread describing the same issue.

Hope this helps!

Cristian Meneses
  • Sure did help. For those wanting to know the specifics, I changed the type of the 'content' property in the hibernate mapping from 'binary' to 'org.springframework.orm.hibernate3.support.BlobByteArrayType', and then I set the 'lobHandler' property of the 'sessionfactory' bean to reference an 'org.springframework.jdbc.support.lob.OracleLobHandler' bean. I can now see that my blob of 360 bytes is loaded when the query is run standalone and within the OAS application. – John Q Citizen Aug 06 '13 at 05:20
  • Glad it helped. Please mark the answer as correct, so the question gets closed. Thank you! – Cristian Meneses Aug 06 '13 at 05:22
  • Of course, now my unit tests fail since the LobHandler is not properly initialized for the local database :-/ But that's another question ... – John Q Citizen Aug 06 '13 at 07:30
  • ... and another issue: when I try to write entities with blobs I get a Spring error "java.lang.IllegalStateException: Active Spring transaction synchronization or active JTA transaction with specified [javax.transaction.TransactionManager] required". I'm not sure why transactions are necessary to write a blob. – John Q Citizen Aug 11 '13 at 22:48
  • Hi John. Indeed transactions are needed to write blobs (at least in Oracle): in the first step you create an empty blob and store it, in the second step you open a stream to update the data in the Blob, and finally you commit the transaction. I've done this using plain JDBC, and a transaction is needed for the whole process (roughly as in the sketch below). – Cristian Meneses Aug 12 '13 at 15:02
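A rough sketch of that write sequence in plain JDBC (the table and column names are borrowed from the query in the question; the exact statements are an assumption):

import java.io.OutputStream;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class BlobWriteSketch {

    // Insert a row with an empty BLOB, then stream the bytes into it, all in one transaction.
    static void writeContent(Connection conn, long contentId, byte[] content) throws Exception {
        conn.setAutoCommit(false);
        try {
            // Step 1: create the row with an empty BLOB locator.
            PreparedStatement insert = conn.prepareStatement(
                "insert into MySchema.MyTable (CONTENT_ID, TSTAMP, CONTENT) " +
                "values (?, SYSDATE, empty_blob())");
            insert.setLong(1, contentId);
            insert.executeUpdate();
            insert.close();

            // Step 2: lock the row and stream the actual bytes into the BLOB.
            PreparedStatement select = conn.prepareStatement(
                "select CONTENT from MySchema.MyTable where CONTENT_ID = ? for update");
            select.setLong(1, contentId);
            ResultSet rs = select.executeQuery();
            if (rs.next()) {
                Blob blob = rs.getBlob(1);
                OutputStream out = blob.setBinaryStream(1); // JDBC 3.0, position is 1-based
                out.write(content);
                out.close();
            }
            rs.close();
            select.close();

            // Step 3: commit; the BLOB write only becomes visible once the transaction commits.
            conn.commit();
        } catch (Exception e) {
            conn.rollback();
            throw e;
        }
    }
}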