
I'm comparing SQL Server and Oracle read performance using C#. I'm using ADO.NET and ODP.NET and obtaining similar results on both, except when I read BLOB columns across machines (one machine querying the database hosted on another).

Important notes:

  • The results are comparable for SQL Server and Oracle when testing locally.
  • Latency between the machines is < 1 ms.

In my test, the data is exactly the same on both databases, and I'm reading 2881 rows with some binary data in each row. The results obtained are:

  • SQL Server locally: 1.6s
  • Oracle locally: 1.8s (good result)
  • SQL Server distributed: 1.8s
  • Oracle distributed: 10.0s (problem here!)

Configurations I'm already using on Oracle to improve performance (a sketch putting them together follows this list):

  1. OracleCommand.InitialLOBFetchSize = -1; (tried other positive values, but with worse results)
  2. OracleCommand.InitialLONGFetchSize = -1; (tried other positive values, but with worse results)
  3. OracleReader.FetchSize = OracleReader.RowSize * 10000; (set before the first Read())
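
To make the setup concrete, here is a minimal sketch of the read loop I'm timing, assuming the unmanaged Oracle.DataAccess provider; the connection string, table, and column names are placeholders, not my real schema:

```csharp
using System;
using System.Diagnostics;
using Oracle.DataAccess.Client;

class BlobReadBenchmark
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        using (var conn = new OracleConnection("User Id=...;Password=...;Data Source=..."))
        {
            conn.Open();
            using (var cmd = new OracleCommand("SELECT ID, BLOB_DATA FROM MY_TABLE", conn))
            {
                // 1. and 2.: fetch the entire LOB/LONG inline with the row.
                cmd.InitialLOBFetchSize = -1;
                cmd.InitialLONGFetchSize = -1;

                using (var reader = cmd.ExecuteReader())
                {
                    // 3.: enlarge the fetch buffer before the first Read().
                    reader.FetchSize = reader.RowSize * 10000;

                    while (reader.Read())
                    {
                        // With InitialLOBFetchSize = -1 the BLOB comes back as a byte[].
                        var bytes = (byte[])reader["BLOB_DATA"];
                        // ... consume bytes ...
                    }
                }
            }
        }
        sw.Stop();
        Console.WriteLine($"Elapsed: {sw.Elapsed}");
    }
}
```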

So the question is:

  • Is there any other configuration I'm missing that would give better performance when reading BLOB columns?

Other notes:

  • I assumed InitialLOBFetchSize = -1 would help in this scenario given the low latency, but it didn't improve things much (with InitialLOBFetchSize = 0 the time was ~12.8s; see the sketch after this list).
  • Already tried some network tuning configurations on the Oracle instance, but without significant results.
  • Using Oracle.DataAccess.dll x64 v4.121.2.0
  • Tried both the managed and unmanaged Oracle drivers, with better results from the unmanaged one (the one I was already using from the beginning).
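
For completeness, this is roughly how the InitialLOBFetchSize = 0 run differed. My understanding (an assumption, not confirmed) is that with 0 the server returns only a LOB locator per row and the data is fetched on demand, adding round trips per row, which would explain why the distributed case suffers most. Names are again placeholders:

```csharp
using System;
using Oracle.DataAccess.Client;
using Oracle.DataAccess.Types;

class LobLocatorVariant
{
    static void Main()
    {
        using (var conn = new OracleConnection("User Id=...;Password=...;Data Source=..."))
        {
            conn.Open();
            using (var cmd = new OracleCommand("SELECT ID, BLOB_DATA FROM MY_TABLE", conn))
            {
                cmd.InitialLOBFetchSize = 0; // only a LOB locator is returned with each row
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                    {
                        // Reading through the locator fetches the LOB on demand,
                        // which appears to cost extra round trips per row
                        // (~12.8s in my test vs ~10.0s with -1).
                        using (OracleBlob blob = reader.GetOracleBlob(1))
                        {
                            byte[] bytes = blob.Value;
                            // ... consume bytes ...
                        }
                    }
                }
            }
        }
    }
}
```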

