
I'm having a great time with Dapper and have had no real issues up to this one, and it's driving me mad.

Given this call to an Oracle method inside a package:

begin
  package.method(in_table => :in_table,
                 in_clob_type => :in_clob_type,
                 out_error_table => :out_error_table);
end;
  • From within PL/SQL developer or any other Oracle tool it takes approx. 2 seconds to run.
  • From within a standard C# console test app it takes approx. 2-3 seconds.
  • From within an IIS hosted WebAPI app it takes approx. 10-12 seconds.

Same SQL, same parameters, same database, and same user. Every other piece of SQL within the application works perfectly.

var errorTable = string.Empty;
var parameters = new DynamicParameters();

parameters.Add("in_table", "table-name");
parameters.Add("in_clob_type", 0);
parameters.Add("out_error_table", dbType: DbType.String, size: 32, direction: ParameterDirection.Output);

db.Query("package.nethod", parameters, commandType: CommandType.StoredProcedure);

// Query or Execute makes no difference
// db.Execute"package.nethod", parameters, commandType: CommandType.StoredProcedure);

errorTable = parameters.Get<string>("out_error_table");

Anyone have any ideas on the best way to debug this?

UPDATE 1:

Both the WebAPI and the console code produce the same 1708 distinct SQL statements for the insert and update processes within the package function. The WebAPI version just takes longer between SQL calls, but I can't see a pattern as yet.
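
For anyone wondering where those numbers come from: one way to capture this kind of per-statement detail is a session trace, along the lines suggested in the comments. This is only a rough sketch; the SID, SERIAL# and username values are placeholders you would look up in v$session for the connection in question:

-- find the session opened by the application (username is a placeholder)
select sid, serial# from v$session where username = 'APP_USER';

-- enable a trace with wait events and bind values for that session
begin
  dbms_monitor.session_trace_enable(session_id => 123,
                                    serial_num => 4567,
                                    waits      => true,
                                    binds      => true);
end;
/

-- run the slow call, then switch the trace off again
begin
  dbms_monitor.session_trace_disable(session_id => 123, serial_num => 4567);
end;
/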

UPDATE 2:

Digging deeper (it's not my code, so it's taking a little longer), I found a call that creates some temp tables into which we load the data required for the process. If I comment this out and just provide an existing table name, it runs in 2-3 seconds.

Something in the creation of the tables seems to be blocking the rest of the process. If I mark all the methods with PRAGMA AUTONOMOUS_TRANSACTION: 10-12 seconds. If I create the tables inside or outside a specific or shared transaction: 10-12 seconds. If I create them without a transaction: 10-12 seconds.
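
For context, the table-creation step boils down to something like the following. The procedure name and columns are made up and this is only the shape of the pattern, not the actual package code; it's shown here with the autonomous-transaction variant, which made no difference to the timing:

create or replace procedure create_work_table(p_table_name in varchar2)
as
  pragma autonomous_transaction;
begin
  -- plain DDL via dynamic SQL; the real package builds several tables like this
  execute immediate 'create table ' || p_table_name ||
                    ' (id number, payload clob)';
  commit;
end create_work_table;
/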

PoweredByPorkers
  • Did you time the execution of just the Dapper code, or the whole WebAPI call, when reporting 10-12 seconds? – Eric J. May 17 '16 at 00:46
  • With a stopwatch object directly prior to and following the actual db.Query or Execute calls. – PoweredByPorkers May 17 '16 at 00:48
  • Were you also using Dapper in the console application? How complex is the query? – Dave Greilach May 17 '16 at 00:50
  • That code example is, bar changing the package names, exactly the code that runs in both the WebAPI and console applications. The stored proc itself is fairly complex and has many things to do internally. – PoweredByPorkers May 17 '16 at 00:52
  • I actually had a very similar issue, but not with packages or Dapper. Running a query from Toad would take around a second, but from the web application it was easily 20. I noticed that when we changed the query slightly it would run fast once in the web application, then go back to running slow. It turned out that the query optimizer was making it run slower. I inserted /*+ OPT_PARAM('_OPTIMIZER_USE_FEEDBACK' 'FALSE') */ in the query and all of a sudden it was consistent from the web app and Toad. Might not be your issue, but maybe give it a try. – Dave Greilach May 17 '16 at 00:58
  • Are you able to set up some traces on your database to confirm that the SQL being generated is identical in both applications? [See here](http://stackoverflow.com/questions/148648/oracle-is-there-a-tool-to-trace-queries-like-profiler-for-sql-server) for some solutions to set that up. – Rob May 17 '16 at 01:04
  • Not sure I can use a hint directly in a package call, David, and the problem is that when I debug with TOAD or PL/SQL Developer it's perfect, so I have nowhere to attempt optimization :-( – PoweredByPorkers May 17 '16 at 01:04
  • Thanks Rob, just running through that now, will get back – PoweredByPorkers May 17 '16 at 01:13
  • Good luck man. Sounds frustrating – Dave Greilach May 17 '16 at 01:14

1 Answer


I couldn't work out why the table creation would be causing any sort of lag in the process; in fact, I don't see any possible way it could, given the very simple nature of the tables, so I went in a different direction.

Instead of creating temp tables in the user's schema, I created them as global temporary tables marked ON COMMIT DELETE ROWS.
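
A simplified example of the replacement, with made-up table and column names; the table is created once as a schema object rather than at run time:

create global temporary table work_load_stage
(
  id      number,
  payload clob
)
on commit delete rows;

Because the rows vanish on commit and each session only ever sees its own data, the package no longer needs to create or drop anything while it runs.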

Now everything is running in 2 seconds or less as expected :-)

Thanks for all your help, and if you have any ideas about the cause of the lag, please feel free to share.

PoweredByPorkers