
UPDATE June 30: This question led to a cleaner benchmark, and mythz found an issue and resolved it: ServiceStack benchmark continued: why does persisting a simple (complex) to JSON slow down SELECTs?


ARE WRITE/READ SPEEDS REASONABLE?

In my trials with OrmLite, I am testing whether to convert all our current data/objects from our own database-persistence implementation and switch over to OrmLite.

However, I did a simple benchmark/speed test, where I compared our current serialization and write to the database, as well as read from the database and deserialization.

What I found was that ServiceStack is much slower than how we currently do it (we currently just serialize the object using FastSerializer and write the byte[] data to a BLOB field, so it's fast to write and read, but of course has obvious drawbacks).
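
For context, the current implementation has roughly the following shape. This is a sketch only; FastSerializer's actual API isn't shown in this post, so the Serialize call and the obj_blob table name below are placeholders:

    using MySql.Data.MySqlClient;

    // Sketch of the "current implementation": serialize the whole object to a
    // byte[] with FastSerializer and write it to a single BLOB column.
    // FastSerializer.Serialize and the obj_blob table are placeholder names.
    public void SaveAsBlob(CoreObject obj, MySqlConnection conn)
    {
        byte[] data = FastSerializer.Serialize(obj); // placeholder for the real call

        using (var cmd = new MySqlCommand(
            "INSERT INTO obj_blob (data) VALUES (@data)", conn))
        {
            cmd.Parameters.AddWithValue("@data", data);
            cmd.ExecuteNonQuery();
        }
    }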

The test I did used the Customer class, which has a bunch of properties (it is used in our products, so it's a class that is used every day in our current versions).

If I create 10 000 such objects and then measure how long it takes to persist them to a MySQL database (serialization + write to db), the results are:


UPDATE

As the "current implementation" is cheating (its just BLOBing a byte[] to database), I implemented a simple RelationDbHandler that persists the 10 000 objects in the normal way, with a simple SQL query. Results are added below.


WRITE 10 000 objects:
Current implementation: 33 seconds
OrmLite (using .Save): 94 seconds
Relational approach: 24.7 seconds

READ 10 000 objects:
Current implementation: 1.5 seconds
OrmLite (using Select<>): 28 seconds
Relational approach: 16 seconds

I am running it locally, on an SSD, with no other load on CPU or disk. I expected our current implementation to be faster, but not that much faster.

I read some benchmark material on the ServiceStack wiki (https://github.com/ServiceStack/ServiceStack/wiki/Real-world-performance), but most of the links are dead. Some claim that reading 25 000 rows takes 245 ms, but I have no idea what a row looks like.

Question 1: Are there any benchmarks I can read more about?
Question 2: The Customer object is specified below. Does mythz think the write/read times above are reasonable?

TEST CASE: This is the Customer object as it looks in the database after OrmLite created the table. I only populate 5 properties, and one is "complex" (so only one field has a JSON serialization representation in the row), but since all fields are written, I don't think that matters much?

[Screenshot: the obj_customer table schema as created by OrmLite]
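
The actual class definitions are only in the compiled DLLs, so purely as an illustration, here is a stripped-down sketch of the kind of POCO involved. Complex-typed properties (like ContactDetails) are serialized by OrmLite and stored in a single text column each, which is what the "complex"/JSON fields above refer to:

    using System.Collections.Generic;
    using ServiceStack.DataAnnotations;

    // Illustrative sketch only; the real Customer class is not shown in the post.
    // Properties of complex types are serialized by OrmLite and stored ("blobbed")
    // in a single column each, which is the behaviour being measured here.
    public class Customer : CoreObject
    {
        [AutoIncrement]
        public long Id { get; set; }

        public string Name { get; set; }                    // simple column
        public ContactDetails ContactDetails { get; set; }  // blobbed column
    }

    public class ContactDetails
    {
        public List<ContactItem> ContactItemList { get; set; }
    }

    public class ContactItem { /* fields omitted in this sketch */ }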

Code to save using OrmLite:

    public void MyTestMethod<T>(T coreObject) where T : CoreObject
    {
        long id = 0;
        // Opens a connection from the factory and inserts a single object,
        // asking OrmLite to return the auto-increment id.
        using (var _db = _dbFactory.Open())
        {
            id = _db.Insert<T>(coreObject, selectIdentity: true);
        }
    }

Code to read all from table:

    internal List<T> FetchAll<T>()
    {
        using (var _db = _dbFactory.Open())
        {
            // SELECT of all rows mapped to T; complex-typed fields are
            // deserialized from their blobbed columns as part of this call.
            List<T> list = _db.Select<T>();
            return list;
        }
    }
Ted
  • Just ran the code again; now, OrmLite took 28-36 seconds, same code, same database, same everything. – Ted Jun 26 '18 at 20:49
  • More investigations show that the more complex blobbed JSON objects there are, the slower it is (naturally). But it's quite a lot. Loading 10k objects takes 1500 ms; if I start removing some JSON blobs, it drops significantly for each column/field I clear in the database. Even with very small JSON data, it drops 200-300 ms for each column I clear... – Ted Jun 27 '18 at 10:38

1 Answer


Use Insert() for inserting rows. Save() will check if the record already exists and update it if it does; it also populates auto-increment primary keys, if any were created.
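
In the benchmark's write path the difference is roughly this (a sketch, not the exact benchmark code):

    using (var db = _dbFactory.Open())
    {
        // Insert(): a plain INSERT; selectIdentity: true additionally returns
        // the auto-increment id of the new row.
        var id = db.Insert(customer, selectIdentity: true);

        // Save(): checks whether the record already exists, INSERTs or UPDATEs
        // accordingly, and populates auto-increment primary keys on the instance,
        // so it does more work per row than a plain Insert().
        db.Save(customer);
    }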

There are also InsertAsync() APIs available, but Oracle's official MySql.Data NuGet package didn't have a proper async implementation, in which case using https://github.com/mysql-net/MySqlConnector can yield better results: install the ServiceStack.OrmLite.MySqlConnector NuGet package and use its MySqlConnectorDialect.Provider.
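
A rough sketch of wiring that up (the connection string below is a placeholder):

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using ServiceStack.OrmLite;

    // Sketch: register OrmLite with the MySqlConnector dialect provider (from the
    // ServiceStack.OrmLite.MySqlConnector package) and insert using the async API.
    async Task InsertCustomersAsync(IEnumerable<Customer> customers)
    {
        var dbFactory = new OrmLiteConnectionFactory(
            "Server=localhost;Database=mydb;Uid=user;Pwd=pass;", // placeholder
            MySqlConnectorDialect.Provider);

        using (var db = dbFactory.Open())
        {
            foreach (var customer in customers)
            {
                await db.InsertAsync(customer, selectIdentity: true);
            }
        }
    }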

You'll also get better performance using .NET Core which will use the latest 8.x MySql.Data NuGet package.

Note: The results in https://github.com/tedekeroth/OrmLiteVsFastSerializer are not comparable; the benchmark is essentially comparing using MySql as a NoSQL blob storage vs a quasi-relational model in OrmLite with multiple complex-type blobbed fields.

In my tests I've also noticed several serialization exceptions being swallowed, which will negatively impact performance. You can have exceptions bubbled up by configuring this on startup:

OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;
mythz
  • Thanks for the quick answer (as always)! I tried using Insert instead, almost same results, it took 83.5 seconds... Is this in line with what you'd expect, in your opinion? – Ted Jun 26 '18 at 07:19
  • @Ted I’d need to see the benchmarks to be able to comment and determine if the benchmarks are equivalent. If you upload the benchmark source code to a repo on GitHub I’ll be able to analyse and explain the differences. – mythz Jun 26 '18 at 07:24
  • Alright, I will try to do that shortly, I'll comment here when I have done so. – Ted Jun 26 '18 at 07:27
  • Here it is. In the bin/debug, there are some necessary DLLs that you need to reference. There is an obj_customer.sql file to create the obj_customer table. I am running mySql 5.6. https://github.com/tedekeroth/OrmLiteVsFastSerializer – Ted Jun 26 '18 at 07:48
  • Debug? Benchmarks should never use debug builds, add Release builds and I’ll look into it tomorrow as it’s 4am atm – mythz Jun 26 '18 at 07:50
  • Alright, updated on GIT with release builds. I tested it again, same times (84 seconds). Thanks for looking into this, sleep well =) – Ted Jun 26 '18 at 08:12
  • Oh, I updated the post, just to clarify that the way we do it now is actually a misuse of a relational database; we serialize the object using FastSerializer, and persist/read the byte[] from a blob field; so, it's one field that has a binary representation of the object, that we write and read; thus, it's very fast. – Ted Jun 26 '18 at 08:27
  • .. and added code to persist it using a simple, clean INSERT query, which is faster than all of them, but, we ignore the complex values that are serialized into JSON with ServiceStack and BLOBed in the "current"... – Ted Jun 26 '18 at 08:47
  • @Ted This is far from an equivalent benchmark, tables aren't the same structure, with more columns in OrmLite table, multiple blobbed complex types in OrmLite, more unused indexes on OrmLite tables slowing down inserts. There's also serialization exceptions being swallowed in some of the Complex Types. Set `OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;` on Startup to have Exceptions bubbled. Source code for the classes aren't included so I can't tell what the cause of the serialization exception is. You're also comparing Ormlite access via ADO.NET vs MySql concrete types directly. – mythz Jun 26 '18 at 15:50
  • @Ted Anyway these results are far from comparable, you're essentially comparing MySql as a NoSQL blob storage which basically tests serialization performance of a single field vs a quasi RDBMS table with multiple blobbed complex types. – mythz Jun 26 '18 at 15:53
  • Well, I described that above, as you yourself discovered, that it's not exactly comparable. But even if I skip the "alfa" variant ("NoSQL") and use the MySql RDBMS approach ("Relational"), but without complex types, it's 27 seconds. Even if we were to double that time to account for complex types etc., we are still at around 50 seconds compared to 84 seconds. But that aside - the structure of the Customer class, 10 000 such objects, 84 seconds. Does that sound about right to you, being an OrmLite expert? – Ted Jun 26 '18 at 18:58
  • So, in the "relational" option, I use the same tables that OrmLite created, but inserted via a "normal MySql query", using reflection but skipping complex values. Even if it's not 100% correct, it's still way, way off the 84 seconds, is what I was trying to say. =) – Ted Jun 26 '18 at 19:10
  • Also in 5.0.0.0 of OrmLite, the JsConfig.ThrowOnError does not work ("does not contain a definition for 'ThrowOnError'"). 'ThrowOnDeserializationError' exists though. – Ted Jun 26 '18 at 19:21
  • I ran it again with {OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnDeserializationError = true;} but I got no errors bubbled anywhere, and now the execution time for INSERTs took around 30 seconds instead. I have no idea why... – Ted Jun 26 '18 at 19:26
  • @Ted The only results that are comparable are when saving the same model in the same table, otherwise you're benchmarking different workloads. The Customer table isn't normal; remove the complex type fields if you want to compare them. I couldn't, because the models are in the compiled .dll and not in source code form which I could modify, which also prevented me from diagnosing the serialization error. If you upgrade to v5.1.1 the `JsConfig.ThrowOnError` will be available. – mythz Jun 26 '18 at 19:37
  • Alright, I can try that, and I agree they are not completely comparable, but it is still an indication. However, when I run the exact same thing now, it's down to about 30 seconds instead of 84, and I don't know why. – Ted Jun 26 '18 at 19:40
  • Aha, I use NuGet in VS =) – Ted Jun 27 '18 at 05:27
  • I have found "the issue" for being super-slow (when reading). When I add a ContactItem to the ContactItemList in the Customers ContactDetails, it becomes super slow - read takes 40 seconds for 20 000 rows. `c.ContactDetails.ContactItemList.Add(new AlfaOnline(RandomString(10), RandomString(5), RandomString(6)));` If I dont add that ContactItem (`AlfaOnline`), then the read is quick, 900 ms. – Ted Jun 27 '18 at 10:25
  • Hello again, I upgraded to 5.1.1 and ran it again, with the JsConfig.ThrowOnErrors set to true, but I get no exceptions regarding serialization in log or elsewhere? – Ted Jun 27 '18 at 14:09
  • @Ted I was getting them with the old version, if you're not getting them anymore maybe something's changed. The project has broken configuration and is tedious for me to fix with every new release so I'm not trying anymore. – mythz Jun 27 '18 at 14:24
  • Alright, I haven't changed anything and I don't know what you mean by broken configs, but thanks for trying. My conclusion is still that having blobbed complex objects in those JSON formats slows SELECTs down enormously =( – Ted Jun 28 '18 at 05:58