
I have a program that uses the old LINQ to SQL to connect an ASP.NET application to a SQL Server DB. The ASP.NET application and the SQL Server instance are on the same machine, and both environments are up to date (IIS 10, .NET Framework 4.8 and SQL Server 2019).

In the software I have to handle a virtual cart with the customer order. The cart has many fields; one of them is an nvarchar and contains the "cart document", a string that is typically a few KB but sometimes may reach a few MB (never more than 10 MB).

When I update a document string in the range of 2-3 MB, and then update the single row that contains it, the update operation is really, really slow (2-2.5 s). Here is the update code:

    protected void Upsert(CartDto cart, bool isValidationUpsert = false )
    {
        lock (_sync)
        {
            if ((cart?.Id ?? 0) <= 0)
                throw new ExtendedArgumentException("cartId");

            using (var dbContext = ServiceLocator.ConnectionProvider.Instace<CartDataContext>())
            {
                var repository = new CartRepository(dbContext);
                
                var existingCart = repository.Read(crt => crt.ID == cart.Id).FirstOrDefault();
                if (existingCart == null)
                {
                    existingCart = new tbl_set_Cart();
                    existingCart.Feed(cart);

                    repository.Create(existingCart);
                }
                else
                {
                    existingCart.Feed(cart);
                    repository.Update(existingCart);
                }

                dbContext.SubmitChanges(); //<<--- This specific operation takes 2-2.5 s; the previous instructions take a negligible time
            }
        }
    }

I have no idea why, nor how to improve performance in this scenario.
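
A way to check how much of the time is LINQ to SQL overhead, as opposed to SQL Server itself, is to time the same parameterized UPDATE issued through a plain SqlCommand. Below is a minimal sketch of that idea; the connection string, cart id, item count and payload are placeholders, not my real values:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Diagnostics;

    class RawUpdateBenchmark
    {
        static void Main()
        {
            // Placeholder connection string: adjust to the real environment.
            const string connectionString = @"Server=.;Database=CartDb;Integrated Security=true";
            int cartId = 1;                                       // placeholder cart id
            string document = new string('x', 3 * 1024 * 1024);   // dummy payload standing in for the real multi-MB document

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "UPDATE [dbo].[tbl_set_Cart] SET [ITEMS_COUNT] = @p1, [ITEMS] = @p2 WHERE [ID] = @p0",
                connection))
            {
                command.Parameters.Add("@p0", SqlDbType.Int).Value = cartId;
                command.Parameters.Add("@p1", SqlDbType.Int).Value = 100;               // placeholder item count
                command.Parameters.Add("@p2", SqlDbType.NVarChar, -1).Value = document; // -1 = nvarchar(max)

                connection.Open();
                var stopwatch = Stopwatch.StartNew();
                command.ExecuteNonQuery();
                stopwatch.Stop();

                Console.WriteLine("Raw UPDATE took " + stopwatch.ElapsedMilliseconds + " ms");
            }
        }
    }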

--EDITED: as suggested, I have profiled the operation on the DB and experienced the same delay (~2.5 s) even if I run the SQL code directly on SQL Server (using SSMS to connect and execute the code).

Here is the SQL code and the performance statistics:

    DECLARE @p0 AS INT = [cart_id];
    DECLARE @p1 AS INT = [entry_count];
    DECLARE @p2 AS NVARCHAR(MAX) = '..document..';

    UPDATE [dbo].[tbl_set_Cart]
    SET [ITEMS_COUNT] = @p1, [ITEMS] = @p2
    WHERE [ID] = @p0

[screenshot: SSMS execution statistics for the UPDATE, ~2.5 s elapsed]

Here is my table schema; as you can see, it is very simple:

    /****** Object:  Table [dbo].[tbl_set_Cart]    Script Date: 02/12/2021 15:44:07 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO
    CREATE TABLE [dbo].[tbl_set_Cart](
        [ID] [int] NOT NULL,
        [AS400_CUSTOMER_COD] [nvarchar](50) NOT NULL,
        [AS400_LISTIN] [int] NOT NULL,
        [VALUE] [nvarchar](max) NOT NULL,
        [DELIVERY_COSTS] [nvarchar](max) NOT NULL,
        [ITEMS_COUNT] [int] NOT NULL,
        [ITEMS] [nvarchar](max) NOT NULL,
        [KIND] [int] NOT NULL,
        [CHECKOUT_INFO] [nvarchar](max) NOT NULL,
        [ISSUES] [nvarchar](max) NOT NULL,
        [LAST_CHECK] [datetime] NOT NULL,
        [USER_ID] [int] NOT NULL,
        [IMPERSONATED_USER_ID] [int] NOT NULL,
        [OVERRIDE_PRICES] [bit] NOT NULL,
        [HAS_ISSUE] [bit] NOT NULL,
        [IS_CONFIRMED] [bit] NOT NULL,
        [IS_COLLECTED] [bit] NOT NULL,
        [_METADATA] [nvarchar](max) NOT NULL,
     CONSTRAINT [PK_tbl_set_Cart] PRIMARY KEY CLUSTERED 
    (
        [ID] ASC
    )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, OPTIMIZE_FOR_SEQUENTIAL_KEY = OFF) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO
  • With nothing other than some code there is no chance anybody can help you here. But what in the world is this "cart document" thing? A cart should be nothing more than two tables: CartHeader and CartItems. I am not surprised it may take a couple of seconds if you are pushing 10 MB strings around through LINQ. Sounds like an architecture issue to me. – Sean Lange Dec 02 '21 at 13:41
  • @SeanLange: how can I push a few MB of data through LINQ to SQL quickly? It surprises me that it's slow to do so. I know it's slow on multi-record inserts (it executes one row at a time), but why would transferring a few MB be slow on a decent server with an SSD, where no network is involved? – Skary Dec 02 '21 at 13:45
  • Benchmark it against e.g. a straight INSERT run in a SqlCommand, just so you know how much is L2S's fault – Caius Jard Dec 02 '21 at 13:48
  • There is a network involved unless your application is running from the same physical box as your sql server and nobody is connecting to this application remotely. I still say a cart that is 10mb sounds like a design issue unless there are tens of thousands of unique items being purchased. – Sean Lange Dec 02 '21 at 13:50
  • @CaiusJard: good idea, I just tested the LINQ to SQL generated code (a simple update with the text) directly in SSMS. The time taken is 2.5 s, as I have experienced in the application. So it seems that this is the time SQL Server takes to update a big string? How is that possible? – Skary Dec 02 '21 at 14:00
  • @SeanLange sorry, maybe I was misleading. I am working on the same machine (browser, web server and DB engine), so no network is involved (if we do not consider the loopback request). And yes, I am the only one connected and working on the development environment, so no one else is doing anything on the system. – Skary Dec 02 '21 at 14:02
  • Ahh gotcha. So you are developing on your local machine. A couple things to check then. First follow the suggestion from Caius Jard about benchmarking. Also, setup some extended events or run a sql trace to see what sql is actually being executed. L2S can generate some truly horrific queries. – Sean Lange Dec 02 '21 at 14:05
  • @CaiusJard I have added benchmark data – Skary Dec 02 '21 at 14:13
  • Have you tried inspecting the database schema for any potential performance killers on the table like triggers, computed columns or foreign key constraints? – Jonas Høgh Dec 02 '21 at 14:32
  • @JonasHøgh of course; anyway, I have updated my question, adding the current table schema. No triggers and nothing in particular; that table contains a few hundred rows. As a cart is completed it is moved into a cold storage table with all the historical data, so this table is very, very simple. – Skary Dec 02 '21 at 14:46
  • How fast is the insertion of a document of similar size? – lptr Dec 02 '21 at 15:11
  • Once upon a time I had to do some analytics on cryptocurrency transactions, and found the easiest way to do it was loading the json output of the bitcoin wallet into SQLS; similar route as you - around 150Mb of JSON in a variable, that was OPENJSON'd and inserted into a table LEFT JOIN another table (only where the join was null, so only insert new records).. I was always fairly amazed that it took only 30-40 seconds for around a half million rows. Perhaps I shouldn't have been! – Caius Jard Dec 02 '21 at 15:14
  • @lptr I think that *is* the speed of inserting a document of similar size? (2 seconds for 8mb) – Caius Jard Dec 02 '21 at 15:16
  • @lptr inserting is a bit faster, but it does not change much – Skary Dec 02 '21 at 15:31
  • You have a lot of columns in the SQL table that are of type nvarchar(max); are you able to set these to some realistic limits for each column and see if that improves query performance? – PeterG Dec 02 '21 at 15:46
  • @PeterG: I cleared those fields and set them to small values, then tested the update again, and experienced no noticeable change – Skary Dec 02 '21 at 16:01
  • @Skary Did you just clear the values from those columns, or did you adjust the data type of the column to store smaller values (e.g. nvarchar(100) instead of nvarchar(max))? There is a performance hit/cost to using nvarchar(max) over nvarchar(100) or even nvarchar(8000) - see this Q&A for more info: https://stackoverflow.com/a/43946000/10070561 – PeterG Dec 02 '21 at 17:55

1 Answer


After investigating the DB profiling more deeply with the help of the DBA Stack Exchange users (here is the discussion: https://dba.stackexchange.com/questions/303400/sql-server-how-to-upload-big-json-into-column-performance-issue/303409#303409), it turned out to be an issue probably related to the disk.

Because some production systems hit the same problem as my development machine, I asked how to improve performance and received the great tip of storing a compressed version of my data. The data is not too big (in my scenario) for in-memory compression/decompression at runtime to be too slow, and that dramatically reduces the time (LZMA is used). From 2.5 s to ~0.3 s, a really good improvement.
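
For reference, here is a minimal sketch of the idea. I used LZMA through an external library; the sketch below uses GZipStream from System.IO.Compression as a stand-in that is available in the BCL, and it stores the compressed bytes as Base64 so they still fit the existing nvarchar(max) column (a varbinary(max) column would avoid the Base64 overhead). The CartDocumentCompressor name is just made up for the example:

    using System;
    using System.IO;
    using System.IO.Compression;
    using System.Text;

    public static class CartDocumentCompressor
    {
        // Compress the cart document and return it as Base64 so it still fits
        // the existing nvarchar(max) column.
        public static string Compress(string document)
        {
            byte[] raw = Encoding.UTF8.GetBytes(document);
            using (var output = new MemoryStream())
            {
                using (var gzip = new GZipStream(output, CompressionLevel.Fastest, leaveOpen: true))
                {
                    gzip.Write(raw, 0, raw.Length);
                }
                return Convert.ToBase64String(output.ToArray());
            }
        }

        // Reverse operation, used when the cart row is read back.
        public static string Decompress(string storedValue)
        {
            byte[] compressed = Convert.FromBase64String(storedValue);
            using (var input = new MemoryStream(compressed))
            using (var gzip = new GZipStream(input, CompressionMode.Decompress))
            using (var reader = new StreamReader(gzip, Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }
    }

The compression happens right before the entity is fed and SubmitChanges is called, and the decompression right after the row is read back, so the rest of the code keeps working with the plain document string.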

Thanks to all for the precious help and tips.
