
If I execute the following SQL

```sql
SELECT a.tran_date, a.tran_type, a.tran_number, a.amount_cust,
    SUM(a.amount_cust) OVER (ORDER BY a.tran_date, a.tran_type, a.tran_row_id),
    a.tran_row_id
FROM ar_trans a
```

I get this error -

```
ORDER BY list of RANGE window frame has total size of 8010 bytes. Largest size supported is 900 bytes.
```
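For context: when `SUM(...) OVER (ORDER BY ...)` carries no explicit frame, SQL Server defaults to `RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW`, and it is the RANGE frame whose ORDER BY key list is capped at 900 bytes. A possible workaround, assuming a plain running total is the goal, is to spell out a ROWS frame instead; since `tran_row_id` makes the ordering effectively unique, ROWS and RANGE produce the same totals here:

```sql
-- Sketch: explicit ROWS frame instead of the default RANGE frame,
-- which is where the 900-byte ORDER BY cap applies
SELECT a.tran_date, a.tran_type, a.tran_number, a.amount_cust,
    SUM(a.amount_cust) OVER (
        ORDER BY a.tran_date, a.tran_type, a.tran_row_id
        ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
    a.tran_row_id
FROM ar_trans a
```

Note that ROWS and RANGE differ when the ORDER BY keys have ties; with `tran_row_id` in the key list that should not arise.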

The command looks innocuous, but in fact ar_trans is a VIEW. Here is the view definition -

```sql
CREATE VIEW ar_trans AS
    SELECT 'ar_inv' AS tran_type, a.row_id AS tran_row_id, a.tran_number AS tran_number,
        a.cust_row_id AS cust_row_id, a.tran_date AS tran_date, a.text AS text,
        a.inv_net_cust + a.inv_tax_cust AS amount_cust, a.inv_net_local + a.inv_tax_local AS amount_local
        FROM ar_tran_inv a WHERE a.posted = '1'
UNION ALL
    SELECT 'ar_crn' AS tran_type, a.row_id AS tran_row_id, a.tran_number AS tran_number,
        a.cust_row_id AS cust_row_id, a.tran_date AS tran_date, a.text AS text,
        0 - (a.crn_net_cust + a.crn_tax_cust) AS amount_cust,
        0 - (a.crn_net_local + a.crn_tax_local) AS amount_local
        FROM ar_tran_crn a WHERE a.posted = '1'
UNION ALL
    SELECT a.tran_type AS tran_type, a.row_id AS tran_row_id, a.tran_number AS tran_number,
        a.cust_row_id AS cust_row_id,
        CASE
            WHEN a.tran_type = 'ar_rec' THEN y.tran_date
            WHEN a.tran_type = 'cb_rec' THEN w.tran_date
            END AS tran_date,
        CASE
            WHEN a.tran_type = 'ar_rec' THEN y.text
            WHEN a.tran_type = 'cb_rec' THEN w.text
            END AS text,
        0 - (ROUND(a.arec_amount /
            CASE
                WHEN a.tran_type = 'ar_rec' THEN y.local_exch_rate
                WHEN a.tran_type = 'cb_rec' THEN w.local_exch_rate
                END
            * a.cust_exch_rate, 2)) AS amount_cust,
        0 - (ROUND(a.arec_amount /
            CASE
                WHEN a.tran_type = 'ar_rec' THEN y.local_exch_rate
                WHEN a.tran_type = 'cb_rec' THEN w.local_exch_rate
                END
            , 2)) AS amount_local
        FROM ar_rec_subtran a
            LEFT JOIN ar_tran_rec_det z ON z.row_id = a.tran_det_row_id
            LEFT JOIN ar_tran_rec y ON y.row_id = z.tran_row_id
            LEFT JOIN cb_tran_rec_det x ON x.row_id = a.tran_det_row_id
            LEFT JOIN cb_tran_rec w ON w.row_id = x.tran_row_id
        WHERE
            CASE
                WHEN a.tran_type = 'ar_rec' THEN y.posted
                WHEN a.tran_type = 'cb_rec' THEN w.posted
                END = '1'
UNION ALL
    SELECT 'ar_disc' AS tran_type, a.row_id AS tran_row_id, a.tran_number AS tran_number,
        a.cust_row_id AS cust_row_id, a.tran_date AS tran_date, a.text AS text,
        0 - (a.disc_net_cust + a.disc_tax_cust) AS amount_cust,
        0 - (a.disc_net_local + a.disc_tax_local) AS amount_local
        FROM ar_tran_disc a WHERE a.posted = '1'
```

If there is no other solution, I think I can eliminate the JOINs in the view definition by splitting the underlying table into two, but I would prefer to avoid that if possible.

Frank Millman
  • [Bad habits to kick : using table aliases like (a, b, c) or (t1, t2, t3)](https://sqlblog.org/2009/10/08/bad-habits-to-kick-using-table-aliases-like-a-b-c-or-t1-t2-t3) – Thom A Feb 16 '19 at 12:51
  • 1
    Assuming the underlying columns don't actually exceed the size (which is impossible to tell just from this, and do check that), try `CAST`ing the columns in the view to appropriate types. Implicit conversion can do strange things. – Jeroen Mostert Feb 16 '19 at 12:56
  • Why use a window function at all? This looks like a simple group by would accomplish the same thing...or is there more to the story? – Clay Feb 16 '19 at 13:36
  • @Jeroen Thanks. That was the problem. SQL Server does not have a data type of 'text with unlimited length', so I have been using NVARCHAR(3999). I managed to reduce it to 550, and now the error message shows a frame size of 1112 instead of 8010, so at least I know what is causing the error. I will have to persevere to fix it completely, but you have put me on the right track. – Frank Millman Feb 16 '19 at 14:11
  • @Clay There is more to it, but I reduced it to the smallest example that would show the error. – Frank Millman Feb 16 '19 at 14:13
  • BTW, I know that there is NVARCHAR(MAX), but you cannot use this in an index field. I will try to work around that restriction, and see if that solves my problem. – Frank Millman Feb 16 '19 at 14:29
  • The combination of needing both an `NVARCHAR(MAX)` *and* indexing it is unusual -- indexing fields that could actually be that large would be a highly inefficient proposition even if the engine allowed it. This suggests there's something not quite right with your data model. Consider indexing (for example) a truncated part of the field, or a hash of the pertinent part of the data. (You can also index a hash of the full field, but if the field is really large, that's where the inefficiency comes in that makes SQL Server unwilling to do it.) – Jeroen Mostert Feb 16 '19 at 14:33
  • Note also that declaring a column as `NVARCHAR(3999)` if it will never contain that much data "just to be sure" is a very bad idea -- the optimizer will estimate memory grants based on column sizes (think `max row size * estimated rows`), so if the length of the actual and potential contents is wildly different this will lead to very inefficient query plans and tempdb spills. (SQL Server 2017 mitigates this somewhat by having adaptive memory grants, but this does not completely eliminate all potential problems.) Size your columns to your actual data! – Jeroen Mostert Feb 16 '19 at 14:36
  • Ok, I seem to have cracked it. Previously I was not distinguishing between index columns and others. Now I can distinguish, so I make index columns NVARCHAR(50) and the others NVARCHAR(MAX). Everything still works, and I can now run the above query successfully. Thanks – Frank Millman Feb 16 '19 at 15:00
  • One further comment. I ran the timing tests shown in https://stackoverflow.com/questions/148398/are-there-any-disadvantages-to-always-using-nvarcharmax. It appears that NVARCHAR(MAX) is noticeably slower than NVARCHAR(4000) so I reverted to the latter. – Frank Millman Feb 17 '19 at 05:19
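Tying the comments together: the frame's ORDER BY key was blown past 900 bytes by an over-wide NVARCHAR column flowing through the view. A minimal sketch of the CAST workaround Jeroen Mostert suggested, assuming `tran_type` is the column that was declared `NVARCHAR(3999)` (the exact column and target size are illustrative):

```sql
-- Sketch: narrow the wide column inside the window's ORDER BY so the
-- frame's key list stays under the 900-byte cap.
-- NVARCHAR(50) is illustrative; size it to the actual data.
SELECT a.tran_date, a.tran_type, a.tran_number, a.amount_cust,
    SUM(a.amount_cust) OVER (
        ORDER BY a.tran_date,
                 CAST(a.tran_type AS NVARCHAR(50)),
                 a.tran_row_id),
    a.tran_row_id
FROM ar_trans a
```

The same CAST can live in the view definition instead, which fixes every query that orders on the column rather than just this one.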

2 Answers


Hopefully this answer will help others. I had this error recently, and it was because I had too many columns in the ORDER BY portion of my SQL query. I cut it down to just one field and the query worked fine.

Reddspark

You can use `PARTITION BY`, like this:

```sql
SELECT SecId, Debit, Credit,
    SUM(Debit - Credit) OVER (PARTITION BY CurrId ORDER BY SecId) AS Balance
FROM ...
```

https://codingsight.com/calculating-running-total-with-over-clause-and-partition-by-clause-in-sql-server/
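Applied to the view in the question, a per-customer running balance might look like the sketch below; `PARTITION BY` restarts the sum for each `cust_row_id`. Note that partitioning alone does not shrink the RANGE frame's ORDER BY key, so the 900-byte limit still has to be addressed separately:

```sql
-- Sketch: running balance restarted per customer
SELECT a.cust_row_id, a.tran_date, a.tran_type, a.tran_number, a.amount_cust,
    SUM(a.amount_cust) OVER (
        PARTITION BY a.cust_row_id
        ORDER BY a.tran_date, a.tran_type, a.tran_row_id) AS balance,
    a.tran_row_id
FROM ar_trans a
```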

Mohmd Nader