You also have the option of using a reversed LIKE (or CONTAINING) instead of IN. After all, IN can get slower on InterBase/Firebird as the number of items grows, while LIKE always does a natural scan of the whole table: much slower when the list is small, but it does not degrade as the list grows.
Do your own profiling. You could even switch between the two strategies based on the parameter string length. You were puzzled by Firebird's 32KB VARCHAR length limit; maybe it matters for your application.
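As a minimal sketch of the two strategies (using the INVENTORY table from your procedure; the literal group values are just examples):

```sql
-- IN-based filter: fast for short lists, degrades as the list grows.
SELECT * FROM INVENTORY WHERE STOCK_GROUP IN ('1', '4', '8');

-- Reversed CONTAINING: always a natural scan of the table, but its
-- cost does not depend on how many items are packed into the string.
SELECT * FROM INVENTORY
WHERE '~1~4~8~' CONTAINING '~' || STOCK_GROUP || '~';
```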
So, for general orientation see my answer at https://stackoverflow.com/a/43997801/976391
Applying "Route #2" to your case would look something like this:
```sql
ALTER PROCEDURE SP_STOCK_ANALYSIS (
  MAIN_GROUP VARCHAR(32760) CHARACTER SET ASCII
)
RETURNS (
  STOCK_CODE VARCHAR(21),
  STOCK_GROUP VARCHAR(6),
  DESCRIPTION VARCHAR(31),
  EXPENSE NUMERIC(15, 4)
)
AS
BEGIN
  FOR
    SELECT
      L.STOCK_CODE, INV.DESCRIPTION, INV.STOCK_GROUP, L.BALANCE
    FROM
      LEDGER L LEFT JOIN INVENTORY INV ON L.STOCK_CODE = INV.STOCK_CODE
    WHERE
      (:MAIN_GROUP CONTAINING '~' || INV.STOCK_GROUP || '~')
      -- INV.STOCK_GROUP IN (:MAIN_GROUP)
      AND (L.LEDGER_ACCOUNT IN ('71212', '71211', '83791'))
    INTO
      STOCK_CODE, DESCRIPTION, STOCK_GROUP, EXPENSE
  DO
    SUSPEND;  -- ..... (the rest of your per-row logic goes here)
END
```

Note that the INTO list must follow the column order of the SELECT, so DESCRIPTION comes before STOCK_GROUP.
Then you call it like this:

```sql
SELECT * FROM SP_STOCK_ANALYSIS ('~1~4~8~11~')
```
You can convert a query returning IDs into such a string using the LIST aggregate function, like:

```sql
SELECT '~' || LIST(ID, '~') || '~' FROM source-table WHERE ........
```
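Both steps can also be glued together in PSQL. A sketch, assuming your group IDs live in a hypothetical GROUPS table with invented ID and ACTIVE columns:

```sql
-- Hypothetical sketch: build the delimited string and feed it to the
-- SP in one go. GROUPS, ID and ACTIVE are invented for illustration.
EXECUTE BLOCK RETURNS (
  STOCK_CODE VARCHAR(21),
  STOCK_GROUP VARCHAR(6),
  DESCRIPTION VARCHAR(31),
  EXPENSE NUMERIC(15, 4)
)
AS
  DECLARE MAIN_GROUP VARCHAR(32760);
BEGIN
  SELECT '~' || LIST(G.ID, '~') || '~'
    FROM GROUPS G
    WHERE G.ACTIVE = 1
    INTO :MAIN_GROUP;

  FOR SELECT STOCK_CODE, STOCK_GROUP, DESCRIPTION, EXPENSE
      FROM SP_STOCK_ANALYSIS(:MAIN_GROUP)
      INTO :STOCK_CODE, :STOCK_GROUP, :DESCRIPTION, :EXPENSE
  DO SUSPEND;
END
```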
But I think that, engineering-wise, it would be better to use a transaction-local GTT (global temporary table) instead of the double conversion, and then do a natural join inside your SP.
```sql
INSERT INTO SP_helping_GTT
  SELECT ID FROM source-table WHERE ........;
```

...then execute the now parameter-less SP, and then just COMMIT to clear the GTT.
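A sketch of that approach; the table name GTT_STOCK_GROUPS is invented, and ON COMMIT DELETE ROWS is what makes the data transaction-local:

```sql
-- Hypothetical GTT: each transaction sees only its own rows, and
-- COMMIT/ROLLBACK cleans them up automatically.
CREATE GLOBAL TEMPORARY TABLE GTT_STOCK_GROUPS (
  STOCK_GROUP VARCHAR(6)
) ON COMMIT DELETE ROWS;

-- Fill it, call the SP, then COMMIT:
INSERT INTO GTT_STOCK_GROUPS (STOCK_GROUP) VALUES ('1');
INSERT INTO GTT_STOCK_GROUPS (STOCK_GROUP) VALUES ('4');

-- Inside the SP the string predicate becomes a plain join:
-- FROM LEDGER L
--   LEFT JOIN INVENTORY INV ON L.STOCK_CODE = INV.STOCK_CODE
--   JOIN GTT_STOCK_GROUPS G ON G.STOCK_GROUP = INV.STOCK_GROUP
```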
The drawbacks would be implicit strict coupling and namespace pollution, of course.
But since you filter on TWO tables at once, L and INV, and you might want to convert both lists into parameters, joining two tables (GTTs) would be natural for a relational database engine, whereas two nested natural scans would scale poorly, at O(n^2).
Additionally, you should really think about whether you need the exotic CHAR datatype instead of VARCHAR. It truly is a minor nuisance, but people somehow get tripped up by it again and again.
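One way CHAR bites in exactly this scenario (a sketch, assuming STOCK_GROUP were declared as CHAR(6) in the table): CHAR values are space-padded to their full declared length, so the concatenated needle never matches the delimited parameter string unless you trim it:

```sql
-- With CHAR(6), the value '4' is stored as '4     ' (space-padded),
-- so '~' || STOCK_GROUP || '~' builds '~4     ~', which the parameter
-- '~1~4~8~11~' never matches. Trimming restores the expected behavior:
WHERE :MAIN_GROUP CONTAINING '~' || TRIM(INV.STOCK_GROUP) || '~'
```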
...and those are only a few of the possible pitfalls.