
I have tried lots of things, but none of them worked. I hope someone can help me with this query.

Let me show my query first, then the issue.

select log.*, client.client_name
from ( select * from sessions
       where `report_error_status` like CONCAT('%', 'consec', '%')
          or `ipaddress` like CONCAT('%', 'consec', '%')
          or `last_updated` like CONCAT('%', 'consec', '%')
       order by `id` desc limit 10 offset 0 ) log
inner join
     ( select * from clients
       where `client_name` like CONCAT('%', 'consec', '%') ) client
on log.client_id = client.id

To keep the query from slowing down as the table grows, I am applying a LIMIT inside the sessions subquery. The query above works perfectly fine without the WHERE clause. My problem is this: when a user searches for anything in the datatable on the front end, a WHERE clause is dynamically attached in the backend (the query above, with WHERE). Now suppose the sessions table does not contain the user's search value "consec", but the clients table does: the final query still returns nothing. Is there any way to apply a conditional WHERE, like the query below?

   ifnull( (select id from sessions
            where `report_error_status` like CONCAT('%', 'consec', '%')
               or `ipaddress` like CONCAT('%', 'consec', '%')
               or `last_updated` like CONCAT('%', 'consec', '%')),
           (select * from sessions order by `id` desc limit 10 offset 0) )

That would resolve my whole problem. Is there any way to achieve it in MySQL? If the sessions table contains 100,000 rows, the join will compare them with the clients table one by one against all 100k records. Suppose the time taken to execute is 1 second; if my sessions table grows to 200,000 rows, the inner join gets slower still. To avoid this I am using a subquery on sessions with a LIMIT.

Note: report_error_status, ipaddress, client_name, etc. are indexed.
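For reference, a minimal sketch of a restructuring that would return rows when the match exists only in clients (using the table and column names from the question; `'%consec%'` stands in for the dynamically concatenated search term): join first, filter across both tables, then limit.

```sql
-- Sketch: filter across both tables before applying the limit,
-- so a match in clients alone still produces rows.
SELECT s.*, c.client_name
FROM sessions s
INNER JOIN clients c ON s.client_id = c.id
WHERE s.report_error_status LIKE '%consec%'
   OR s.ipaddress           LIKE '%consec%'
   OR s.last_updated        LIKE '%consec%'
   OR c.client_name         LIKE '%consec%'
ORDER BY s.id DESC
LIMIT 10 OFFSET 0;
```

The trade-off is that the join can no longer be pre-limited to 10 session rows, which is exactly the performance tension the question describes.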

James Z
bipin
    Not sure what 'consec' stands for, but your best bet to reduce access times is to get rid of the leading '%' in each LIKE-Comparison, and apply indices. – Zsolt Szilagyi Oct 13 '20 at 14:43
  • `LIKE` with a leading wildcard behaves like a regular-expression search: it scans the whole field for the value you're looking for. **This bypasses all MySQL indices**, which is how you would optimize most things. You should think of a new way of storing this data. – HoldOffHunger Oct 13 '20 at 14:43
  • `like CONCAT('%' ,'consec', '%')` -- I know it's not your question, but, what was wrong with `like "%consec%"`? – HoldOffHunger Oct 13 '20 at 14:45
  • @HoldOffHunger thanks for your reply; the reason for using CONCAT is that I have a dynamic query, as well as some security reasons – bipin Oct 13 '20 at 15:02
  • @HoldOffHunger I know LIKE will scan the full table; that's what I want :D – bipin Oct 13 '20 at 15:10
  • You need to do this without using `LIKE %something%`. Maybe store a `TINYINT(1) ConsecYes`, and then `SELECT WHERE ConsecYes = 1`. Would that work? You cannot scan the whole table without it being slow, because that means scanning every bit. – HoldOffHunger Oct 13 '20 at 15:10
  • "...in order to prevent exponential reducing query speed i m applying limit..." -- No, that doesn't help. This only reduces the returned rows, but the engine still needs to read the whole database to find these 10 rows. – The Impaler Oct 13 '20 at 15:51
  • @TheImpaler so how can I achieve the above scenario? – bipin Oct 13 '20 at 16:01
  • Traditional SQL mechanics won't be able to improve the performance of this query. The other [better] option is to implement "full text search". That can dramatically improve performance in your case. – The Impaler Oct 13 '20 at 18:50
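A hedged sketch of the full-text approach suggested in the last comment, assuming InnoDB tables and that the searched columns are string types (`last_updated` is left out, since a date/time column cannot carry a FULLTEXT index). Note that `MATCH ... AGAINST` does word-based matching, which is not identical to a `'%consec%'` substring match:

```sql
-- One-time setup: full-text indexes on the searched columns.
ALTER TABLE sessions ADD FULLTEXT INDEX ft_sessions (report_error_status, ipaddress);
ALTER TABLE clients  ADD FULLTEXT INDEX ft_clients  (client_name);

-- Search both tables via the indexes instead of leading-wildcard LIKE.
SELECT s.*, c.client_name
FROM sessions s
INNER JOIN clients c ON s.client_id = c.id
WHERE MATCH(s.report_error_status, s.ipaddress) AGAINST ('consec')
   OR MATCH(c.client_name) AGAINST ('consec')
ORDER BY s.id DESC
LIMIT 10;
```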

1 Answer


There is no way to optimize a MySQL SELECT statement whose LIKE pattern opens with a wildcard. Your pattern is %consec%, and you could add an index, but to quote the official MySQL documentation...

The index also can be used for LIKE comparisons if the argument to LIKE is a constant string that does not start with a wildcard character. For example, the following SELECT statements use indexes:

SELECT * FROM tbl_name WHERE key_col LIKE 'Patrick%';

SELECT * FROM tbl_name WHERE key_col LIKE 'Pat%_ck%';

Source: Dev.MySQL.com: Comparison of B-Tree and Hash Indexes; B-Tree Index Characteristics

Your query falls outside of this use case, so indices will not help. Here's another answer suggesting the same.

I am going to suggest Database Normalization...

You're selecting fields that are LIKE %consec%. Why? What is this value? Is it a special, internal code that means something for your software and your software alone? After all, look at the names of the fields -- report_error_status, ipaddress, last_updated. Except for maybe the error-status one, there's no reason "consec" would appear in these unless it had some internal significance.

For instance, table.field has a value of "userconsec"; sometimes you want to search for "user", other times for "consec".

In that case, you'd want a new table, tableType, with tableType.tableid pointing to the other table and tableType.Type holding the type value ("user", "consec", etc.), plus an index on both tableid and Type. You can then drop the WHERE ... LIKE ... from your query and instead add JOIN ON tableType.tableid = table.id AND tableType.Type = "consec".
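A sketch of that schema, using the hypothetical names from the paragraph above (`tbl_name` stands for whichever table currently holds the searched text):

```sql
-- Hypothetical lookup table: one row per (record, type) pair.
CREATE TABLE tableType (
    tableid INT NOT NULL,          -- points at tbl_name.id
    Type    VARCHAR(64) NOT NULL,  -- e.g. 'user', 'consec'
    INDEX idx_tableid (tableid),
    INDEX idx_type (Type)
);

-- Replaces WHERE ... LIKE '%consec%' with an indexed equality join.
SELECT t.*
FROM tbl_name t
INNER JOIN tableType tt
        ON tt.tableid = t.id
       AND tt.Type = 'consec';
```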

It will be faster because...

  • It is not looking through all the text of several text fields.
  • It is looking through an ordered list of integers to identify the record you need.
HoldOffHunger
  • I agree with your post and it's really good, but my problem is not resolved. Suppose the session table contains 1 lakh (100,000) rows; then it will scan those 1 lakh records one by one against the client table, and the query speed goes down. "consec" is a dummy client name which exists in the client table, not in the session table (the session table just contains the client table's id). My problem arises when the value is not found in the session table but is present in the client table: my query then always returns null. – bipin Oct 13 '20 at 15:37
  • Hey, bipin, thanks for checking my post! Hrm. For example: table.field has value of "userconsec", sometimes you want to search for "user", other times "consec". In that case, you'd want a new table; "tableType", with tableType.tableid pointing to the other table, an index on tableid, and you can drop from your query `WHERE LIKE`, and add instead `JOIN ON ...tableid = table.id .... AND ...Type = "consec"` etc. Sorry for the lack of details in this limited space, let me know what you think! I will update post if helpful. – HoldOffHunger Oct 13 '20 at 15:42
  • Hey, bipin, just updated my answer with the info in my comment, if it helps, let me know, and feel free to accept it, so it can help others with the same problem. – HoldOffHunger Oct 13 '20 at 16:11