
This is my query:

select
    users.id as user_id,
    users.cat as cat,
    from_currencies.id as from_currency,
    to_currencies.id as to_currency,
    deals_public.change_from as change_from,
    deals_public.change_to as change_to,
    users.site as link,
    users.username as user_name,
    users.email as email,
    users.active as active,
    from_currencies.name as from_name,
    from_currencies.sign as from_sign,
    to_currencies.name as to_name,
    to_currencies.sign as to_sign
 from
     `deals_public`
 inner join `users` on `users`.`id` = `deals_public`.`user_id`
 inner join `currencies` as `from_currencies` on `from_currencies`.`id` = `deals_public`.`from_currency`
 inner join `currencies` as `to_currencies` on `to_currencies`.`id` = `deals_public`.`to_currency`
 where
    `users`.`active` = 1
     and from_currency like '34'
     and to_currency like '35'
 limit
     20

I have a column in the deals_public table named register that holds a created_at timestamp, and I want to get the latest row per user_id. That is, I want only one record from deals_public for each user_id.

EDIT:

I got this working by using:

group by user_id

but this query is taking too long (15 seconds).

How can I reduce this time?

2 Answers


You just need to add a criterion, like:

and not exists (select 1 from deals_public dp where dp.register > deals_public.register and dp.user_id = deals_public.user_id)

because that's just what you want: the nonexistence of a newer public deal for that user.
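If this runs slowly on a large table, the anti-join above typically benefits from a composite index on the two correlated columns, so each NOT EXISTS probe can seek instead of scanning the table. A minimal sketch, assuming the column names from your query (the index name itself is made up):

```sql
-- Hypothetical index; lets the correlated subquery check
-- "is there a newer row for this user" with an index seek.
create index idx_deals_user_register
    on deals_public (user_id, register);
```

Check the effect with `EXPLAIN` before and after; if the subquery still shows a full table scan, the index is not being used as intended.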

Lajos Arpad
  • This is working, but there is a problem: my table has almost 1 million rows and it's getting bigger, so this query takes forever (almost 3 minutes). –  Oct 03 '20 at 12:46
  • I could solve this issue by using `group by users.id`, but it took too long. –  Oct 03 '20 at 12:47
  • @AmirPirmoradian then you can `select CONCAT('(', group_concat(max(id)), ')') from deals_public group by user_id;`. If you store the result into a variable and then generate the query using the result, then it should be much quicker. – Lajos Arpad Oct 03 '20 at 13:09
  • @AmirPirmoradian this is how you generate a query: https://stackoverflow.com/questions/23178816/mysql-dynamic-query-in-stored-procedure – Lajos Arpad Oct 03 '20 at 13:10
  • @AmirPirmoradian and you want to concat in such a manner that you will end up with `user_id in (7, 10, 48)` or the like. – Lajos Arpad Oct 03 '20 at 13:10

If you are running MySQL 8.0, you can use window functions for this.

I would phrase this as:

select
    u.id as user_id,
    u.cat as cat,
    fc.id as from_currency,
    tc.id as to_currency,
    dp.change_from as change_from,
    dp.change_to as change_to,
    u.site as link,
    u.username as user_name,
    u.email as email,
    u.active as active,
    fc.name as from_name,
    fc.sign as from_sign,
    tc.name as to_name,
    tc.sign as to_sign
from (
    select dp.*, row_number() over(partition by user_id order by register desc) rn 
    from deals_public dp
    where from_currency = 34 and to_currency = 35
) dp
inner join users u on u.id = dp.user_id
inner join currencies as fc on fc.id = dp.from_currency
inner join currencies as tc on tc.id = dp.to_currency
where dp.rn = 1 and u.active = 1 
limit 20

Rationale:

  • the idea is to enumerate the rows of each user in deals_public with row_number(), then use that information to filter in the outer query

  • we can also pre-filter that table on from_currency and to_currency, which might improve the efficiency of the query; I changed the like conditions to equality conditions (in the absence of wildcards in the right operand, both are equivalent) - and since these look like numbers, compare them as numbers

  • table aliases make the query easier to write and read
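If the derived-table scan is still the bottleneck, an index covering the pre-filter columns plus the window's partition and order columns could help. A sketch, assuming the column names from the query above (the index name is hypothetical; verify against your actual schema with `EXPLAIN`):

```sql
-- Hypothetical index: supports the currency pre-filter and lets
-- row_number() read rows already grouped by user and ordered by register.
create index idx_deals_currency_user_register
    on deals_public (from_currency, to_currency, user_id, register);
```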

GMB