
The following is my sample table and rows:

create table com (company text, val int);


insert into com values ('com1',1),('com1',2),('com1',3),('com1',4),('com1',5);

insert into com values ('com2',11),('com2',22),('com2',33),('com2',44),('com2',55);

insert into com values ('com3',111),('com3',222),('com3',333),('com3',444),('com3',555);

I want to get the top 3 values for each company; the expected output is:

company  val
---------------
com1     5
com1     4
com1     3
com2     55
com2     44
com2     33
com3     555
com3     444
com3     333
– zeoran
  • A solution is here: http://sqlfiddle.com/#!15/5692e/5, but the column is of text type, so PARTITION BY is not recommended. I am thinking of something else. – Deep Dec 11 '14 at 05:18
  • Same question with answers here: http://stackoverflow.com/questions/1124603/grouped-limit-in-postgresql-show-the-first-n-rows-for-each-group – Kouber Saparev Dec 11 '14 at 11:37

3 Answers


Try this:

SELECT company, val
FROM (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY company ORDER BY val DESC) AS row_id
    FROM com
) AS ranked
WHERE row_id < 4
ORDER BY company;

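If you want to sanity-check the window-function approach without a Postgres instance, here is a minimal Python sketch that runs essentially the same query against an in-memory SQLite database (this assumes the bundled SQLite is 3.25+, which added window functions; the `ranked` alias is just illustrative):

```python
import sqlite3

# In-memory SQLite stand-in for the Postgres table from the question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE com (company TEXT, val INT)")
rows = [("com1", v) for v in (1, 2, 3, 4, 5)] \
     + [("com2", v) for v in (11, 22, 33, 44, 55)] \
     + [("com3", v) for v in (111, 222, 333, 444, 555)]
conn.executemany("INSERT INTO com VALUES (?, ?)", rows)

# Rank rows within each company by val descending, keep the top 3.
top3 = conn.execute("""
    SELECT company, val FROM (
        SELECT *, ROW_NUMBER() OVER (
            PARTITION BY company ORDER BY val DESC) AS row_id
        FROM com
    ) ranked
    WHERE row_id < 4
    ORDER BY company, val DESC
""").fetchall()

print(top3)
```

Note the extra `val DESC` in the outer ORDER BY, added here only so the printed order is deterministic; the set of rows matches the expected output in the question.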

– Veera
    you should use `order by val DESC` – Vivek S. Dec 11 '14 at 05:24
  • Well, I had the same thing in mind and posted it as a comment. But the thing is, the column is of text type, so if the data strings are very large and there are many of them, it will create a performance issue. – Deep Dec 11 '14 at 05:31
  • This fixed my problem with the above answer: a plain SQL SELECT returns duplicated rows, and my business logic requires returning only the top 2 rows per duplicated value, not all of them. Thanks! – Meas Aug 19 '21 at 09:57

Since v9.3 you can do a lateral join:

select distinct com_outer.company, com_top.val from com com_outer
join lateral (
    select * from com com_inner
    where com_inner.company = com_outer.company
    order by com_inner.val desc
    limit 3
) com_top on true
order by com_outer.company;

It might be faster but, of course, you should test performance specifically on your data and use case.
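Expressed in plain Python, what the lateral subquery computes is the top three vals for each distinct company. A sketch over the sample data from the question:

```python
from collections import defaultdict

# Sample rows from the question.
rows = [("com1", v) for v in (1, 2, 3, 4, 5)] \
     + [("com2", v) for v in (11, 22, 33, 44, 55)] \
     + [("com3", v) for v in (111, 222, 333, 444, 555)]

# Group vals by company, mirroring the correlated reference
# com_inner.company = com_outer.company.
groups = defaultdict(list)
for company, val in rows:
    groups[company].append(val)

# Per company: order by val descending, limit 3 (the lateral subquery).
top3 = [(c, v)
        for c in sorted(groups)
        for v in sorted(groups[c], reverse=True)[:3]]
print(top3)
```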

– poshest

You can try arrays, which are available since Postgres v9.0.

WITH com_ordered AS (
    SELECT * FROM com ORDER BY company, val DESC
)
SELECT company, unnest((array_agg(val))[1:3])
FROM com_ordered
GROUP BY company;
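In Python terms, the array approach is aggregate, slice, flatten: collect each company's vals into a list (`array_agg`), keep the first three elements (the array slice), then expand back to rows (`unnest`). A sketch with the sample data:

```python
# Sample rows from the question, pre-sorted like the com_ordered CTE
# (company ascending, val descending).
rows = [("com1", v) for v in (1, 2, 3, 4, 5)] \
     + [("com2", v) for v in (11, 22, 33, 44, 55)] \
     + [("com3", v) for v in (111, 222, 333, 444, 555)]
ordered = sorted(rows, key=lambda r: (r[0], -r[1]))

# array_agg: one list of vals per company, in the CTE's order.
agg = {}
for company, val in ordered:
    agg.setdefault(company, []).append(val)

# Slice the first three elements of each array, then unnest into rows.
result = [(c, v) for c, vals in agg.items() for v in vals[:3]]
print(result)
```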
    
– cynox