
I am trying to unpivot a large dataset with 250 columns. There is a well-documented solution here: unpivot and PostgreSQL.

However, it requires the column names to be entered manually. I'm looking to do something like:

  • extract all column names into an array
  • pass the array through unnest

OR,

  • extract all column names into an array
  • loop through the array by index
  • use the column-name values as input to unnest

Apologies for being a noob; I'm new to SQL!
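For the first step above, extracting all column names into an array, the catalog view `information_schema.columns` can help. A sketch, assuming the table lives in the `public` schema:

```sql
-- Collect the column names of foo (except the key column id) into an array.
-- 'public' is an assumed schema name; adjust as needed.
SELECT array_agg(column_name::text ORDER BY ordinal_position) AS column_names
FROM information_schema.columns
WHERE table_schema = 'public'
  AND table_name   = 'foo'
  AND column_name <> 'id';
```

Note that plain SQL cannot turn those name strings back into column references; that last step needs either dynamic SQL or a row-to-JSONB conversion.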

This dataset is good enough for demonstration purposes:

CREATE TEMP TABLE foo (id int, a text, b text, c text);
INSERT INTO foo VALUES (1, 'ant', 'cat', 'chimp'), (2, 'grape', 'mint', 'basil');

SELECT id,
       unnest(array['a', 'b', 'c']) AS colname,
       unnest(array[a, b, c]) AS thing

--     I would like something like.. unnest(array[column_names]) AS thing
--     where column_names = [a,b,c.. so on]

FROM foo
ORDER BY id;

Expected outcome:

 id | colname | thing
----+---------+-------
  1 | a       | ant
  1 | b       | cat
  1 | c       | chimp
  2 | a       | grape
  2 | b       | mint
  2 | c       | basil
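The loop variant can be wired together with dynamic SQL in PL/pgSQL. A sketch, assuming the table is `foo`, the key column is `id`, and all value columns are `text`; the function name `unpivot_foo` is made up:

```sql
-- Build the unnest(array[...]) query from the catalog and run it.
CREATE OR REPLACE FUNCTION unpivot_foo()
RETURNS TABLE (id int, colname text, thing text)
LANGUAGE plpgsql AS $$
DECLARE
    cols text[];
BEGIN
    -- Gather every column name except the key column.
    SELECT array_agg(column_name::text ORDER BY ordinal_position)
    INTO cols
    FROM information_schema.columns
    WHERE table_name = 'foo' AND column_name <> 'id';

    -- %L quotes the array as a literal for the names;
    -- %s splices the names in as column references (assumes no
    -- names that need quoting; otherwise format each with %I).
    RETURN QUERY EXECUTE format(
        'SELECT id, unnest(%L::text[]) AS colname,
                unnest(array[%s]) AS thing
         FROM foo ORDER BY id',
        cols, array_to_string(cols, ', '));
END
$$;

SELECT * FROM unpivot_foo();
```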
  • I'd avoid EAV model for performance reasons. This could mean that to read `x` attributes of one entity (from a total of `n` entities) you can need `O(x*log(x*n))` random reads from the disk, if it happens to not be cached. A jsonb column with all your attributes can be much faster, as all attributes would be saved in the same disk block, so it would need only `O(log(n))` reads in the worst case. – Tometzky Dec 10 '17 at 10:58

1 Answer


Use the JSONB functions, for example:

select id, key as colname, value as thing
from foo t
cross join jsonb_each_text(to_jsonb(t) - 'id');

 id | colname | thing 
----+---------+-------
  1 | a       | ant
  1 | b       | cat
  1 | c       | chimp
  2 | a       | grape
  2 | b       | mint
  2 | c       | basil
(6 rows)    
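If several key columns must be kept out of the key/value pairs, the `-` operator also accepts a text array (PostgreSQL 10+). A sketch against the same `foo` table; `batch_id` is a hypothetical second key column:

```sql
-- jsonb - text[] removes several keys at once before expanding the row.
-- 'batch_id' does not exist in foo; removing an absent key is a no-op.
SELECT id, key AS colname, value AS thing
FROM foo t
CROSS JOIN jsonb_each_text(to_jsonb(t) - array['id', 'batch_id'])
ORDER BY id, colname;
```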
klin