
Apparently there is a row size limit (in bytes) for B-tree index entries in PostgreSQL, which causes index creation to fail if a string is too big.

How can I create a partial index that works around this limitation? Generally, I have two ideas: A) store a truncated string inside the partial index, or B) store only those rows in which the string satisfies the size limit.

What is your advice, or an example of how to create such partial indexes?
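For reference, a minimal sketch of both ideas, assuming a hypothetical table `big_table` with a text column `big_column` (names are placeholders, not from the question):

```sql
-- A) Expression index that stores only a truncated prefix of the string.
--    Equality queries must then filter on the prefix AND the full value,
--    so the index narrows the search and the full comparison finishes it:
CREATE INDEX big_table_col_prefix_idx ON big_table (left(big_column, 100));

-- Example lookup that can use the prefix index:
-- SELECT * FROM big_table
-- WHERE left(big_column, 100) = left('search value', 100)
--   AND big_column = 'search value';

-- B) Partial index covering only rows whose value fits under the limit
--    (the B-tree entry limit is roughly 2700 bytes with the default
--    8 kB page size; 2000 here is a conservative, arbitrary cutoff):
CREATE INDEX big_table_col_partial_idx ON big_table (big_column)
WHERE octet_length(big_column) < 2000;
```

Note that the planner will only consider the partial index in B) when the query's `WHERE` clause provably implies the index predicate, e.g. by repeating the `octet_length(big_column) < 2000` condition in the query.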

zlatko
  • C) compute a hash value (via a trigger function) into another column and put an index on it. – wildplasser Sep 10 '17 at 10:18
  • 4
    `create index on the_table(left(the_column(1,100))` –  Sep 10 '17 at 11:39
  • You would have to define your use case. Right now you are asking for a solution to an unknown problem. – Erwin Brandstetter Sep 10 '17 at 15:01
  • +1 for @a_horse_with_no_name, and I wonder why you try to index a huge TEXT... Normally the index should be used to search for short keys... – bobflux Sep 10 '17 at 15:03
  • The program is used to import tables from CSV files and run dynamically created SQL, and I don't know in advance which tables and columns will be used for joins or filters... Thus everything should be indexed... – zlatko Sep 10 '17 at 15:12
  • A [full text index](https://www.postgresql.org/docs/current/static/textsearch.html) might be the better choice then –  Sep 10 '17 at 16:22
  • What is the actual btree row size limit in postgres 9.6? Is it fixed or depends on some parameters? Could this size constraint be specified in bytes in the partial index, instead of len ? – zlatko Sep 10 '17 at 18:15
  • 2
    About the max. row size in btree indexes: https://dba.stackexchange.com/questions/25138/index-max-row-size-error or https://dba.stackexchange.com/questions/69161/character-varying-index-overhead-length-limit/69164. The best solution depends on your undisclosed use case. – Erwin Brandstetter Sep 11 '17 at 02:05
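A variant of wildplasser's suggestion C) that avoids the trigger-maintained column: an expression index on a hash of the value. This only supports equality searches, not range or `LIKE` queries (table and column names below are again hypothetical):

```sql
-- Hash-based expression index: md5() output is a fixed 32 characters,
-- so it always fits the B-tree row size limit regardless of the
-- length of big_column:
CREATE INDEX big_table_col_md5_idx ON big_table (md5(big_column));

-- Equality lookup: match on the hash (uses the index), then recheck
-- the full value to guard against hash collisions:
-- SELECT * FROM big_table
-- WHERE md5(big_column) = md5('search value')
--   AND big_column = 'search value';
```

Compared to the prefix-index approach, this works even when many values share a long common prefix, at the cost of losing ordered scans entirely.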

0 Answers