
Using PostgreSQL 10.4.

I have a table my_table1 with a column attr_a declared as jsonb

Within that field I have {"my_attrs": {"UpdatedDttm": "<date as iso8601 str>"}}

for example:

{"my_attrs": {"UpdatedDttm": "2018-09-20T17:55:52Z"}}

I use the date in a query that needs to ORDER BY it (and to compare it with another date).

Because the dates are stored as strings, I have to convert them to a timestamp on the fly, using to_timestamp.
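For concreteness, the kind of query involved looks roughly like this (a sketch only; the cutoff date is a hypothetical example, and the format string matches the one in the index attempt):

```sql
-- Illustrative sketch only: the cutoff date is hypothetical.
SELECT *
FROM   my_table1
WHERE  to_timestamp(attr_a->'my_attrs'->>'UpdatedDttm',
                    'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"') >= '2018-09-01T00:00:00Z'::timestamptz
ORDER BY to_timestamp(attr_a->'my_attrs'->>'UpdatedDttm',
                      'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"') DESC;
```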

But I also need to put that ordering and filtering expression into an index, so that Postgres can leverage it.

Problem is, my index creation is failing with the message:

create index my_table1__attr_a_jb_UpdatedDttm__idx ON my_table1 using btree (((to_timestamp(attr_a->'my_attrs'->>'UpdatedDttm', 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"')) ) DESC);

ERROR: functions in index expression must be marked IMMUTABLE
SQL state: 42P17

Is there a way to get this to work (hopefully without creating a custom date formatting function)? I also need to keep the time zone (I cannot drop it).

I checked a similar question, and I am using what is recommended there: to_timestamp, without the ::timestamptz typecast.

Still getting the error if I am using the built in function.

That answer suggests creating my own function, but I was trying to avoid that (as noted at the bottom of my question), because it requires rewriting/changing all the SQL statements that use the table.

  • could not find a way to avoid creating a custom function, so I ended up with a function that simply wraps to_timestamp (my scenario did not allow for to_char, as suggested in the duplicate question link) – V P Aug 21 '18 at 20:22
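A minimal sketch of such a wrapper (the function name is hypothetical). One caveat: to_timestamp(text, text) is classified STABLE because its result can depend on the session's TimeZone setting (the trailing "Z" in this format is consumed as a literal to skip, not as a UTC designator), so marking the wrapper IMMUTABLE is a promise that this setting will not change between index writes and reads:

```sql
-- Hypothetical wrapper: marked IMMUTABLE so it is allowed in an index expression.
-- Caveat: this asserts the session TimeZone is effectively fixed; to_timestamp
-- treats the trailing "Z" in the format as a literal, not as a time zone.
CREATE FUNCTION to_timestamp_immutable(txt text)
RETURNS timestamptz
LANGUAGE sql IMMUTABLE AS
$$ SELECT to_timestamp(txt, 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"') $$;

CREATE INDEX my_table1__attr_a_jb_UpdatedDttm__idx
  ON my_table1 (to_timestamp_immutable(attr_a->'my_attrs'->>'UpdatedDttm') DESC);
```

Queries then have to call the same wrapper expression for the planner to match the index, which is exactly the SQL rewrite the question was hoping to avoid.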

1 Answer


You could create an INSERT/UPDATE trigger that moves that nested JSONB value out to a top-level column; then index that like normal. You'd then also use that top-level column in your queries, effectively ignoring the nested JSONB value.
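A sketch of that approach, with illustrative column, function, and trigger names (the column type and format string are assumptions based on the question):

```sql
-- Illustrative sketch: mirror the nested JSONB value into a real column.
ALTER TABLE my_table1 ADD COLUMN updated_dttm timestamptz;

CREATE FUNCTION sync_updated_dttm() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
  -- Keep the plain column in step with the nested JSONB value on every write.
  NEW.updated_dttm :=
    to_timestamp(NEW.attr_a->'my_attrs'->>'UpdatedDttm',
                 'YYYY-MM-DD"T"HH24:MI:SS.MS"Z"');
  RETURN NEW;
END;
$$;

CREATE TRIGGER my_table1_sync_updated_dttm
  BEFORE INSERT OR UPDATE ON my_table1
  FOR EACH ROW EXECUTE PROCEDURE sync_updated_dttm();

-- A plain btree index on the mirrored column needs no IMMUTABLE expression.
CREATE INDEX my_table1__updated_dttm__idx ON my_table1 (updated_dttm DESC);
```

Queries would then ORDER BY and filter on updated_dttm directly.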

Cody Caughlan
  • thx, unfortunately I actually have a non-trivial number of these dates, and index creation is needed on some of them periodically (not all the time). Moving them out of JSON and duplicating their values in plain columns is somewhat impractical, due to the complexity of managing the duplicate values. – V P Aug 21 '18 at 17:06