
I have a user table with IDs and usernames (and other details) and several other tables referring to this table with various column names (CONSTRAINT some_name FOREIGN KEY (columnname) REFERENCES "user" (userid)). What I need to do is add the usernames to the referring tables (in preparation for dropping the whole user table). This is of course easily accomplished with a single ALTER TABLE and UPDATE, and keeping these up-to-date with triggers is also (fairly) easy. But it's the trigger function that is causing me some annoyance. I could have used individual functions for each table, but this seemed redundant, so I created one common function for this purpose:

CREATE OR REPLACE FUNCTION public.add_username() RETURNS trigger AS
$BODY$
  DECLARE
    sourcefield text;  -- name of the FK column referencing "user".userid
    targetfield text;  -- name of the column that should hold the username
    username text;
    existing text;
  BEGIN
    IF (TG_NARGS != 2) THEN
      RAISE EXCEPTION 'Need source field and target field parameters';
    END IF;
    sourcefield := TG_ARGV[0];
    targetfield := TG_ARGV[1];
    -- Look up the username for the user id stored in NEW.<sourcefield>
    EXECUTE format('SELECT username FROM "user" WHERE userid = ($1).%I', sourcefield)
       INTO username USING NEW;
    -- Read the current value of NEW.<targetfield>
    EXECUTE format('SELECT ($1).%I', targetfield) INTO existing USING NEW;
    IF ((TG_OP = 'INSERT' AND existing IS NULL) OR
        (TG_OP = 'UPDATE' AND (existing IS NULL OR username != existing))) THEN
      -- This is the part I would like to get rid of:
      CASE targetfield
        WHEN 'username' THEN
          NEW.username := username;
        WHEN 'modifiername' THEN
          NEW.modifiername := username;
        WHEN 'creatorname' THEN
          NEW.creatorname := username;
        -- ... one WHEN branch for every possible target column ...
      END CASE;
    END IF;
    RETURN NEW;
  END;
$BODY$
LANGUAGE plpgsql VOLATILE;

And using the trigger function:

CREATE TRIGGER some_trigger_name
  BEFORE UPDATE OR INSERT ON my_schema.my_table
  FOR EACH ROW EXECUTE PROCEDURE public.add_username('userid', 'username');
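
The same function is then reused for the other referring tables and their column names. For example, a second referring table would be prepared and wired up like this (another_table and creatorid are made-up names here; creatorname is one of the target columns from the CASE above):

ALTER TABLE my_schema.another_table ADD COLUMN creatorname text;

UPDATE my_schema.another_table t
SET    creatorname = u.username
FROM   "user" u
WHERE  u.userid = t.creatorid;

CREATE TRIGGER another_trigger_name
  BEFORE UPDATE OR INSERT ON my_schema.another_table
  FOR EACH ROW EXECUTE PROCEDURE public.add_username('creatorid', 'creatorname');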

The way this works is that the trigger function receives the original source field name (for example userid) and the target field name (username) via TG_ARGV. These are then used to fill in the (possibly) missing information. All this works nicely enough, but how can I get rid of that CASE mess? Is there a way to dynamically modify the values in the NEW record when I don't know the name of the field in advance (or rather, it can be one of many things)? The name is in the targetfield parameter, but obviously NEW.targetfield does not work, nor does something like NEW[targetfield] (as in JavaScript, for example).

Any ideas on how this could be accomplished, besides using, for instance, PL/Python?

Teemu Karimerto

2 Answers


There is no simple plpgsql-based solution. Some possible approaches:

  1. Using the hstore extension — its #= operator replaces fields in a record:
CREATE TYPE footype AS (a int, b int, c int);

postgres=# select row(10,20,30);
    row     
------------
 (10,20,30)
(1 row)

postgres=# select row(10,20,30)::footype #= 'b=>100';
  ?column?   
-------------
 (10,100,30)
(1 row)

An hstore-based function can be very simple:

create or replace function update_fields(r anyelement,
                                         variadic changes text[])
returns anyelement as $$
select $1 #= hstore($2);
$$ language sql;

postgres=# select * 
             from update_fields(row(10,20,30)::footype, 
                                'b', '1000', 'c', '800');
 a  |  b   |  c  
----+------+-----
 10 | 1000 | 800
(1 row)
  2. Some years ago I wrote an extension, pl toolbox. It has a function record_set_fields:

pavel=# select * from pst.record_expand(pst.record_set_fields(row(10,20),'f1',33));
 name | value |   typ   
------+-------+---------
 f1   | 33    | integer
 f2   | 20    | integer
(2 rows)

You can probably find plpgsql-only solutions based on tricks with system tables and arrays, but I cannot recommend them: they are far less readable, and for a non-advanced user they are just black magic. hstore is simple and available almost everywhere, so it should be the preferred way.
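
In the trigger function from the question, the whole CASE block could then collapse to a single assignment. A sketch (assuming the hstore extension is installed; not tested against the original tables):

-- once per database:
CREATE EXTENSION IF NOT EXISTS hstore;

-- inside public.add_username(), instead of CASE targetfield ... END CASE:
NEW := NEW #= hstore(targetfield, username);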

On PostgreSQL 9.4 (maybe 9.3) you can try some black magic with JSON manipulation:

postgres=# select json_populate_record(NULL::footype, jo)
              from (select json_object(array_agg(key),
                                       array_agg(case key when 'b'
                                                          then 1000::text
                                                          else value
                                                 end)) jo
                      from json_each_text(row_to_json(row(10,20,30)::footype))) x;
 json_populate_record 
----------------------
 (10,1000,30)
(1 row)

So I am able to write a function:

CREATE OR REPLACE FUNCTION public.update_field(r anyelement,
                                               fn text, val text,
                                               OUT result anyelement)
 RETURNS anyelement
 LANGUAGE plpgsql
AS $function$
declare jo json;
begin
  -- rebuild the row as a json object, replacing the value of the field named fn
  jo := (select json_object(array_agg(key),
                            array_agg(case key when fn then val
                                               else value end))
            from json_each_text(row_to_json(r)));
  result := json_populate_record(r, jo);
end;
$function$;

postgres=# select * from update_field(row(10,20,30)::footype, 'b', '1000');
 a  |  b   | c  
----+------+----
 10 | 1000 | 30
(1 row)
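
In the trigger function from the question it could be used the same way (a sketch, not tested):

-- inside public.add_username(), instead of the CASE block:
NEW := public.update_field(NEW, targetfield, username);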

The JSON-based function will not be terribly fast; hstore should be faster.

Pavel Stehule
  • I thought about trying JSON too, but you're right, it is probably not all that fast, and `hstore` was my second choice as well but I figured I'd ask for a "pure" solution first. Seems that I have no choice but to go with that then. Thanks for the help. – Teemu Karimerto Feb 03 '15 at 06:51
  • There is hard-coding in the json-based update_field function: in "case key when 'b' then", 'b' should be replaced with fn. I was not able to improve your answer directly (2000 points are required for making changes of less than 6 chars) – Geir Bostad Feb 23 '15 at 13:48

UPDATE/caution: Erwin points out that this is currently undocumented, and the docs indicate it should not be possible to alter records this way. Use Pavel's solution or hstore.

The JSON-based solution is almost as fast as hstore when simplified. json_populate_record() modifies existing records for us, so we only have to create a json object from the keys we want to change.

See my similar answer, where you'll find benchmarks that compare the solutions.

The simplest solution requires Postgres 9.4:

SELECT json_populate_record (
      record
     ,json_build_object('key', 'new-value')
);

But if you only have Postgres 9.3, you can build the JSON object by casting a string instead of using json_build_object():

SELECT json_populate_record( 
     record
    , ('{"'||'key'||'":"'||'new-value'||'"}')::json
);
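
With the caveat at the top of this answer in mind, the same pattern could replace the CASE block in the question's trigger function (a sketch, Postgres 9.4; not tested):

-- inside public.add_username(), instead of the CASE block:
NEW := json_populate_record(NEW, json_build_object(targetfield, username));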
Geir Bostad
  • This feature is currently undocumented. Even worse: [the manual indicates otherwise](http://www.postgresql.org/docs/current/interactive/functions-json.html): `and target columns that do not match any JSON field will simply be NULL`. Pavel posted something related to this on pgsql-hackers: http://www.postgresql.org/message-id/flat/30106.1424989904@sss.pgh.pa.us#30106.1424989904@sss.pgh.pa.us – Erwin Brandstetter Mar 16 '15 at 15:14