I have this locally defined Python function that works fine when inserting data into a Redshift table:
```python
def _insert_data(table_name, values_list):
    insert_base_sql = f"INSERT INTO {table_name} VALUES"
    insert_sql = insert_base_sql + ','.join([str(row) for row in values_list])
    # <run_local_python_code_to_execute_insert_sql>
```
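For reference, here's a minimal repro of the SQL string this builds (the sample values below are made up, not my real data):

```python
values_list = [(1, '2023-01-01', 7, 8, 1.5, 2, 'some_name')]  # hypothetical row

insert_sql = "INSERT INTO public.table VALUES" + ','.join([str(row) for row in values_list])
print(insert_sql)
# INSERT INTO public.table VALUES(1, '2023-01-01', 7, 8, 1.5, 2, 'some_name')
```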
The `values_list` is a list of tuples, each with the same number of elements as there are columns in `table_name` (even though I don't explicitly assert/check that in this function). However, I'm failing to find a way to insert a NULL value for a `smallint` column. Here's the schema for the table in question (no `DEFAULT` values were assigned to columns at table creation):
```
 schemaname |      tablename      |    column    |         type          | encoding | distkey | sortkey | notnull
------------+---------------------+--------------+-----------------------+----------+---------+---------+---------
 public     | table               | col1         | bigint                | lzo      | t       |       1 | f
 public     | table               | col2         | date                  | lzo      | f       |       2 | f
 public     | table               | col3         | smallint              | lzo      | f       |       3 | f
 public     | table               | col4         | smallint              | lzo      | f       |       4 | f
 public     | table               | col5         | double precision      | none     | f       |       0 | f
 public     | table               | col6         | bigint                | lzo      | f       |       0 | f
 public     | table               | col7         | character varying(48) | bytedict | f       |       0 | f
```
I'm specifically trying to insert NULL values for `col3` and `col4`; I've tried creating tuples with `''` and `'NULL'` in those positions, but I run into this error: `Error: invalid input syntax for integer: "NULL"`.
For what it's worth, here's what a scrubbed row ends up looking like in the `INSERT` statement: `('bigint_value', 'dt_value', 'NULL', 'NULL', 'double_value', 'bigint_value', 'string_name')`