
I am using postgres and I have a table with a column whose data type is timestamp without time zone.

I would like to change the data type to bigint. I am trying to store the number of seconds since 1970 in the column, so something big like 1397597908756.

I am using python, something like:

d = dict()  # create a dictionary, has key 'timestamp'
# get data from server and store in array
d.update(dict(timestamp=data[1]))  # data[1] has the number of seconds

I touch the server many times, so storing the data in a dictionary is essential. The query is:

cursor.execute("INSERT into tablename columname VALUES (%s)", (quote['timestamp'];

At this point, an exception is thrown:

invalid input syntax for type timestamp: 1397597908756

So I tried to change the data type from timestamp without time zone to bigint. I did:

ALTER TABLE tablename ALTER COLUMN columnname
SET DATA TYPE bigint USING columnname::bigint;

I got the following error:

ERROR: cannot cast type timestamp without time zone to bigint

  • Please add the SQL that caused the first error, the programming language you're using, and the PostgreSQL version you're using. – Craig Ringer Apr 16 '14 at 04:02
  • @CraigRinger The SQL query given above is causing the exception 'invalid input syntax for type timestamp'. The language I am using is python. The postgresql version is 9.3. – Naz Apr 16 '14 at 04:51
  • If your only problem is *invalid input syntax* while inserting, try to use the following: `INSERT ... VALUES (to_timestamp($1::bigint))`. If you really want to change your column's type, see the answers below. Or you could use datetime objects in Python. – pozs Apr 16 '14 at 08:19
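
pozs's suggestion, adapted to psycopg2's %s placeholder style, might look roughly like this (a sketch; tablename and columnname are the question's placeholder names):

-- Keep the timestamp column and let PostgreSQL convert on insert:
-- to_timestamp() turns epoch seconds into a timestamp with time zone,
-- which is then cast to the column's type when the row is stored.
INSERT INTO tablename (columnname) VALUES (to_timestamp(%s::bigint));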

2 Answers

ALTER TABLE tablename ALTER COLUMN columnname
TYPE bigint USING EXTRACT(EPOCH FROM columnname);

The manual:

for date and timestamp values, the number of seconds since 1970-01-01 00:00:00 local time;

EXTRACT returns float8 with up to 6 fractional digits (microsecond resolution), which will be rounded when coerced to bigint. Typically, you'd rather truncate with date_trunc() first, or multiply by 1000000 before the cast to get microseconds instead of seconds (and no rounding loss).
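
For example, the microseconds variant might look like this (a sketch, reusing the placeholder names from above):

-- Store microseconds since the epoch so the sub-second part survives without rounding.
ALTER TABLE tablename ALTER COLUMN columnname
TYPE bigint USING (EXTRACT(EPOCH FROM columnname) * 1000000)::bigint;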

Erwin Brandstetter

Erwin's answer is correct, I just want to address another aspect.

I am trying to store number of seconds since 1970 in the column

Please don't do that. It's annoying to query and is of no storage benefit over just using a timestamp.

Store timestamp without time zone. Or, preferably, use timestamp with time zone so it's properly adjusted for the TimeZone setting of the client.

If the client app needs epoch seconds, it can always select extract(epoch from my_field), .... But really, your app should be able to consume dates properly anyway.
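
For instance (a sketch; my_table is a hypothetical table name to go with my_field):

-- Hand the client whole epoch seconds computed from the stored timestamp.
SELECT extract(epoch FROM my_field)::bigint AS epoch_seconds
FROM my_table;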

It isn't clear what problem you're attempting to solve in your application by switching to storing raw epoch seconds; there may be cases where you have to. But personally, rather than do something like this, I'd instead define an updatable view that returned epoch seconds and converted the input bigints into a timestamp for storage in the underlying table. So the application would think it had a table with epoch seconds, but it was really working with timestamps. Of course, I'd prefer to just make my app work with timestamps properly in the first place, converting to epoch seconds in the application if some client interface required it.
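
A minimal sketch of that view-based approach, assuming a hypothetical measurements table (none of these names come from the question):

-- Base table stores a real timestamp.
CREATE TABLE measurements (
    id    serial PRIMARY KEY,
    taken timestamptz NOT NULL
);

-- The view presents the timestamp as epoch seconds to the application.
CREATE VIEW measurements_epoch AS
SELECT id, extract(epoch FROM taken)::bigint AS taken_epoch
FROM measurements;

-- The expression column makes the view non-updatable by default, so an
-- INSTEAD OF trigger converts incoming epoch seconds back into a timestamp.
CREATE FUNCTION measurements_epoch_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO measurements (taken) VALUES (to_timestamp(NEW.taken_epoch));
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_epoch_insert_trg
INSTEAD OF INSERT ON measurements_epoch
FOR EACH ROW EXECUTE PROCEDURE measurements_epoch_insert();

The application can then insert and select taken_epoch through measurements_epoch, while the base table keeps proper timestamps.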

Update:

In your case, just insert a datetime.datetime object.

import datetime

# Convert epoch seconds into a Python datetime.datetime object that psycopg2 will
# understand as a date and insert as a PostgreSQL timestamp value.
ts = datetime.datetime.fromtimestamp(d['timestamp'])

cursor.execute("INSERT into tablename columname VALUES (%s)", (ts,) )
Craig Ringer
  • Right. But the problem is that I am touching some server and making a request. It sends me back a response with the timestamp as a number of seconds. I need to store that in my table. That's why I got the error in the first place. How do I store a "number of seconds" as timestamp without time zone in my table? – Naz Apr 16 '14 at 03:59
  • @Naz I see. Rather than changing your schema to accommodate what the remote server sends you, you should convert the data it sends you into the date data type in your programming language, which will then be stored correctly. You haven't mentioned your language or shown the query you're running, so it's hard to be more specific than that. – Craig Ringer Apr 16 '14 at 04:01
  • I am using Python, so it's something like this: – Naz Apr 16 '14 at 04:12
  • I am using Python, so it's something like this: d = dict() # create a dictionary, has key 'timestamp' # get data from server and store in array d.update(dict(timestamp=data[1])) # data[1] has the number of seconds. I touch the server many times so storing in a dictionary is essential. The query is: cursor.execute("INSERT INTO tablename (columnname) VALUES (%s)", (d['timestamp'],)) At this point, an exception is thrown as mentioned in my initial question. – Naz Apr 16 '14 at 04:24
  • Please **edit the question** to add new code examples etc. That way they're readable. You can then respond with a comment to send notifications to people you're talking to. See my edit. – Craig Ringer Apr 16 '14 at 04:41
  • Edited answer with how you should be doing it. To learn more, see the psycopg2 documentation. I suspect you might also be running `insert` statements in a loop, in which case you should be looking into psycopg2's `copy_from` support. – Craig Ringer Apr 16 '14 at 04:49
  • @Naz Glad to help, please accept the answer (tick outline under the voting arrows in top left). – Craig Ringer Apr 16 '14 at 05:52