
The data I'm receiving has timestamps down to nanoseconds (which we actually care about). Is there a way for Postgres timestamps to go to nanoseconds?

user1005909
  • Using a `bigint` to store timestamps would give you enough room to record nanoseconds. What does your input format look like? – Dave Gray Oct 18 '17 at 07:14
  • Related question: https://stackoverflow.com/questions/46790668/what-is-the-most-elegant-way-to-store-timestamp-with-nanosec-in-postgresql – nh2 Jul 11 '18 at 01:28
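
As the first comment suggests, one workaround (absent a native type) is to keep the raw value in a `bigint`. A minimal sketch of that idea, with made-up table and column names:

```sql
-- Sketch only: store the incoming value as raw nanoseconds since the UNIX epoch.
CREATE TABLE raw_events (
    ts_ns   bigint NOT NULL,   -- full nanosecond precision is preserved here
    payload jsonb
);

CREATE INDEX raw_events_ts_ns_idx ON raw_events (ts_ns);

-- For display, convert to a regular timestamptz (microsecond precision) and
-- keep the sub-second nanoseconds alongside it; nothing is lost in storage.
SELECT to_timestamp(ts_ns / 1000000000.0) AS ts_approx,
       ts_ns % 1000000000                 AS nanos_within_second
FROM raw_events;
```

A signed 64-bit nanosecond count covers roughly the years 1677 to 2262, which is the same limitation noted in the timestamp9 answer below.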

3 Answers


As others have pointed out, Postgres doesn't provide such a type out of the box. However, because Postgres is open source and extensible, it's relatively simple to create an extension that supports nanosecond resolution. I faced a similar issue a while ago and created this timestamp9 extension for Postgres.

It internally stores the timestamp as a bigint, defined as the number of nanoseconds since the UNIX epoch, and it provides some convenience functions that make it easy to view and manipulate the timestamps. If you can live with the limited time range these timestamps cover, from the year 1970 to the year 2262, then this is a good solution.
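
For illustration, basic usage looks roughly like this; the table is made up, and the exact casts and helper functions are documented in the extension's README:

```sql
-- Rough sketch; exact cast/function names are per the extension's README.
CREATE EXTENSION timestamp9;

CREATE TABLE ticks (
    ts    timestamp9,   -- nanoseconds since the UNIX epoch, a bigint under the hood
    price numeric
);

-- A bigint nanosecond count can be cast to the type and back.
INSERT INTO ticks (ts, price)
VALUES (1508310000123456789::bigint::timestamp9, 42.0);

SELECT ts, ts::bigint AS ns_since_epoch FROM ticks;
```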

Disclaimer: I'm the author of the extension

fvannee

Nope, but you could trim the timestamps to milliseconds and store the nanosecond part in a separate column. You can create an index on both, add a view or function that returns the full nanosecond timestamp you want, and even create an index on that function.
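
A minimal sketch of that split, following the seconds-plus-nanoseconds variant suggested in the comment below (the table, column, and function names are made up):

```sql
-- Hypothetical schema: whole seconds in a regular timestamptz,
-- the sub-second part as nanoseconds (0..999999999) in an int.
CREATE TABLE events (
    id       bigserial PRIMARY KEY,
    ts_sec   timestamptz NOT NULL,   -- truncated to whole seconds
    ts_nanos integer     NOT NULL CHECK (ts_nanos BETWEEN 0 AND 999999999)
);

-- Reassemble the full value as nanoseconds since the UNIX epoch.
-- Declared IMMUTABLE so it can back an expression index; extracting the
-- epoch does not depend on the session time zone.
CREATE FUNCTION ns_since_epoch(ts_sec timestamptz, ts_nanos integer)
RETURNS bigint
LANGUAGE sql IMMUTABLE AS $$
    SELECT (extract(epoch FROM ts_sec))::bigint * 1000000000 + ts_nanos
$$;

-- Index the raw columns, and the reassembled value via an expression index.
CREATE INDEX events_ts_idx    ON events (ts_sec, ts_nanos);
CREATE INDEX events_ts_ns_idx ON events (ns_since_epoch(ts_sec, ts_nanos));
```

Ordering by `(ts_sec, ts_nanos)` is chronological, and the expression index lets you filter on the reassembled bigint directly.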

jalmasi
  • If you're going this route, you should store seconds and nanos, not millis and nanos. The nanos still fit in an int32, and you'll outlast the int64 millisecond overflow. – kd8azz Dec 12 '17 at 01:53

On the one hand, the documented resolution is 1 microsecond.

On the other hand, PostgreSQL is open source. Maybe you can hack together something to support nanoseconds.

Mike Sherrill 'Cat Recall'