18

I'm running a really big query that inserts a lot of rows into a table, almost 8 million rows split across several smaller queries, but at some point I get this error: "could not write block .... of temporary file: No space left on device ..." (using PostgreSQL). I don't know whether I need to delete temporary files after each query (and how I would do that), or whether this is related to some other issue.

Thank you

jacr1614
  • What does your query look like? How much space is left on the device before running the query? – frlan Apr 22 '14 at 18:44
  • It is an INSERT query with some VALUES() ... each query has almost 100,000 VALUES clauses ... – jacr1614 Apr 22 '14 at 18:47
  • ... and how is the table you are inserting into defined? Please update the question – frlan Apr 22 '14 at 18:49
  • Well, the same way you'd fix any out of space error? Find more space? Or is there something about the state of the system and the nature of the query that makes you think there *should* be enough space, and that it's using way more than you expected? – IMSoP Apr 22 '14 at 18:57
  • More specifically, is *this query* taking up lots of space (i.e. free space drops dramatically when you run it) or is *the database as a whole* (permanently) taking a lot of space? – IMSoP Apr 22 '14 at 19:00
  • OK, I need to insert a lot of data, almost 8 million rows. The PHP script executes the inserts in groups of 100,000 rows. It manages to insert about 7,500,000 rows and then shows the error mentioned above – jacr1614 Apr 22 '14 at 19:31

3 Answers

17

OK. As there are still some facts missing, here is an attempt at an answer that may clarify the issue:

It appears that you are running out of disk space, most likely because there simply isn't enough free space on the disk. On Linux/Unix, check with df -h, for example.
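As a complement to df -h, here is a rough sketch of checking sizes from inside PostgreSQL (my_table is just a placeholder for the table you are inserting into):

SELECT pg_size_pretty(pg_database_size(current_database())) AS database_size;
SELECT pg_size_pretty(pg_total_relation_size('my_table')) AS table_size_incl_indexes;

Running these before and after a batch gives an idea of how fast the on-disk footprint grows.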

To show how this can happen: for a table with, say, 3 integer columns, the data alone occupies about 12 bytes per row. On top of that you need to add some overhead for row management etc.; in another answer Erwin mentioned about 23 bytes and linked to the manual for more information. There may also be some padding between rows. So, doing a little math:

Even with just 3 integers we end up at about 40 bytes per row. Keeping in mind that you want to insert 8,000,000 rows, this sums up to 320,000,000 bytes, or roughly 300 MB (for our 3-integer example only, and very roughly).
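If you want to see that overhead yourself, one rough way is to measure a row value directly; pg_column_size includes the tuple header, so for three integers it typically comes out around 36 bytes (the exact number can vary with alignment and version):

SELECT pg_column_size(ROW(1::int, 2::int, 3::int)) AS approx_bytes_per_row;
-- multiply by 8,000,000 rows to get a ballpark figure, before indexes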

Now, given that you have a couple of indexes on this table, those indexes will also grow during the inserts. Another aspect could be bloat on the table and indexes, which might be cleared with a vacuum.

So what's the solution:

  1. Provide more disk space to your database
  2. Split your inserts into smaller batches and ensure a vacuum runs between them (see the sketch below)
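A rough sketch of what such batching could look like; my_table, its columns and the generated values are placeholders, and (as pointed out in the comments below) the VACUUM steps only help if the inserts actually leave dead rows behind:

INSERT INTO my_table (a, b, c)
SELECT i, i * 2, i * 3 FROM generate_series(1, 100000) AS i;      -- batch 1
VACUUM my_table;

INSERT INTO my_table (a, b, c)
SELECT i, i * 2, i * 3 FROM generate_series(100001, 200000) AS i; -- batch 2
VACUUM my_table;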
frlan
  • if you *only* run inserts then running vacuum between them won't change anything. `vacuum` only cleans out obsolete rows that have been deleted or updated. `insert` will never create rows that are eligible for vacuuming. –  Apr 23 '14 at 07:17
  • We have no clue what the query set really looks like, so I think it won't hurt. – frlan Apr 23 '14 at 07:19
  • I'm testing using System('df -h >> disk_usage.log') to learn more about that, and I'm not sure vacuum works in this case. – jacr1614 Apr 23 '14 at 14:03
  • +1 for granular inserts. The OP can use them to estimate the amount of space they need to come up with to do the full insert. However, `VACUUM` won't ever return storage to the OS (`VACUUM FULL` will, but it requires a lot of overhead space to work, and it won't help unless there are a lot of dead rows). – Daniel Lyons Apr 23 '14 at 19:53
  • I am having the very same error with roughly 450GB of space on my disk, although the table is only a few MB ... – Eduardo Jul 05 '17 at 08:28
10

Inserting data or creating an index always needs temp_tablespaces, which determines the placement of temporary tables and indexes, as well as temporary files that are used for purposes such as sorting large data sets. According to your error, the location your temp_tablespaces points to does not have enough disk space.
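To get a feeling for how much temporary file space this database has been spilling to disk, one possible check (these statistics columns live in pg_stat_database):

SELECT datname, temp_files, pg_size_pretty(temp_bytes) AS temp_bytes_written
FROM pg_stat_database
WHERE datname = current_database();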

To resolve this problem, you can try these two approaches:
  1. Reclaim space at the location your temp_tablespaces points to (by default /PG_DATA/base/pgsql_tmp)

  2. If the space for your temp_tablespaces is still not enough for temporary storage, you can create another temp tablespace for that database:

create tablespace tmp_YOURS location '[a location with enough space]';
alter database yourDB set temp_tablespaces = tmp_YOURS;
GRANT ALL ON TABLESPACE tmp_YOURS to USER_OF_DB;

Then disconnect the session and reconnect.
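After reconnecting, a quick sanity check could look like this (tmp_YOURS as named above):

SHOW temp_tablespaces;
SELECT spcname, pg_size_pretty(pg_tablespace_size(spcname)) AS size
FROM pg_tablespace;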

BongSey
1

The error is quite self-explanatory: you are running a big query, yet you do not have enough disk space to do so. If PostgreSQL is installed under /opt, check whether you have enough space there to run the query. If not, LIMIT the output to confirm you are getting the expected results, and then proceed to run the full query and write the output to a file.
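If you are not sure which filesystem PostgreSQL is actually writing to, this shows where the cluster keeps its data, so you know which mount point to check for free space:

SHOW data_directory;
-- then check the filesystem holding this directory (e.g. with df) for free space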

Brayoni