
When I insert, I get `integer out of range` because my id/primary key column was mistakenly created as an int instead of a bigint or bigserial. I tried:

ALTER TABLE tbl ALTER COLUMN id TYPE BIGINT;

But I get the following error because my free disk space isn't big enough.

ERROR: could not extend file "base/16401/3275205": No space left on device
HINT: Check free disk space.
SQL state: 53100

I can't increase the disk space right now, for frustrating reasons I won't go into.

I also tried reusing the ids (I delete a lot of records from this table, so there are big gaps) by following these approaches to restart my sequence: https://dba.stackexchange.com/questions/111823/compacting-a-sequence-in-postgresql

But for solution #1 in that link: I assume I don't have the disk space. The table is 117GB and I have about 24GB available in ...data/base. I do have 150GB available where my temp files are stored (a different mount); that is not the default configuration, but it was done so I could conserve space for database storage in ...data/base. If I could create the table in the temp file location, that might work, but I don't know how to do that.
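
If a tablespace is the right way to do that, I'm guessing it would look something like this (the path on the temp mount is hypothetical, as is the tbl_copy name):

-- hypothetical sketch: a tablespace on the temp-file mount with the 150GB free,
-- then a copy of the table built there
CREATE TABLESPACE tmp_space LOCATION '/mnt/tempmount/pgspace';
CREATE TABLE tbl_copy (LIKE tbl INCLUDING ALL) TABLESPACE tmp_space;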

For solution #2 in that link: when I get to the update part, I get this in pgAdmin4:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<title>500 Internal Server Error</title>
<h1>Internal Server Error</h1>
<p>The server encountered an internal error and was unable to complete your request.  Either the server is overloaded or there is an error in the application.</p>

However, the query is still running when I run:

select pid,query,state,wait_event,* from pg_stat_activity where state <> 'idle'

And I get no server log entry for the failing update query.

I eventually killed that update query, thinking it would fail eventually anyway. (I am running it again now and will let it run despite the HTML error above unless someone has a better idea.)

For solution #3 in that link: I have 16GB of RAM, so not enough.

Next, from here: How to reset sequence in postgres and fill id column with new data?

I tried this:

UPDATE table SET id = DEFAULT;
ALTER SEQUENCE seq RESTART;
UPDATE table SET id = DEFAULT;

ERROR: integer out of range

This creates a duplicate key when you try to insert:

ALTER SEQUENCE seq RESTART WITH 1;
UPDATE t SET idcolumn=nextval('seq');
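
If I understand the failure, the sequence can be left behind the new max id after renumbering, so presumably it needs to be pushed past it before inserting again. Something like this, I'd guess (assuming the column is id and the sequence is seq):

-- advance the sequence so the next nextval('seq') returns max(id) + 1
SELECT setval('seq', (SELECT max(id) FROM t));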

Anything else I can try?

PostgreSQL 9.6

mountainclimber11
  • Do you have any space on other drives where you could make a tablespace and move that one table there to work on it? Otherwise you're stuck. – Scott Marlowe Oct 10 '17 at 16:15
  • 1
    create tablespace somewhere, `alter set tablespace new_tbs`, `alter type bigint` and move back to original tablespace pg_default?.. – Vao Tsun Oct 10 '17 at 16:16
  • @VaoTsun - in concept, yes. I have never done anything with tablespaces before. Is `new_tbs` a path? Looking at docs now. – mountainclimber11 Oct 10 '17 at 16:23
  • 1
    Yeah practice it on a spare machine first. You don't wanna learn to tablespace on a production server. It's pretty easy though. You make a directory, set ownership to postgres, Init it as tablespace, create a tablespace on it, and then alter table set tablespace to get it there. Really handy stuff – Scott Marlowe Oct 10 '17 at 16:25
  • Dump the data (or pg_dump) to .tsv, drop, recreate, load from .tsv or dump. – joop Oct 10 '17 at 17:16
  • @ScottMarlowe - thank you. I followed your instructions and moved it over. But I get that HTML error in pgAdmin4, yet the `ALTER TABLE tbl ALTER COLUMN id TYPE BIGINT;` stays active in `pg_stat_activity`. Let it keep working or kill it? Also, because I am limited on where I can put it, I got this warning: `tablespace location should not be inside the data directory` – mountainclimber11 Oct 10 '17 at 17:18
  • @joop - please see my previous comment to Scott. Should I keep that `ALTER TABLE tbl ALTER COLUMN id TYPE BIGINT;` query going or kill it and try your dump idea? Hoping changing the type will work... – mountainclimber11 Oct 10 '17 at 17:20
  • I haven't run any queries yet, but the tablespace trick Scott suggested worked. I'll provide an answer, but if one of you wants credit, post one too and I'll give credit and take mine down. Thanks!!! – mountainclimber11 Oct 10 '17 at 18:41
  • @joop's suggestion would be harder to implement with multiple references (dependencies) to a big table, but would require about half the downtime, I suppose - you load the data once, while alter table set tablespace would lock it twice... – Vao Tsun Oct 10 '17 at 21:45
  • @VaoTsun In both cases you could get problems with integer FK's trying to point to the new bigint field. Personally I'd just { dump to .tsv + editing the DDL+ dropping the affected objects + reloading the .tsv} – joop Oct 12 '17 at 09:12

1 Answer


Scott Marlowe's and Vao Tsun's comments worked:

on the (Linux) server, open a terminal

navigate to where you want the new tablespace to be

make a directory: mkdir dirname

give ownership to postgres: chown postgres:postgres dirname

create the tablespace: CREATE TABLESPACE new_tbl_space LOCATION '/path/dirname';

put the table in the tablespace (by the tablespace name, not the path): ALTER TABLE tbl SET TABLESPACE new_tbl_space;

do what was taking up so much disk space: ALTER TABLE tbl ALTER COLUMN id TYPE BIGINT;

change the tablespace back: ALTER TABLE tbl SET TABLESPACE pg_default;

remove the tablespace: I did that in pgAdmin4 in the Tablespaces node/object

(That was from memory. Let me know if I missed something.)
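
Putting it together, the whole thing looked roughly like this (still from memory, so treat it as a sketch; the directory path is just an example):

# in a shell on the server, as root (or via sudo):
mkdir /path/dirname
chown postgres:postgres /path/dirname

-- then in psql, as a superuser:
CREATE TABLESPACE new_tbl_space LOCATION '/path/dirname';
ALTER TABLE tbl SET TABLESPACE new_tbl_space;  -- rewrites the table into the new location
ALTER TABLE tbl ALTER COLUMN id TYPE BIGINT;   -- the rewrite that previously ran out of space
ALTER TABLE tbl SET TABLESPACE pg_default;     -- move it back
DROP TABLESPACE new_tbl_space;                 -- same effect as removing it in pgAdmin4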

Edit: This has the side effect of rewriting the entire table, like a VACUUM FULL, freeing up any dead disk space.

mountainclimber11
  • @ScottMarlowe - If interested, my table size dropped. Trying to figure out why exactly and/or find a better strat: https://stackoverflow.com/questions/46674520/best-disk-saving-strategy-for-replacement-inserts – mountainclimber11 Oct 10 '17 at 19:33
  • The whole table has to be locked and rewritten when you do this, so likely it cleaned up a lot of dead tuples just like vacuum full or cluster would. – Scott Marlowe Oct 10 '17 at 19:47
  • @ScottMarlowe - Okay. Is there a way to reduce bloat like this without doing a vacuum full? Also, this is a surprise to me given what the pg docs say about not really needing vacuum full if you plan to write to the table again. – mountainclimber11 Oct 10 '17 at 19:50
  • Well, some amount of dead space is a good thing. PostgreSQL can update many rows in place if there's empty space in the block they're in, which is way faster than writing to a new block and updating all the indexes. This is called Heap Only Tuple updates and is a very good feature for performance. OTOH having a LOT of dead space makes it slower, as now the db has to trundle through all that extra space for nothing when scanning blocks. The only ways to reclaim it are vacuum full, alter table (requiring a rewrite), and cluster. All of which are blocking operations. – Scott Marlowe Oct 10 '17 at 19:53
  • 1
    What you're looking for with dead space is a reasonable equilibrium. Somewhere around 10 to 20% empty / dead space is a good number. So if you have a table that's say 100MB with no dead space and it bloats up to 120 to 150MB then there's enough dead space for updates, but not so much it hurts. But if it's measuring 500MB then there's too much, and vacuum isn't keeping up. There are vacuum_autovacuum_ settings you can adjust to make autovacuum more aggressive. Specifically set vacuum_autovacuum_cost_delay lower, from 20ms to 3 or 4ms. Makes a big difference. Be careful not to flood your IO tho – Scott Marlowe Oct 10 '17 at 19:57
  • @ScottMarlowe - in the end, a `vacuum full` was unavoidable. No matter how many `autovacuum`s or manual `vacuum`s I did, and regardless of timing, a `vacuum full` still reduced disk usage by about 35%. – mountainclimber11 Oct 21 '17 at 11:00
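
For reference, the knob Scott mentions can be set like this (a sketch; the 4ms value is his example, and the per-table form is an alternative that should also work):

-- cluster-wide (needs a superuser and a config reload):
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '4ms';
SELECT pg_reload_conf();

-- or just for the one bloated table:
ALTER TABLE tbl SET (autovacuum_vacuum_cost_delay = 4);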