I have a question regarding Postgres autovacuum / vacuum settings. I have a table with 4.5 billion rows, and there was a period of time with a lot of updates, resulting in ~1.5 billion dead tuples. At that point autovacuum was taking a long time (days) to complete. When looking at the pg_stat_progress_vacuum view, I noticed that:

max_dead_tuples = 178956970

which resulted in multiple index rescans (index_vacuum_count).
According to the docs, max_dead_tuples is the number of dead tuples that can be stored before an index vacuum cycle is needed, based on maintenance_work_mem.
According to this, one dead tuple requires 6 bytes of space.
So 6 B × 178956970 ≈ 1 GB.
But my settings are

maintenance_work_mem = 20GB
autovacuum_work_mem = -1

So what am I missing? Why didn't all 1.5 billion dead tuples fit in max_dead_tuples, since 20GB should give enough space, and why were multiple index vacuum cycles necessary?

Rio

1 Answer

There is a hard-coded limit of 1GB on the memory used to store dead tuple pointers in one VACUUM cycle, regardless of how high you set maintenance_work_mem or autovacuum_work_mem. See the source:

/*
 * Return the maximum number of dead tuples we can record.
 */
static long
compute_max_dead_tuples(BlockNumber relblocks, bool useindex)
{
    long        maxtuples;
    int         vac_work_mem = IsAutoVacuumWorkerProcess() &&
    autovacuum_work_mem != -1 ?
    autovacuum_work_mem : maintenance_work_mem;

    if (useindex)
    {
        maxtuples = MAXDEADTUPLES(vac_work_mem * 1024L);
        maxtuples = Min(maxtuples, INT_MAX);
        maxtuples = Min(maxtuples, MAXDEADTUPLES(MaxAllocSize));

        /* curious coding here to ensure the multiplication can't overflow */
        if ((BlockNumber) (maxtuples / LAZY_ALLOC_TUPLES) > relblocks)
            maxtuples = relblocks * LAZY_ALLOC_TUPLES;

        /* stay sane if small maintenance_work_mem */
        maxtuples = Max(maxtuples, MaxHeapTuplesPerPage);
    }
    else
        maxtuples = MaxHeapTuplesPerPage;

    return maxtuples;
}

MaxAllocSize is defined in src/include/utils/memutils.h as

#define MaxAllocSize   ((Size) 0x3fffffff) /* 1 gigabyte - 1 */

You could lobby on the pgsql-hackers list to increase the limit.

Laurenz Albe