Re: trap instead of error on 32 TiB table

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Christoph Berg <christoph(dot)berg(at)credativ(dot)de>
Cc: PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: trap instead of error on 32 TiB table
Date: 2021-09-09 13:44:55
Message-ID: 3568228.1631195095@sss.pgh.pa.us
Lists: pgsql-hackers

Christoph Berg <christoph(dot)berg(at)credativ(dot)de> writes:
> I was wondering what happens when the 32 TiB per table limit is
> reached, so I faked 32767 1 GiB sparse files using dd and then tried
> inserting more rows.

> On a cassert-enabled build I got:

> TRAP: FailedAssertion("tagPtr->blockNum != P_NEW", File: "./build/../src/backend/storage/buffer/buf_table.c", Line: 125)

Can you provide a stack trace from that?

(or else a recipe for reproducing the bug ... I'm not excited
about reverse-engineering the details of the method)

regards, tom lane
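
For readers unfamiliar with the setup Christoph describes, a rough sketch of that kind of sparse-file approach might look like the commands below. This is only an illustration, not his exact recipe (the thread does not give one); the database name, table name, and the base/<dboid>/<relfilenode> path are placeholders, so check pg_relation_filepath() on your own cluster.

    # Sketch only -- placeholders throughout, not the recipe from the thread.
    createdb trapdb
    psql trapdb -c "CREATE TABLE big (filler text)"
    psql trapdb -Atc "SELECT pg_relation_filepath('big')"
    # e.g. prints: base/16384/16385

    pg_ctl -D "$PGDATA" stop

    # Fake segments .1 .. .32767 as 1 GiB sparse files.  With the default
    # 8 kB block size and 1 GiB segment size, the relation then appears to
    # be at the 32 TiB limit already.
    for i in $(seq 1 32767); do
        dd if=/dev/zero of="$PGDATA/base/16384/16385.$i" bs=1 count=0 seek=1G 2>/dev/null
    done

    pg_ctl -D "$PGDATA" start

    # Extending the relation past the last valid block number should then
    # hit the reported assertion on a cassert-enabled build.
    psql trapdb -c "INSERT INTO big SELECT repeat('x', 1000) FROM generate_series(1, 100000)"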
