From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: nunks <nunks(dot)lol(at)gmail(dot)com>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: [pgsql-admin] "Soft-hitting" the 1600 column limit
Date: 2018-06-06 16:51:15
Message-ID: CAKFQuwbuurPFxQ=ts2znAC-STpF9AW7C91qKXsGThWQr8u4Faw@mail.gmail.com
Lists: pgsql-admin
On Wed, Jun 6, 2018 at 9:39 AM, nunks <nunks(dot)lol(at)gmail(dot)com> wrote:
> I reproduced this behavior in PostgreSQL 10.3 with a simple bash loop and
> a two-column table, one of which is fixed and the other is repeatedly
> dropped and re-created until the 1600 limit is reached.
>
> To me this is pretty cool, since I can use this limit as leverage to push
> the developers to the right path, but should Postgres be doing that? It's
> as if it doesn't decrement some counter when a column is dropped.
>
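For concreteness, a reproduction along those lines might look roughly like the sketch below (written as a plpgsql DO block rather than a bash loop; the table and column names are just placeholders):

    -- Illustrative only: churn one column until the column limit is hit,
    -- even though the table never has more than two live columns.
    CREATE TABLE churn_test (id int, payload text);

    DO $$
    BEGIN
        FOR i IN 1..1700 LOOP
            EXECUTE 'ALTER TABLE churn_test DROP COLUMN payload';
            -- Eventually fails with "tables can have at most 1600 columns".
            EXECUTE 'ALTER TABLE churn_test ADD COLUMN payload text';
        END LOOP;
    END
    $$;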
This is working as expected. When dropping a column, or adding a new
column that can contain nulls, PostgreSQL does not, and does not want to,
rewrite the physically stored records/table. It must therefore remain
capable of accepting records formed under prior versions of the table,
which means it must keep track of those now-dropped columns.
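Those retained entries are visible in pg_attribute: each dropped column stays behind as a row marked attisdropped, still occupying its attnum slot and still counting toward the limit. For example (the table name is again just a placeholder):

    SELECT attnum, attname, attisdropped
    FROM pg_attribute
    WHERE attrelid = 'churn_test'::regclass
      AND attnum > 0
    ORDER BY attnum;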
I'm sure there is more to it that would require reading and understanding
the source code to fully explain; but that does seem to be why it works
the way it does.
David J.