From: nunks <nunks(dot)lol(at)gmail(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: [pgsql-admin] "Soft-hitting" the 1600 column limit
Date: 2018-06-06 16:39:49
Message-ID: CACq6szQbQDG6_mThyH32=6J0e99-KdqOrhZMm9QwvGoVFzNbBw@mail.gmail.com
Lists: pgsql-admin
Hello!
I'm trying to support an application in production at work, and for some
obscure reason the developer made it drop and re-create a column
periodically.
I know this is bad practice (to say the least), and I'm telling them to
fix it, but after the 1600th drop/add cycle PostgreSQL starts raising the
column-limit error:

    ERROR: tables can have at most 1600 columns
I reproduced this behavior in PostgreSQL 10.3 with a simple bash loop and a
two-column table: one column stays fixed, and the other is repeatedly
dropped and re-created until the 1600 limit is hit.
To me this is pretty cool, since I can use this limit as leverage to push
the developers to the right path, but should Postgres be doing that? It's
as if it doesn't decrement some counter when a column is dropped.
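As far as I can tell, that is roughly what happens: a dropped column's row stays in pg_attribute flagged attisdropped = true, and its attnum slot is never reused, so the 1600 ceiling still applies. A hedged way to see the leftovers, assuming a table named col_limit_test like in my repro:

```shell
# Dropped columns should show up here with attisdropped = t.
catalog_query="SELECT attname, attnum, attisdropped
FROM pg_attribute
WHERE attrelid = 'col_limit_test'::regclass AND attnum > 0
ORDER BY attnum;"
# psql -X -c "$catalog_query"   # run against the affected database
```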
Many thanks!
Bruno
----------
“Life beats down and crushes the soul and art reminds you that you have one.”
- Stella Adler