Jasen Betts <jasen(at)xnet(dot)co(dot)nz> writes:
> On 2011-05-01, Mark Morgan Lloyd <markMLl(dot)pgsql-general(at)telemetry(dot)co(dot)uk> wrote:
>> Somebody is making a very specific claim that Postgres can support a
>> limited number of rows:
>> "INPS (a data forensics team) said that there is 7 main Databases all
>> hosted at different data centers but linked over a type of 'cloud' Each
>> database uses PostGRESSQL which would mean the most amount of data each
>> database could hold with no stability issues is aproximitely equal to
>> that of 10,348,439 Rows" http://pastebin.com/MtX1MDdh
>> Does anybody have any idea where they've got hold of this figure?
> the figure is within 1% of the maximum size for data stored in text
> (or bytea) column.
No it isn't; the max size per field is 1GB. Although actually
manipulating such field values will probably not work very well unless
you have a 64-bit machine; otherwise you'll hit address-space issues.
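(As a rough illustration, not something from the original claim: you can
poke at the 1GB-per-datum ceiling directly from psql. The byte counts
below are made-up round numbers; exactly where the second query fails
depends on header overhead and on how much memory the backend can
actually allocate, particularly on a 32-bit build.)

    -- just under the 1GB per-field limit: should succeed, memory permitting
    SELECT octet_length(repeat('x', 1073741800));
    -- past the limit: expect an allocation error, not a result
    SELECT octet_length(repeat('x', 1100000000));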
I could believe that a specific application using specific fields in
a specific way in a 32-bit machine might start to hit "out of memory"
errors for field widths somewhere in the tens-of-MB range. But the
stated claim is about number of rows, not row width, and the exactness
and breadth of the claim are, well, ridiculous on their face.
I think INPS's level of knowledge about PG must be about as good as
their ability to spell it :-(
BTW, there *is* a hard limit of 32TB per table, arising from the limited
size of BlockNumber. But it's hard to believe that INPS's claim has
anything to do with that.
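For the curious, the arithmetic behind that limit is simple: BlockNumber
is a 32-bit unsigned integer, so with the default 8KB block size a single
table tops out at roughly 2^32 * 8192 bytes. A quick sanity check in psql
(assuming the default block size; a nonstandard BLCKSZ shifts the limit):

    -- 2^32 possible block numbers times the default 8192-byte block size
    SELECT pg_size_pretty(4294967296 * 8192);  -- reports 32 TB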
regards, tom lane