>> Why didn't they just turn off compression for the relevant columns?
> They did --- with the pre-8.4 code, they had no choice, because the
> toast compressor would kick in if it could save even one byte on the
> total field size. That's clearly silly. We might have gone too far
> in the other direction with the current settings, but the point is
> that compression isn't always a good thing.
I agree with all of that. It seems to me that categorically refusing
to compress anything over 1MB, as Alex seems to think the current
settings are doing, is clearly silly in the opposite direction. What
we want to avoid is trying to compress data that's already been
compressed - the early-failure path you added seems like the right
general idea, though perhaps a bit too simplistic. But the size of
the data is not evidence of its compressibility, so I'm unclear why
we think that's relevant. It could also lead to some awfully strange
behavior if you have, say, a table with highly compressible data whose
rows are gradually updated with longer values over time. When they
hit 1MB, the storage requirements of the database will suddenly
balloon for no reason that will be obvious to the DBA.
> One point that nobody seems to have focused on is whether Alex's
> less-compressed table is faster or slower to access than the original.
> I dunno if he has any easy way of investigating that for his typical
> query mix, but it's certainly a fair question to ask.
Sure, but that's largely an orthogonal issue. Compression is
generally bad for performance, though there are certainly exceptions.
What it is good for is saving disk space, and that is why people use
it. If that's not why we're using it, then I'm puzzled.