On Tue, Jan 8, 2013 at 10:20 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, Jan 8, 2013 at 4:04 AM, Takeshi Yamamuro
> <yamamuro(dot)takeshi(at)lab(dot)ntt(dot)co(dot)jp> wrote:
>> Apart from my patch, my concern is that the current compression
>> can be much slower than the I/O it feeds. For example, when
>> compressing and writing large values, compression throughput
>> (20-40 MiB/s) can lag behind disk write throughput (50-80 MiB/s).
>> Moreover, IMHO modern (and very fast) I/O subsystems such as
>> SSDs make this an even bigger issue.
> What about just turning compression off?
I've been relying on compression for some big serialized blob fields
for some time now. I bet I'm not alone; lots of people save serialized
data to text fields. So rather than removing it, I'd just change the
default to off (if that were the decision).
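For reference, compression can already be disabled per column today through the storage mode, without touching any default; a minimal sketch, with the table and column names hypothetical:

```sql
-- EXTENDED (the default) allows both compression and out-of-line TOAST
-- storage; EXTERNAL keeps out-of-line storage but disables compression.
ALTER TABLE blobs ALTER COLUMN payload SET STORAGE EXTERNAL;
```

Note this only affects newly stored values; existing rows keep whatever representation they already have.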
However, it might be best to evaluate some of the modern fast
compression schemes like snappy/lz4 (250 MB/s per core sounds pretty
good) and implement pluggable compression schemes instead. Snappy
wasn't designed for nothing; it was most likely built because it was
necessary. Cassandra (just to name a system I'm familiar with) started
without compression, and compression was later deemed necessary enough
that they invested considerable time in it. I've always found the fact
that pg does compression of toast tables quite forward-thinking, and
I'd say the feature has to remain there, extended and modernized,
maybe off by default, but there.
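The throughput argument quoted at the top reduces to a min() over the pipeline stages; here is a minimal numeric sketch (the function names are mine, and zlib stands in for pglz purely as an illustration, since their speeds differ):

```python
import time
import zlib

def pipeline_throughput(compress_mibs: float, disk_mibs: float, ratio: float) -> float:
    """Effective source-data rate (MiB/s) when compression feeds the disk.

    ratio = compressed size / original size; the disk only has to absorb
    the compressed stream, so its source-side rate is disk_mibs / ratio.
    """
    return min(compress_mibs, disk_mibs / ratio)

# With the numbers quoted in the thread: a 30 MiB/s compressor in front of a
# 65 MiB/s disk caps the pipeline at 30 MiB/s even at 2x compression, while
# skipping compression entirely would write at the full 65 MiB/s.
print(pipeline_throughput(30, 65, 0.5))   # slow compressor is the bottleneck
print(pipeline_throughput(250, 65, 0.5))  # snappy/lz4-class speed: disk limits

def measured_compress_mibs(data: bytes, level: int = 6) -> float:
    """Rough compression speed in MiB/s (zlib as a stand-in, not pglz)."""
    start = time.perf_counter()
    zlib.compress(data, level)
    return len(data) / (1024 * 1024) / (time.perf_counter() - start)

print(round(measured_compress_mibs(b"abc123" * 500_000), 1))
```

The second print shows the point of snappy/lz4: once the compressor is fast enough, the disk (divided by the compression ratio) becomes the limit, and compression speeds the pipeline up instead of dragging it down.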