From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of full-page-writes
Date: 2014-12-08 19:21:52
Message-ID: 20141208192152.GB24437@alap3.anarazel.de
Lists: pgsql-hackers

On 2014-12-08 14:09:19 -0500, Robert Haas wrote:
> > records, just fpis. There is no evidence that we even want to compress
> > other record types, nor that our compression mechanism is effective at
> > doing so. Simple => keep name as compress_full_page_writes
>
> Quite right.

I don't really agree with this. There's lots of records which can be
quite big where compression could help a fair bit. Most prominently
HEAP2_MULTI_INSERT + INIT_PAGE. During initial COPY that's the biggest
chunk of WAL. And these are big and repetitive enough that compression
is very likely to be beneficial.

I still think that just compressing the whole record if it's above a
certain size is going to be better than compressing individual
parts. Michael argued that that'd be complicated because of the varying
size of the required 'scratch space'. I don't buy that argument
though. It's easy enough to simply compress all the data in some fixed
chunk size, i.e. always compress 64kB in one go. If there's more,
compress that independently.
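To illustrate, a minimal sketch of that fixed-chunk scheme (not actual
PostgreSQL code): compress_chunk() below is just a stand-in identity
"compressor" so the chunking logic runs stand-alone; a real
implementation would call pglz or similar. The point is that the
scratch buffer stays a fixed CHUNK_SIZE regardless of record size:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define CHUNK_SIZE (64 * 1024)      /* fixed compression granularity */

/*
 * Stand-in for a real compressor such as pglz; it simply copies the
 * input so the example is self-contained. Returns bytes written.
 */
static size_t
compress_chunk(const char *src, size_t len, char *dest)
{
    memcpy(dest, src, len);
    return len;
}

/*
 * Compress an arbitrarily large record using a fixed-size scratch
 * buffer: each CHUNK_SIZE slice is compressed independently, so the
 * scratch space never needs to grow with the record.
 */
static size_t
compress_record(const char *rec, size_t reclen,
                char *out, char *scratch /* CHUNK_SIZE bytes */)
{
    size_t written = 0;

    while (reclen > 0)
    {
        size_t thischunk = reclen < CHUNK_SIZE ? reclen : CHUNK_SIZE;
        size_t clen = compress_chunk(rec, thischunk, scratch);

        memcpy(out + written, scratch, clen);
        written += clen;
        rec += thischunk;
        reclen -= thischunk;
    }
    return written;
}
```

A 150kB record would thus be handled as two 64kB chunks plus one
smaller trailing chunk, each compressed on its own.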

Greetings,

Andres Freund

--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
