From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: alternative compression algorithms?
Date: 2015-04-20 13:03:15
Message-ID: 5534F913.2040404@2ndquadrant.com
Lists: pgsql-hackers
On 04/20/15 05:07, Andres Freund wrote:
> Hi,
>
> On 2015-04-19 22:51:53 +0200, Tomas Vondra wrote:
>> The reason why I'm asking about this is the multivariate statistics patch -
>> while optimizing the planning overhead, I realized that considerable amount
>> of time is spent decompressing the statistics (serialized as bytea), and
>> using an algorithm with better decompression performance (lz4 comes to mind)
>> would help a lot. The statistics may be a few tens/hundreds kB, and in the
>> planner every millisecond counts.
>
> I've a hard time believing that a different compression algorithm is
> going to be the solution for that. Decompressing, quite possibly
> repeatedly, several hundreds of kb during planning isn't going to be
> fun. Even with a good, fast, compression algorithm.
Sure, it's not an ultimate solution, but it might help a bit. I do have
other ideas for optimizing this, but in the planner every millisecond
counts. Looking at 'perf top', I see pglz_decompress() among the top 3
functions.
regards
Tomas