From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Jaime Casanova <jaime(dot)casanova(at)2ndquadrant(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Patch: Write Amplification Reduction Method (WARM)
Date: 2017-03-21 12:04:11
Message-ID: CA+TgmoYNS3SNjk5DuFZw4E2POSpk+FjmtVqDu9-tOttS1FPgyw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Mar 21, 2017 at 6:56 AM, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
>> Hmm, that test case isn't all that synthetic. It's just a single
>> column bulk update, which isn't anything all that crazy, and 5-10%
>> isn't nothing.
>>
>> I'm kinda surprised it made that much difference, though.
>>
>
> I think it is because heap_getattr() is not that cheap. We noticed a
> similar problem during development of the scan key push-down work [1].
Yeah. So what's the deal with this? Is somebody working on figuring
out a different approach that would reduce this overhead? Are we
going to defer WARM to v11? Or is the intent to just ignore the 5-10%
slowdown on a single column update and commit everything anyway? (A
strong -1 on that course of action from me.)
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company