Re: Checksums by default?

From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Stephen Frost <sfrost(at)snowman(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Petr Jelinek <petr(dot)jelinek(at)2ndquadrant(dot)com>, Magnus Hagander <magnus(at)hagander(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Checksums by default?
Date: 2017-01-23 08:57:44
Message-ID: CAA4eK1KMASdWM_RW6wkE85a4+sxXf=x8BRkbGiAsPcVK3z=DyQ@mail.gmail.com
Lists: pgsql-hackers

On Mon, Jan 23, 2017 at 1:18 PM, Tomas Vondra
<tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> On 01/23/2017 08:30 AM, Amit Kapila wrote:
>>
>>
>> I think if we can get data for a pgbench read-write workload where the
>> data doesn't fit in shared buffers but fits in RAM, that can give us
>> some indication. We can try varying the ratio of shared buffers
>> w.r.t. the data size. This should exercise the checksum code both when
>> buffers are evicted and at the next read. I think it also makes sense
>> to check the WAL data size for each of those runs.
>>
>
> Yes, I'm thinking that's pretty much the worst case for an OLTP-like
> workload, because it has to evict buffers from shared buffers, generating
> a continuous stream of writes. Doing that on good storage (e.g. a PCIe
> SSD, or possibly tmpfs) will further limit the storage overhead, making
> the time spent computing checksums much more significant. Makes sense?
>

Yeah, I think that can be helpful with respect to WAL, but for the data
itself, if we are considering the case where everything fits in RAM,
faster storage might or might not help.
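
For concreteness, here is a rough sketch of the kind of run being
discussed. The scale factor, shared_buffers values, client count, and
run length below are placeholder picks of mine, not values anyone
proposed in this thread; the WAL functions use the 9.x names
(pg_current_wal_lsn()/pg_wal_lsn_diff() from v10 on), and PGDATA is
assumed to be set:

    # Sketch: run this once against a cluster initdb'd with
    # --data-checksums and once against one without, then compare
    # tps and WAL volume.

    createdb bench
    pgbench -i -s 1000 bench          # ~15 GB of data; fits in RAM here

    for sb in 1GB 2GB 4GB 8GB; do     # vary shared_buffers w.r.t. data size
        pg_ctl restart -w -o "-c shared_buffers=$sb"

        # WAL position before the run
        start=$(psql -Atc "SELECT pg_current_xlog_location()" bench)

        pgbench -c 16 -j 8 -T 600 -M prepared bench   # read-write run

        stop=$(psql -Atc "SELECT pg_current_xlog_location()" bench)
        psql -Atc "SELECT pg_size_pretty(
                     pg_xlog_location_diff('$stop', '$start'))" bench
    done

The WAL comparison is interesting because enabling checksums also forces
hint-bit changes to be WAL-logged (as with wal_log_hints=on), so the
checksummed runs can produce noticeably more WAL than the tps difference
alone would suggest.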

--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
