Re: Compression of full-page-writes

From: KONDO Mitsumasa <kondo(dot)mitsumasa(at)lab(dot)ntt(dot)co(dot)jp>
To: Haribabu kommi <haribabu(dot)kommi(at)huawei(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Fujii Masao <masao(dot)fujii(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Compression of full-page-writes
Date: 2013-10-08 09:51:34
Message-ID: 5253D5A6.7080409@lab.ntt.co.jp
Lists: pgsql-hackers

(2013/10/08 17:33), Haribabu kommi wrote:
> The checkpoint_timeout and checkpoint_segments are increased to make sure no checkpoint happens during the test run.
With checkpoint_segments = 256, your setting can still easily trigger a checkpoint during the run. I don't
know how many disks your test server has, but on my test server, which has 4 magnetic
disks (1.5k rpm), postgres generates 50 - 100 WAL segments per minute.
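
One way to check whether a checkpoint actually occurred during a run is to compare the checkpoint counters in the pg_stat_bgwriter view before and after the benchmark (a sketch; column names as of the PostgreSQL 9.x series):

```sql
-- Run once before and once after the benchmark;
-- if either counter increased, a checkpoint happened during the test run.
SELECT checkpoints_timed, checkpoints_req FROM pg_stat_bgwriter;
```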

I also cannot understand your setting of synchronous_commit = off. This setting
tends to cause a CPU bottleneck and risks data loss, and it is not typical in production
database usage. Therefore, your test is not a fair comparison for Fujii's patch.
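
A rough sketch of settings that keep checkpoints out of the measurement window while keeping the default durable commit (values are illustrative assumptions, not recommendations):

```
# postgresql.conf (sketch; size checkpoint_segments to your WAL rate * run length)
checkpoint_timeout  = 60min   # longer than the benchmark run
checkpoint_segments = 1024    # pre-9.5 setting; large enough to avoid forced checkpoints
synchronous_commit  = on      # the default; keeps the comparison fair
full_page_writes    = on      # the feature the patch compresses
```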

Going back to my DBT-2 benchmark, I have not gotten good performance (it is almost the
same). So I am now checking the hunks, my settings, or whether something is wrong in Fujii's
patch. I will try to send the test results tonight.

Regards,
--
Mitsumasa KONDO
NTT Open Source Software Center
