Re: batch write of dirty buffers

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: batch write of dirty buffers
Date: 2004-06-20 13:21:52
Message-ID: 27073.1087737712@sss.pgh.pa.us
Lists: pgsql-hackers

"Qingqing Zhou" <zhouqq(at)cs(dot)toronto(dot)edu> writes:
> In checkpoint and background writer, we flush out dirty buffer pages one
> page at a time. Is it possible to do this in batch mode? That is, try to
> find the contiguous pages (same tblNode, relNode, adjacent blockNum), then
> write them together?

What for? The kernel will have its own ideas about scheduling the
physical writes, anyway. We are not flushing anything directly to disk
here; we are just pushing pages out to kernel buffers.

> There are two other points that may need attention. One is the function
> StartBufferIO(), which asserts InProgressBuf, i.e., we can only do one
> page write at a time. I am not quite sure of the consequences of removing
> this variable. The other is that since we will acquire many locks on the
> buffer pages, we may have to increase MAX_SIMUL_LWLOCKS. This should not
> be a problem.

If the bgwriter tries to lock more than one shared buffer at a time,
you will inevitably get deadlocks. I don't actually see the point
of doing that anyway, even assuming that it's worth trying to do the
writes in block-number order. It would hardly ever be the case that
successive pages would be located in adjacent shared buffers, and so
you'd almost always end up issuing separate write commands anyway.

regards, tom lane
