On Tue, Jun 29, 2010 at 10:03 PM, Fujii Masao <masao(dot)fujii(at)gmail(dot)com> wrote:
> This is true. But what I'm concerned about is:
> 1. Backend writes and fsyncs the WAL to the disk
> 2. The WAL on the disk gets corrupted
> 3. Walsender reads and sends that corrupted WAL image
> 4. The master crashes because of the disk corruption
> 5. The standby attempts to replay the corrupted WAL... PANIC
That sounds like behavior as designed to me.
>> Well, if we want to leave it up to the user/clusterware, the current
>> code is possibly adequate, although there are many different log
>> messages that could signal this situation, so coding it up might not
>> be too trivial.
> So the current code + user-settable-retry-count seems good to me.
> If the retry-count is set to 0, we will not see the repeated log
> messages. And we might need to provide the parameter specifying
> how the standby should behave after exceeding the retry-count:
> PANIC or stay-alive-without-retries.
> Choosing PANIC and using the retry-count = 5 would cover your proposed
> behavior.
I'm still having a hard time understanding why anyone would want to
configure this value as infinity.
The Enterprise Postgres Company