Re: log chunking broken with large queries under load

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: log chunking broken with large queries under load
Date: 2012-04-02 16:29:32
Message-ID: 4F79D3EC.6020207@dunslane.net
Lists: pgsql-hackers

On 04/02/2012 12:00 PM, Tom Lane wrote:
> Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
>> On 04/01/2012 06:34 PM, Andrew Dunstan wrote:
>>> Some of my PostgreSQL Experts colleagues have been complaining to me
>>> that servers under load with very large queries produce corrupted CSV
>>> log files,
>> We could just increase CHUNK_SLOTS in syslogger.c, but I opted instead
>> to stripe the slots with a two dimensional array, so we didn't have to
>> search a larger number of slots for any given message. See the attached
>> patch.
> This seems like it isn't actually fixing the problem, only pushing out
> the onset of trouble a bit. Should we not replace the fixed-size array
> with a dynamic data structure?
>
>

"A bit" = 10 to 20 times - more if we set CHUNK_STRIPES higher. :-)

But maybe you're right. If we do that and stick with my two-dimensional
scheme to keep the number of probes per chunk down, we'd need to reorg
the array every time we increased it. That might be a bit messy, but
might be ok. Or maybe linearly searching an array of several hundred
slots for our pid for every log chunk that comes in would be fast enough.

cheers

andrew
