Re: literature on write-ahead logging

From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: literature on write-ahead logging
Date: 2011-06-09 15:13:44
Message-ID: 1307631773-sup-7045@alvh.no-ip.org
Lists: pgsql-hackers

Excerpts from Robert Haas's message of Thu Jun 09 10:55:45 -0400 2011:
> On Thu, Jun 9, 2011 at 10:34 AM, Alvaro Herrera
> <alvherre(at)commandprompt(dot)com> wrote:

> > Slower than sleeping?  Consider that this doesn't need to be done for
> > each record insertion, only when you need to flush (maybe more than
> > that, but I think that's the lower limit).
>
> Maybe. I'm worried that if someone jacks up max_connections to 1000
> or 5000 or somesuch it could get pretty slow.

Well, other things are going to get pretty slow as well, not just this
one, which is why we suggest using a connection pooler with a reasonable
limit.

On the other hand, maybe those are things we ought to address sometime,
so perhaps we don't want to be designing the old limitation into a new
feature.

A possibly crazy idea: instead of having a MaxBackends-sized array, how
about a smaller array of slots (a couple dozen or so) for the backends
updating the insert-done pointer; if it's full, the next backend has to
sleep a bit until a slot becomes available. We could protect this with a
PGSemaphore initialized with as many counts as there are slots in the
array.
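
To make that concrete, here is a minimal standalone sketch of the
counting-semaphore scheme. It uses a POSIX semaphore and threads in
place of PGSemaphore and real backends, and the slot count, backend
count, and variable names are all made up for illustration; it is not a
patch against the actual WAL insert code.

/*
 * Sketch: a counting semaphore initialized to the number of slots bounds
 * how many "backends" can be updating the shared insert-done pointer at
 * once; any others block in sem_wait() until a slot frees up.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NUM_SLOTS 24            /* "a couple dozen or so" */
#define NUM_BACKENDS 100        /* many more backends than slots */

static sem_t slot_sem;          /* counting semaphore with NUM_SLOTS counts */
static pthread_mutex_t ptr_lock = PTHREAD_MUTEX_INITIALIZER;
static unsigned long insert_done_ptr = 0;   /* stand-in for the WAL pointer */

static void *
backend(void *arg)
{
    /* Sleep here if all NUM_SLOTS slots are currently taken. */
    sem_wait(&slot_sem);

    /* Update the shared pointer while holding a slot. */
    pthread_mutex_lock(&ptr_lock);
    insert_done_ptr += 1;
    pthread_mutex_unlock(&ptr_lock);

    /* Release the slot so a sleeping backend can proceed. */
    sem_post(&slot_sem);
    return NULL;
}

int
main(void)
{
    pthread_t tids[NUM_BACKENDS];

    sem_init(&slot_sem, 0, NUM_SLOTS);

    for (int i = 0; i < NUM_BACKENDS; i++)
        pthread_create(&tids[i], NULL, backend, NULL);
    for (int i = 0; i < NUM_BACKENDS; i++)
        pthread_join(tids[i], NULL);

    printf("insert_done_ptr = %lu\n", insert_done_ptr);
    sem_destroy(&slot_sem);
    return 0;
}

The point is just that the semaphore's initial count, not
max_connections, bounds how many backends can be in the update path at
the same time.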

--
Álvaro Herrera <alvherre(at)commandprompt(dot)com>
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support
