Re: Listen / Notify - what to do when the queue is full

From: Joachim Wieland <joe(at)mcknight(dot)de>
To: Jeff Davis <pgsql(at)j-davis(dot)com>
Cc: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "Florian G(dot) Pflug" <fgp(at)phlo(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Listen / Notify - what to do when the queue is full
Date: 2009-11-30 13:14:17
Message-ID: dc7b844e0911300514i16c94641w147aea95c0096b31@mail.gmail.com
Lists: pgsql-hackers

Hi Jeff,

The current patch suffers from the problem Heikki recently spotted: if
one backend is putting notifications into the queue and meanwhile
another backend executes LISTEN and commits, then that listening
backend has committed before the notifying backend and is supposed to
receive its notifications - even though the listening transaction
started later.
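
To make the interleaving concrete, here is a minimal sketch of the
problematic ordering; the channel name "foo" and the session labels
are made up for illustration:

    /* session A */  BEGIN;
    /* session A */  NOTIFY foo;    -- the entry goes into the queue
    /* session B */  BEGIN;
    /* session B */  LISTEN foo;
    /* session B */  COMMIT;        -- B's LISTEN commits first
    /* session A */  COMMIT;        -- A commits after B

B committed its LISTEN before A committed the NOTIFY, so B has to
receive the notification, even though B's transaction only started
after the entry had already been put into the queue.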

I have a new version that deals with this problem but I need to clean
it up a bit. I am planning to post it this week.

On Mon, Nov 30, 2009 at 6:15 AM, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
>  * Why don't we read all notifications into backend-local memory at
> every opportunity? It looks like sometimes it's only reading the
> committed ones, and I don't see the advantage of leaving it in the SLRU.

Exactly because of the problem above we cannot do that. Once a
notification has been removed from the queue, a backend that executes
LISTEN afterwards has no way to get at it anymore. Also, we would need
to read _all_ notifications, not only the committed ones, because we
don't know what our backend will LISTEN to in the future.

On the other hand, reading uncommitted notifications guarantees that
we can send an almost unlimited number of notifications (bounded only
by the available main memory) and that we don't run into a full queue
in this example:

Queue length: 1000
3 notifying backends, 400 notifications to be sent by each backend.

If all of them send their notifications at the same time, we risk
that all three run into a full queue: 3 * 400 = 1200 entries do not
fit into a queue of length 1000, and the uncommitted entries cannot
be removed from the queue before their transactions commit.

We could still preserve that behavior, at the cost of allowing LISTEN
to block until the queue is within its limits again.

>  * When the queue is full, the inserter tries to signal the listening
> backends, and tries to make room in the queue.
>  * Backends read the notifications when signaled, or when inserting (in
> case the inserting backend is also the one preventing the queue from
> shrinking).

Exactly, but it doesn't solve the problem described above. :-(

ISTM that we have two options:

a) allow LISTEN to block if the queue is full - NOTIFY will never fail
(but block as well) and will eventually succeed
b) NOTIFY could fail and make the transaction roll back - LISTEN
always succeeds immediately
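
From the client's point of view, the two options would look roughly
like this; the channel name "foo" is again made up, and the error text
under b) is hypothetical, not an actual server message:

    -- a) LISTEN blocks while the queue is over its limit and only
    --    returns once enough of the queue has been read again:
    LISTEN foo;

    -- b) NOTIFY fails when no queue slot is free and the surrounding
    --    transaction has to be rolled back:
    BEGIN;
    NOTIFY foo;    -- ERROR: notification queue is full  (hypothetical)
    ROLLBACK;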

Again: This is corner-case behavior and only happens after some
hundreds of gigabytes of notifications have been put into the queue and
have not yet been processed by all listening backends. I like a)
better, but b) is easier to implement...

> I haven't looked at everything yet, but this seems like it's in
> reasonable shape from a high level. Joachim, can you clean the patch up,
> include docs, and fix the tests? If so, I'll do a full review.

As soon as everybody is fine with the approach, I will work on the docs patch.

Joachim
