Re: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..?

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "Steve Midgley" <public(at)misuse(dot)org>
Cc: pgsql-sql-owner(at)postgresql(dot)org, pgsql-sql(at)postgresql(dot)org
Subject: Re: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..?
Date: 2008-01-09 19:09:20
Message-ID: dcc563d10801091109kf80ce2dh97b8e4a548f5168f@mail.gmail.com
Lists: pgsql-sql

On Jan 9, 2008 12:20 PM, Steve Midgley <public(at)misuse(dot)org> wrote:
> This is kludgy but you would have some kind of random number test at
> the start of the trigger - if it evals true once per every ten calls to
> the trigger (say), you'd cut your delete statements execs by about 10x
> and still periodically truncate every set of user rows fairly often. On
> average you'd have ~55 rows per user, never less than 50 and a few
> outliers with 60 or 70 rows before they get trimmed back down to 50..
> Seems more reliable than a cron job, and solves your problem of an ever
> growing table? You could easily adjust the random number test if you
> change your mind about the balance between table size and # of delete
> statements down the road.

And, if you always throw a LIMIT 50 on the end of queries that
retrieve data, you could let it grow quite a bit more than 60 or 70...
say, 200. Then you could have the random chopper function kick in
only every 100th or so call.
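A minimal sketch of that combination as a PL/pgSQL trigger. All names here (user_logs, id, user_id, created_at, the 1-in-100 probability, and the 50-row cap) are made-up illustrations, not something from the original thread:

```sql
-- Assumed table: user_logs(id serial primary key, user_id int, created_at timestamp).
CREATE OR REPLACE FUNCTION trim_user_rows() RETURNS trigger AS $$
BEGIN
    -- Run the expensive DELETE only on roughly 1 in 100 inserts.
    IF random() < 0.01 THEN
        DELETE FROM user_logs
        WHERE user_id = NEW.user_id
          AND id NOT IN (
              -- Keep the 50 newest rows for this user.
              SELECT id FROM user_logs
              WHERE user_id = NEW.user_id
              ORDER BY created_at DESC
              LIMIT 50
          );
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trim_user_rows_trg
    AFTER INSERT ON user_logs
    FOR EACH ROW EXECUTE PROCEDURE trim_user_rows();
```

Readers then stay bounded no matter how far a group has grown between trims, as long as every fetch ends with ORDER BY created_at DESC LIMIT 50.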
