Re: Fast data, slow data

From: Marti Raudsepp <marti(at)juffo(dot)org>
To: Shaun Thomas <sthomas(at)optionshouse(dot)com>
Cc: Tim Uckun <timuckun(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Fast data, slow data
Date: 2014-06-26 15:47:47
Message-ID: CABRT9RCprPwRXiYrrqU8S=LOot2NPAUhHxTB=5_zHnn+MUx3eg@mail.gmail.com
Lists: pgsql-general

On Thu, Jun 26, 2014 at 5:49 PM, Shaun Thomas <sthomas(at)optionshouse(dot)com> wrote:
> Then you create a job that runs however often you want, and all that job
> does is move old rows from my_table to my_table_stable. Like so:
>
> BEGIN;
> INSERT INTO my_table_stable
> SELECT * FROM ONLY my_table
> WHERE date_col < now() - INTERVAL '15 minutes';
> DELETE FROM ONLY my_table
> WHERE date_col < now() - INTERVAL '15 minutes';
> COMMIT;

This deserves a caveat: in the default "read committed" isolation
level, this example can delete more rows than it inserts. Each
statement takes its own snapshot, so the DELETE can see (and remove)
rows that were committed after the INSERT ... SELECT took its
snapshot, and those rows are lost without ever being copied. Even if
your date_col never changes and you don't have long-running
transactions, it's best not to use this pattern. You could change the
isolation level using SET TRANSACTION, or, much better, use a
writable CTE (wCTE) to do the move atomically:

WITH deleted AS (
    DELETE FROM ONLY my_table
    WHERE date_col < now() - INTERVAL '15 minutes'
    RETURNING *
)
INSERT INTO my_table_stable
SELECT * FROM deleted;
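
For reference, the SET TRANSACTION variant mentioned above could look
something like this (an untested sketch; under REPEATABLE READ both
statements run against the same snapshot, so the DELETE cannot see
rows the INSERT missed):

BEGIN;
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
-- Both statements below share one snapshot, taken at the first
-- query, so the set of rows deleted is exactly the set copied.
INSERT INTO my_table_stable
SELECT * FROM ONLY my_table
WHERE date_col < now() - INTERVAL '15 minutes';
DELETE FROM ONLY my_table
WHERE date_col < now() - INTERVAL '15 minutes';
COMMIT;

The catch is that under REPEATABLE READ the DELETE can abort with a
serialization error if a concurrent transaction modifies one of the
target rows, so the job would have to retry on failure; the
single-statement wCTE above has no such failure mode at the default
isolation level.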

Regards,
Marti
