From: "codeWarrior" <gpatnude(at)hotmail(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Re: How to keep at-most N rows per group? periodic DELETEs or constraints or..?
Date: 2008-01-08 15:45:02
Message-ID: fm028v$106a$1@news.hub.org
Lists: pgsql-sql
How about using a trigger to call a stored procedure? [ON INSERT to
user_item_history DO ...]
Have your stored procedure count the records for that user and delete
the oldest record if necessary:
IF (SELECT COUNT(*) FROM user_item_history WHERE user_id = NEW.user_id) >= 50 THEN
    -- DELETE THE OLDEST RECORD....
END IF;
RETURN NEW;
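For concreteness, here is a rough sketch of what that trigger could look like in PL/pgSQL -- untested, and assuming the table layout Jamie described below (id as PK, a timestamp column), with hypothetical names for the function and trigger:

```sql
CREATE OR REPLACE FUNCTION trim_user_item_history() RETURNS trigger AS $$
BEGIN
    -- Keep only the 49 most recent rows for this user; after the
    -- INSERT that fired this trigger, the user will have at most 50.
    DELETE FROM user_item_history
    WHERE user_id = NEW.user_id
      AND id NOT IN (
          SELECT id FROM user_item_history
          WHERE user_id = NEW.user_id
          ORDER BY timestamp DESC
          LIMIT 49);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER user_item_history_trim
    BEFORE INSERT ON user_item_history
    FOR EACH ROW EXECUTE PROCEDURE trim_user_item_history();
```

Note the trigger only fires on new inserts, so users already over the limit would still need a one-time cleanup pass like the DELETE sketched below.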
"Jamie Tufnell" <diesql(at)googlemail(dot)com> wrote in message
news:b0a4f3350801080624k2088c96aq21c511873959aa87(at)mail(dot)gmail(dot)com(dot)(dot)(dot)
> Hi,
>
> I have a table that stores per-user histories of recently viewed items
> and I'd like to limit the amount of history items to <= 50 per user.
> I'm considering doing this with a query run from cron every so often
> but I'm not happy with what I've come up with so far, and since it's a
> quite active table I thought I'd ask here to see if there's a more
> efficient way.
>
> Right now the table structure is as follows...
>
> user_item_history: id (PK), user_id (FK), item_id (FK), timestamp
>
> For user_ids that have more than 50 rows, I want to keep the most
> recent 50 and delete the rest.
>
> The most obvious way of doing this for me is:
>
> --
> -- Get the user_ids with more than 50 history entries like this
> --
> SELECT user_id, count(*)
> FROM user_item_history
> GROUP BY user_id
> HAVING count(*) > 50;
>
> --
> -- Then iterate the ids above (_user_id)
> --
> DELETE FROM user_item_history
> WHERE user_id = _user_id AND id NOT IN (
> SELECT id FROM user_item_history
> WHERE user_id = _user_id
> ORDER BY timestamp DESC
> LIMIT 50);
>
> I've left out the simple logic tying the above two queries together
> for clarity.
>
> I haven't actually tested this but while I assume it would work I
> imagine there is a neater and possibly more efficient way of attacking
> this. I'm also open to different approaches of limiting the user's
> history too ... perhaps with table constraints so they can simply
> never exceed 50 entries? But I'm not sure how to do this..
>
> Any help would be greatly appreciated..
>
> Thanks,
> Jamie
>
> ---------------------------(end of broadcast)---------------------------
> TIP 9: In versions below 8.0, the planner will ignore your desire to
> choose an index scan if your joining column's datatypes do not
> match
>