From: Chris Travers <chris(dot)travers(at)gmail(dot)com>
To: Alban Hertroys <dalroi(at)solfertje(dot)student(dot)utwente(dot)nl>
Cc: Daniel Farina <daniel(at)heroku(dot)com>, David Johnston <polobo(at)yahoo(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Finding latest record for a number of groups in an INSERT-only table
Date: 2011-07-06 01:23:07
Message-ID: CAKt_Zfv0qcbdXxR7LpDuxKzy_thGaYNe9NR4Gjf1WPUdFHWrFg@mail.gmail.com
Lists: pgsql-general
It seems to me one solution is to alter your table topology by
partitioning the table on the keys you need to query on, and then
using simple aggregates. You'd have to set up ON INSERT DO INSTEAD
rules to route the inserts, and you might take a performance hit on
insert.
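As a rough sketch of that approach (the table and column names here
are hypothetical, and this uses the inheritance-based partitioning
available in PostgreSQL at the time):

```sql
-- Hypothetical insert-only table of readings, partitioned by sensor_id.
CREATE TABLE readings (
    sensor_id  int       NOT NULL,
    recorded   timestamp NOT NULL,
    value      numeric
);

-- One child table per group (or per range of groups).
CREATE TABLE readings_s1 (CHECK (sensor_id = 1)) INHERITS (readings);
CREATE TABLE readings_s2 (CHECK (sensor_id = 2)) INHERITS (readings);

-- Route inserts into the right child with ON INSERT DO INSTEAD rules.
CREATE RULE readings_ins_s1 AS ON INSERT TO readings
    WHERE NEW.sensor_id = 1
    DO INSTEAD INSERT INTO readings_s1 VALUES (NEW.*);
CREATE RULE readings_ins_s2 AS ON INSERT TO readings
    WHERE NEW.sensor_id = 2
    DO INSTEAD INSERT INTO readings_s2 VALUES (NEW.*);
```

With constraint_exclusion enabled, a query with a WHERE clause on
sensor_id only scans the relevant child table, so an aggregate like
MAX(recorded) per partition stays cheap.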
Another solution might be to break the query into several pieces and
run smaller queries, each aimed at retrieving an individual row.
This can be done inside a stored procedure; here's how we did it
with some queries in LedgerSMB.
Here's a stored procedure we used in LedgerSMB to pull distinct years
from a table with maybe 10M rows, in a timely fashion. Something
similar might be doable for you, with modifications of course:
CREATE OR REPLACE FUNCTION date_get_all_years() RETURNS SETOF INT AS
$$
DECLARE
    next_record int;
BEGIN
    -- Seed with the earliest year in the table.
    SELECT MIN(EXTRACT('YEAR' FROM transdate))::INT
      INTO next_record
      FROM acc_trans;
    LOOP
        EXIT WHEN next_record IS NULL;
        RETURN NEXT next_record;
        -- Skip ahead to the next distinct year; each iteration is a
        -- small MIN() over the remaining range rather than a full scan.
        SELECT MIN(EXTRACT('YEAR' FROM transdate))::INT
          INTO next_record
          FROM acc_trans
         WHERE EXTRACT('YEAR' FROM transdate) > next_record;
    END LOOP;
    RETURN;  -- required final RETURN for a SETOF PL/pgSQL function
END;
$$ LANGUAGE plpgsql;
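The same skip-ahead idea can be adapted to the original problem of
the latest record per group. A hedged sketch, assuming a hypothetical
table t(group_id, ts, ...) with an index on (group_id, ts):

```sql
-- Call the function above like any set-returning function:
SELECT * FROM date_get_all_years();

-- For "latest row per group", PostgreSQL's DISTINCT ON keeps the
-- first row of each group in the given sort order, so ordering by
-- ts DESC within each group_id returns each group's newest row.
SELECT DISTINCT ON (group_id) *
  FROM t
 ORDER BY group_id, ts DESC;
```

Whether the planner can satisfy that with the (group_id, ts) index or
falls back to a sort depends on your data distribution, so it's worth
checking with EXPLAIN ANALYZE before committing to one approach.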