From: Richard Broersma <richard(dot)broersma(at)gmail(dot)com>
To: Tom <tom(at)cstcomposites(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: SELECTing every Nth record for better performance
Date: 2009-12-04 05:50:57
Message-ID: 396486430912032150t447a315p55393e84f47fc276@mail.gmail.com
Lists: pgsql-general
On Thu, Dec 3, 2009 at 9:26 PM, Tom <tom(at)cstcomposites(dot)com> wrote:
> I want to run a query that skips every nth record and returns a
> manageable dataset that still gives a correct overview of the data
> without slowing the programme down. Is there an easy way to do this
> that I have overlooked? I looked at:
I've played with datalogging. It was very easy to find nth records
when using date_trunc() on a timestamp. The only minor problem with
date_trunc() was that I couldn't create arbitrary granularity. For
example, it is easy to date_trunc() on a year, month, week, day, hour,
or minute, but I wanted 5-, 10-, and 15-minute increments. I bet there
is a solution to this, but I never looked into it.
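One workaround for arbitrary granularity is to round the epoch value
down to a multiple of the bucket width. A minimal sketch, assuming a
hypothetical table readings(ts timestamp, val numeric):

```sql
-- Bucket rows into 5-minute groups (300 seconds) and return one
-- aggregated row per bucket; table and column names are hypothetical.
SELECT to_timestamp(floor(extract(epoch FROM ts) / 300) * 300) AS bucket,
       avg(val) AS avg_val
FROM   readings
GROUP  BY 1
ORDER  BY 1;
```

Changing the 300 to 600 or 900 gives 10- and 15-minute buckets.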
To improve SELECT performance, I created functional indexes using
different date_trunc() granularities.
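For reference, such an index might look like the sketch below (the
table name is again hypothetical, and ts is a plain timestamp, since
date_trunc() on timestamptz is not immutable and so can't be indexed
directly). The query must spell the expression exactly the same way
for the planner to use the index:

```sql
-- Functional (expression) index on the truncated timestamp.
CREATE INDEX readings_ts_hour_idx
    ON readings (date_trunc('hour', ts));

-- A query that can use the index:
SELECT date_trunc('hour', ts), count(*)
FROM   readings
GROUP  BY 1;
```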
--
Regards,
Richard Broersma Jr.
Visit the Los Angeles PostgreSQL Users Group (LAPUG)
http://pugs.postgresql.org/lapug