From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Michael Viscuso <michael(dot)viscuso(at)getcarbonblack(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Smith <greg(at)2ndQuadrant(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Query optimization using order by and limit
Date: 2011-09-22 23:14:56
Message-ID: 20110922231456.GN12765@tamriel.snowman.net
Lists: pgsql-performance
Mike,
* Michael Viscuso (michael(dot)viscuso(at)getcarbonblack(dot)com) wrote:
> I spent the better part of the day implementing an application layer
> nested loop and it seems to be working well. Of course it's a little
> slower than a Postgres only solution because it has to pass data back
> and forth for each daily table query until it reaches the limit, but at
> least I don't have "runaway" queries like I was seeing before. That
> should be a pretty good stopgap solution for the time being.
Glad to hear that you were able to get something going which worked for
you.
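The application-layer approach described above amounts to a streaming k-way merge across the per-day result sets. As a minimal sketch (not Michael's actual code; table/column names and the in-memory stand-in for the daily queries are hypothetical), Python's `heapq.merge` shows why it stops pulling from each day once the limit is satisfied:

```python
import heapq
from itertools import islice

def fetch_day(rows):
    # Stand-in for one per-day query, e.g.
    # "SELECT ts, payload FROM events_YYYYMMDD ORDER BY ts" --
    # here we just yield pre-sorted (ts, payload) tuples lazily.
    yield from rows

def merged_limit(day_streams, limit):
    # heapq.merge performs a lazy k-way merge over already-sorted
    # streams, so each daily "query" is only consumed as far as
    # needed to satisfy the LIMIT.
    return list(islice(heapq.merge(*day_streams), limit))

day1 = fetch_day([(1, 'a'), (4, 'd')])
day2 = fetch_day([(2, 'b'), (3, 'c'), (5, 'e')])
print(merged_limit([day1, day2], 3))  # [(1, 'a'), (2, 'b'), (3, 'c')]
```

In a real application each stream would be backed by a server-side cursor so rows are fetched incrementally rather than all at once.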
> I was really hoping there was a Postgres exclusive answer though! :) If
> there are any other suggestions, it's a simple flag in my application to
> query the other way again...
I continue to wonder if some combination of multi-column indexes might
have made the task of finding the 'lowest' record from each of the
tables fast enough that it wouldn't be an issue.
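For what it's worth, the kind of multi-column index I have in mind might look something like this (the table and column names here are hypothetical, just to illustrate the shape):

```sql
-- Hypothetical per-day table keyed by (host_id, ts).
CREATE INDEX events_20110922_host_ts_idx
    ON events_20110922 (host_id, ts);

-- With the index matching the ORDER BY, finding the 'lowest'
-- qualifying record in each daily table should be a cheap
-- index probe rather than a sort:
SELECT *
  FROM events_20110922
 WHERE host_id = 42
 ORDER BY host_id, ts
 LIMIT 1;
```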
> Thanks for all your help - and I'm still looking to change those
> numerics to bigints, just haven't figured out the best way yet.
PostgreSQL's timestamps are also implemented using 64-bit integers, and
switching to them would let you use all of the PG date/time functions
and operators. Just a thought.
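A conversion along those lines might look like the following, though the exact USING expression depends on what the numerics actually encode (table and column names are hypothetical):

```sql
-- Assuming the numeric column holds epoch seconds; adjust the
-- USING expression to match what the values really encode.
ALTER TABLE events_20110922
    ALTER COLUMN event_time TYPE bigint
    USING event_time::bigint;

-- Or convert straight to a timestamp to pick up the
-- date/time operators:
-- ALTER TABLE events_20110922
--     ALTER COLUMN event_time TYPE timestamptz
--     USING to_timestamp(event_time::double precision);
```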
Thanks,
Stephen
Next Message: Michael Viscuso | 2011-09-22 23:21:04 | Re: Query optimization using order by and limit
Previous Message: Dave Crooke | 2011-09-22 23:03:51 | Re: Optimizing Trigram searches in PG 9.1