Re: Performance problems with DISTINCT ON

From: Scott Carey <scott(at)richrelevance(dot)com>
To: Sgarbossa Domenico <domenico(dot)sgarbossa(at)eniac(dot)it>, Richard Huxton <dev(at)archonet(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance problems with DISTINCT ON
Date: 2009-09-29 17:53:17
Message-ID: C6E7979D.12B97%scott@richrelevance.com
Lists: pgsql-performance

DISTINCT ON on Postgres 8.1 always forces a sort. It may be faster if you
restructure the query to use GROUP BY + ORDER BY. But even that might not
help, since the table may not be large enough for a plan that hash-aggregates
and then sorts the (much smaller) result to beat the plain sort.
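
For reference, here's a minimal sketch of that rewrite against the table from
the query quoted below. The join back to the base table is needed because
prezzo is not part of the aggregate, and unlike DISTINCT ON it can return
more than one row per articolo when two rows tie on data_ent:

  SELECT l.articolo, l.data_ent, l.prezzo
  FROM listini_anagrafici l
  JOIN (SELECT articolo, max(data_ent) AS data_ent
        FROM listini_anagrafici
        GROUP BY articolo) m
    ON m.articolo = l.articolo
   AND m.data_ent = l.data_ent;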

An index on (articolo, data_ent desc) might help -- but only if the planner
thinks that the index scan is faster. You may have to lower the cost
parameter for random I/O (random_page_cost) to get it to choose a plan that
uses that index -- which will be faster if the index and data are in memory,
but slower if it has to go to disk too often.
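
Something along these lines, as a starting point (the index name is made up;
note that per-column DESC in CREATE INDEX only appeared in 8.3, so on 8.1 a
plain ascending two-column index may not satisfy the mixed-direction ORDER BY
here -- check with EXPLAIN ANALYZE before relying on it):

  CREATE INDEX listini_articolo_data_idx
      ON listini_anagrafici (articolo, data_ent);

  -- lower the random I/O cost estimate for this session only (default 4)
  -- so the planner is more willing to pick the index scan
  SET random_page_cost = 2;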

If this query is being run a lot, and concurrently, it sounds like the
application needs some tweaks. The result might be cacheable in the
application for short intervals, for example. Or, if only small parts of the
table are updated, a timestamp column plus a filter selecting only the
updated rows would let a client application merge those updates into a
previous full result on the client side.
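
As a sketch of that incremental approach (last_mod is a hypothetical column
that the application or a trigger would have to maintain, and :last_poll_time
is a client-side parameter):

  SELECT DISTINCT ON (articolo) articolo, data_ent, prezzo
  FROM listini_anagrafici
  WHERE last_mod > :last_poll_time  -- only rows changed since last poll
  ORDER BY articolo, data_ent DESC;

The client then merges these rows into its cached full result, replacing any
articolo it already holds.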

On 9/29/09 5:44 AM, "Sgarbossa Domenico" <domenico(dot)sgarbossa(at)eniac(dot)it>
wrote:

> Subject: Re: [PERFORM] Performance problems with DISTINCT ON
>
>
>> Sgarbossa Domenico wrote:
>>> I need to retrieve the most recent prices per products from a price list
>>> table:
>>
>>> select distinct on (articolo) articolo,data_ent,prezzo from
>>> listini_anagrafici order by articolo, data_ent desc
>>>
>>> but it seems that this query runs slowly... about 5-6 seconds.
>>> The table contains more or less 500K records, PostgreSQL version is
>>> 8.1.11, and the server has 4 GB of RAM entirely dedicated to the db.
>>
>>> Unique  (cost=73893.89..76551.25 rows=88312 width=24) (actual time=4022.578..5076.206 rows=193820 loops=1)
>>>   ->  Sort  (cost=73893.89..75222.57 rows=531472 width=24) (actual time=4022.574..4505.538 rows=531472 loops=1)
>>>         Sort Key: articolo, data_ent
>>>         ->  Seq Scan on listini_anagrafici  (cost=0.00..16603.72 rows=531472 width=24) (actual time=0.009..671.797 rows=531472 loops=1)
>>> Total runtime: 5217.452 ms
>>
>> You've got 531472 rows in the table and the query is going to output
>> 193820 of them. Scanning the whole table is almost certainly the way to
>> go.
>>
>> If the table doesn't change much, you could try running a CLUSTER on the
>> index you've created. That will lock the table while it re-orders the
>> physical layout of the rows based on your index though, so it's no good
>> if the table is updated much.
>>
>> Failing that, you could try issuing "set work_mem = ..." before the
>> query with increasing sizes for work_mem. That might make the sort
>> faster too.
>>
>
> Thank you for the answer.
> I've tried as you suggested, but the only thing that seems to make a
> difference is the work_mem parameter.
> That cuts the runtime roughly in half (to about 3 seconds), but
> unfortunately it still isn't enough.
> And if there are a lot of concurrent requests, I think it could make the
> data swap to disk.
> Should I try a different approach to solve this issue?
>
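
For completeness, the quoted suggestions translate to something like this
(the values are only starting points, and the index name is the hypothetical
one from above; in 8.1 an integer work_mem is in KB, so 65536 is 64 MB):

  -- give the sort more memory for this session, then re-test the query
  SET work_mem = 65536;

  -- physically reorder the table by the index; takes an exclusive lock,
  -- so only do this if the table sees little concurrent activity
  CLUSTER listini_articolo_data_idx ON listini_anagrafici;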
