On Wed, Mar 16, 2011 at 4:45 AM, Uwe Bartels <uwe(dot)bartels(at)gmail(dot)com> wrote:
> I'm having trouble with some sql statements which use an expression with
> many columns and distinct in the column list of the select.
> select distinct col1,col2,.....col20,col21
> from table1 left join table2 on <join condition>,...
> <other expressions>;
> The negative result is a big sort with temporary files.
> -> Sort (cost=5813649.93..5853067.63 rows=15767078 width=80)
> (actual time=79027.079..81556.059 rows=12076838 loops=1)
> Sort Method: external sort Disk: 1086096kB
> By the way - for this query I have a work_mem of 1 GB - so raising this
> further is not generally possible - also not for one special command, due to ...
> How do I get around this?
Hmm. It seems to me that there's no way to work out the distinct
values without either sorting or hashing the output, which will
necessarily be slow if you have a lot of data.
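One hash-based possibility: rewriting the DISTINCT as a GROUP BY over all output columns lets the planner consider a HashAggregate instead of Sort + Unique, if the estimated hash table fits in work_mem. A sketch only — the table and column names are placeholders from the query above:

```sql
-- Hash-based deduplication (illustrative; col1..col21, table1/table2 and
-- the join condition stand in for the real names):
SELECT col1, col2, /* ... */ col21
FROM table1 LEFT JOIN table2 ON <join condition>
GROUP BY col1, col2, /* ... */ col21;
```

Whether the planner actually picks a hash plan depends on its row estimates and on work_mem, so check the EXPLAIN output.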
> I have one idea and like to know if there any other approaches or an even
> known better solution to that problem. By using group by I don't need the
> big sort for the distinct - I reduce it (theoretically) to the key columns.
> select <list of key columns>,<non key column>
> from table1 left join table2 on <join condition>,...
> <other conditions>
> group by <list of key columns>
You might try SELECT DISTINCT ON (key columns) <key columns> <non-key
columns> FROM ...
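Concretely, a hedged sketch (key_col1, key_col2 and non_key_col are placeholders for the actual key and non-key columns):

```sql
-- DISTINCT ON keeps one row per distinct combination of the listed
-- expressions; an ORDER BY starting with those same expressions
-- determines which row is kept:
SELECT DISTINCT ON (key_col1, key_col2)
       key_col1, key_col2, non_key_col
FROM table1
LEFT JOIN table2 ON <join condition>
ORDER BY key_col1, key_col2;
```

This can still sort, but only on the key columns rather than the full 21-column row, which is the reduction you describe.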
> Another question would be which aggregate function needs the fewest
> resources (time).
Not sure I follow this part.
The Enterprise PostgreSQL Company