From: Istvan Endredy <istvan(dot)endredy(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Ants Aasma <ants(at)cybertec(dot)at>, tivv00(at)gmail(dot)com
Subject: Re: bad planning with 75% effective_cache_size
Date: 2012-04-19 14:15:09
Message-ID: CAEcxehoTMJKNDVPPWfDui2E+KU+BNOBbUUcPKc6KzZiCdWb8PA@mail.gmail.com
Lists: pgsql-performance
Hi everybody,
thanks for the many responses. :)
> On Thu, Apr 19, 2012 at 3:44 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
>>> 7500ms
>>> http://explain.depesz.com/s/
>> This plan seems very odd -- doing individual index lookups on 2.8m rows
>> is not standard planner behavior. Can you confirm that all of your
>> other query cost parameters are the defaults?
Josh: I confirm; these are the non-default values.
I ran this query: http://wiki.postgresql.org/wiki/Server_Configuration
Its result:
"version";"PostgreSQL 9.1.3 on x86_64-unknown-linux-gnu, compiled by
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46), 64-bit"
"bytea_output";"escape"
"client_encoding";"UNICODE"
"client_min_messages";"log"
"effective_cache_size";"6GB"
"lc_collate";"en_US.UTF-8"
"lc_ctype";"en_US.UTF-8"
"listen_addresses";"*"
"log_directory";"/var/log/postgres"
"log_duration";"on"
"log_error_verbosity";"default"
"log_filename";"postgresql-%Y-%m-%d.log"
"log_line_prefix";"%t %u(at)%h %d %p %i "
"log_lock_waits";"on"
"log_min_duration_statement";"0"
"log_min_error_statement";"warning"
"log_min_messages";"warning"
"log_rotation_age";"15d"
"log_statement";"all"
"logging_collector";"on"
"max_connections";"100"
"max_stack_depth";"2MB"
"port";"5432"
"server_encoding";"UTF8"
"shared_buffers";"6024MB"
"TimeZone";"Europe/Budapest"
"wal_buffers";"16MB"
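For reference, the query-cost parameters Josh asked about can also be checked directly against their compiled-in defaults (a sketch; `boot_val` in `pg_settings` holds the built-in default, so `changed` flags anything non-default):

```sql
-- List the planner cost parameters and whether they differ from the defaults
SELECT name, setting, boot_val,
       setting IS DISTINCT FROM boot_val AS changed
FROM pg_settings
WHERE name IN ('seq_page_cost', 'random_page_cost',
               'cpu_tuple_cost', 'cpu_index_tuple_cost',
               'cpu_operator_cost', 'effective_cache_size');
```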
Ants:
> I'm not sure what to do about unique node overestimation, but I think
> it could be coaxed to be less optimistic about the limit by adding an
> optimization barrier and some selectivity decreasing clauses between
> the limit and the rest of the query:
>
> select * from (
> select distinct product_code from product p_
> inner join product_parent par_ on p_.parent_id=par_.id
> where par_.parent_name like 'aa%'
> offset 0 -- optimization barrier
> ) as x
> where product_code = product_code -- reduce selectivity estimate by 200x
> limit 2;
Its plan: http://explain.depesz.com/s/eF3h
1700ms
Віталій:
> How about
>
> with par_ as (select * from product_parent where parent_name like 'aa%' )
> select distinct product_code from product p_
> inner join par_ on p_.parent_id=par_.id
> limit 2
Its plan: http://explain.depesz.com/s/YIS
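If it helps, both rewrites can be compared on the same footing by running them through EXPLAIN ANALYZE rather than plain EXPLAIN (a sketch using the CTE variant; the BUFFERS option is available on 9.1):

```sql
-- Same query as above, executed and timed, with buffer-usage counters
EXPLAIN (ANALYZE, BUFFERS)
WITH par_ AS (SELECT * FROM product_parent WHERE parent_name LIKE 'aa%')
SELECT DISTINCT product_code
FROM product p_
INNER JOIN par_ ON p_.parent_id = par_.id
LIMIT 2;
```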
All suggestions are welcome,
Istvan