From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: "Ozer, Pam" <pozer(at)automotive(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow Query
Date: 2010-08-27 00:18:54
Message-ID: AANLkTik1ZHb078tdVckvoTw07KvsrFPB60kz_eJ1shPs@mail.gmail.com
Lists: pgsql-performance
On Thu, Aug 26, 2010 at 6:03 PM, Ozer, Pam <pozer(at)automotive(dot)com> wrote:
>
> I am new to Postgres and I am trying to understand EXPLAIN ANALYZE so I can tune the following query. I run the same query using MySQL and it takes less than 50ms. I run it on Postgres and it takes 10 seconds. I feel like I am missing something very obvious. (VehicleUsed is a big table with over 750,000 records, and datasetgroupyearmakemodel has 150,000 records.)
>
> It looks like the cost is highest in the hash join on postalcode. Am I reading this correctly? I do have indexes on lower(postalcode) in both tables. Why wouldn't it be using the index?
No, it's spending most of its time here:
> " -> Nested Loop (cost=101.81..37776.78 rows=11887 width=10) (actual time=1.172..9876.586 rows=382528 loops=1)"
Note that it expects 11,887 rows but gets 382,528 — a misestimate of more
than 30x, which is why the planner picks a nested loop here.
Try turning up default_statistics_target, running ANALYZE again, and see
how it runs.
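A sketch of what that could look like (the column and table names below are taken from the query; exact values to try will vary, and the per-column form is just one option):

```sql
-- Raise the statistics target for the current session (the server
-- default in this era of PostgreSQL was 100), then re-ANALYZE so the
-- planner gets better row-count estimates:
SET default_statistics_target = 500;
ANALYZE vehicleused;
ANALYZE datasetgroupyearmakemodel;

-- Or raise it only for the column whose estimate is off, e.g.:
ALTER TABLE vehicleused ALTER COLUMN postalcode SET STATISTICS 500;
ANALYZE vehicleused;
```

To make the change permanent for all sessions, set default_statistics_target in postgresql.conf and reload.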
Next Message: Bob Lunney | 2010-08-27 00:19:37 | Re: Slow Query
Previous Message: Ozer, Pam | 2010-08-27 00:03:27 | Slow Query