
Re: slow joining very large table to smaller ones

From: John A Meinel <john(at)arbash-meinel(dot)com>
To: Dan Harris <fbsd(at)drivefaster(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow joining very large table to smaller ones
Date: 2005-07-14 15:42:29
Lists: pgsql-performance

Dan Harris wrote:
> I'm trying to improve the speed of this query:
> explain select recordtext from eventactivity
>   inner join ( select incidentid from k_r where id = 94 ) a using ( incidentid )
>   inner join ( select incidentid from k_b where id = 107 ) b using ( incidentid );

You might try giving it a little bit more freedom with:

SELECT recordtext FROM eventactivity, k_r, k_b
 WHERE eventactivity.incidentid = k_r.incidentid
   AND eventactivity.incidentid = k_b.incidentid
   AND = 94
   AND = 107
-- AND k_r.incidentid = k_b.incidentid

I'm pretty sure that would give identical results; it just gives the
planner a little bit more freedom about how it does it.
Also, the last line is commented out because I think it is redundant.

You might also try:
SELECT recordtext
  FROM eventactivity JOIN k_r USING (incidentid)
  JOIN k_b USING (incidentid)
 WHERE = 94
   AND = 107

Also, if possible give us the EXPLAIN ANALYZE so that we know if the
planner is making accurate estimates. (You might send an EXPLAIN while
waiting for the EXPLAIN ANALYZE to finish)
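
For reference, plain EXPLAIN just prints the estimated plan and returns
immediately, while EXPLAIN ANALYZE actually runs the query and reports
the real row counts and times. On the second form above, that would be
something like:

EXPLAIN ANALYZE
SELECT recordtext
  FROM eventactivity JOIN k_r USING (incidentid)
  JOIN k_b USING (incidentid)
 WHERE = 94
   AND = 107;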

You can also try disabling merge joins, and see how that changes things.
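
For example (enable_mergejoin only discourages merge joins, it doesn't
forbid them; the planner can still pick one if nothing else works):

SET enable_mergejoin = off;
EXPLAIN ANALYZE
SELECT recordtext FROM eventactivity, k_r, k_b
 WHERE eventactivity.incidentid = k_r.incidentid
   AND eventactivity.incidentid = k_b.incidentid
   AND = 94
   AND = 107;
SET enable_mergejoin = on;  -- restore the default afterwards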

>                                                   QUERY PLAN
> --------------------------------------------------------------------------------
> Merge Join  (cost=2747.29..4249364.96 rows=11968693 width=35)
>    Merge Cond: (("outer".incidentid)::text = "inner"."?column2?")
>    ->  Merge Join  (cost=1349.56..4230052.73 rows=4413563 width=117)
>          Merge Cond: (("outer".incidentid)::text = "inner"."?column2?")
>          ->  Index Scan using eventactivity1 on eventactivity  (cost=0.00..4051200.28 rows=44519781 width=49)
>          ->  Sort  (cost=1349.56..1350.85 rows=517 width=68)
>                Sort Key: (k_b.incidentid)::text
>                ->  Index Scan using k_b_idx on k_b  (cost=0.00..1326.26 rows=517 width=68)
>                      Index Cond: (id = 107)
>    ->  Sort  (cost=1397.73..1399.09 rows=542 width=68)
>          Sort Key: (k_r.incidentid)::text
>          ->  Index Scan using k_r_idx on k_r  (cost=0.00..1373.12 rows=542 width=68)
>                Index Cond: (id = 94)
> (13 rows)
> There are many millions of rows in eventactivity.  There are a few
> ten-thousand rows in k_r and k_b.  There is an index on 'incidentid' in
> all three tables.  There should only be less than 100 rows matched in
> k_r and k_b total.  That part on its own is very very fast.  But, it
> should have those 100 or so incidentids extracted in under a second and
> then go into eventactivity AFTER doing that.  At least, that's my
> intention to make this fast.

Well, postgres is estimating around 500 rows each; is that way off? Try
just doing:
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;

And see if postgres estimates the number of rows properly.

I assume you have recently VACUUM ANALYZEd, which means you might need
to update the statistics target (ALTER TABLE k_b ALTER COLUMN
incidentid SET STATISTICS 100). The default is IIRC 10, and it ranges
from 1-1000; higher is more accurate, but makes ANALYZE slower.
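
Spelled out, that would be something like this (100 is just an example
target):

ALTER TABLE k_b ALTER COLUMN incidentid SET STATISTICS 100;
ALTER TABLE k_r ALTER COLUMN incidentid SET STATISTICS 100;
ANALYZE k_b;  -- re-gather statistics with the new target
ANALYZE k_r;
EXPLAIN ANALYZE SELECT incidentid FROM k_b WHERE id = 107;  -- re-check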

> Right now, it looks like pg is trying to sort the entire eventactivity
> table for the merge join which is taking several minutes to do.  Can I
> rephrase this so that it does the searching through k_r and k_b FIRST
> and then go into eventactivity using the index on incidentid?  It seems
> like that shouldn't be too hard to make fast but my SQL query skills
> are only average.

To me, it looks like it is doing an index scan (on through k_b
first, sorting the results by incidentid, then merge joining that with
eventactivity.

I'm guessing you actually want it to merge k_b and k_r to get extra
selectivity before joining against eventactivity.
I think my alternate forms would let postgres realize this. But if not,
you could try:

SELECT recordtext FROM eventactivity
  JOIN (SELECT incidentid FROM k_r JOIN k_b USING (incidentid)
         WHERE = 94 AND = 107) AS sub
 USING (incidentid);

I don't know how selective your keys are, but one of these queries
should structure things better for the planner; a lot depends on that
selectivity.
Going by the estimates in your plan, the eventactivity scan is ~44.5M
rows, the join against k_b is expected to leave ~4.4M rows, and the
full join ~12M rows. At that selectivity a seq scan really is the
expected choice (> 10% of the rows = seq scan). But if, as you say,
the selectivity is mis-estimated, it could be different.
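
If you want the actual numbers rather than the estimates, counts like
these would tell you (the last one has to perform the real join, so it
will take a while):

SELECT count(*) FROM k_r WHERE id = 94;
SELECT count(*) FROM k_b WHERE id = 107;
SELECT count(*)
  FROM eventactivity e
  JOIN k_r ON e.incidentid = k_r.incidentid
  JOIN k_b ON e.incidentid = k_b.incidentid
 WHERE = 94 AND = 107;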

> Thanks
> -Dan
