kevin kempter wrote:
> I'm expecting 9,961,914 rows returned. Each row in the big table should
> have a corresponding key in the smaller table; I want to basically
> "expand" the big table's column list by one, adding the appropriate
> key from the smaller table for each row in the big table. It's not a
> Cartesian product join.
Didn't seem likely, to be honest.
What happens if you run the query as a cursor, perhaps with an ORDER BY
on customer_id or similar to encourage index use? Do you ever get a
first row back?
In fact, what happens if you slap an index over all your join columns on
xsegment_dim? With 7,000 rows that should make it a cheap test.
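Something along these lines, as a rough sketch (the fact-table name and the join columns are placeholders; substitute your actual join keys):

```sql
-- Index the dimension table on all the join columns.
-- With only ~7,000 rows this is cheap to build.
CREATE INDEX idx_xsegment_dim_join
    ON xsegment_dim (customer_id, segment_id);

-- Cursors must live inside a transaction in PostgreSQL.
BEGIN;
DECLARE test_cur CURSOR FOR
    SELECT f.*, d.segment_key
    FROM   big_fact_table f
    JOIN   xsegment_dim d USING (customer_id, segment_id)
    ORDER BY f.customer_id;   -- may encourage an index-driven plan

-- If even one row takes forever, the problem is in the plan,
-- not in result-set size.
FETCH 1 FROM test_cur;

CLOSE test_cur;
COMMIT;
```

Running EXPLAIN on the same SELECT before and after creating the index would also show whether the planner actually picks it up.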
In response to: kevin kempter, 2008-05-16 08:00:41
From: Richard Huxton, 2008-05-16 08:18:12
Subject: Re: Join runs for > 10 hours and then fills up >1.3TB of disk space