Table join performance threshold...

From: Bryan Campbell <bryan(at)wordsandimages(dot)com>
To: <pgsql-novice(at)postgresql(dot)org>
Subject: Table join performance threshold...
Date: 2000-06-16 23:09:21
Message-ID: B57001B0.224E%bryan@wordsandimages.com
Lists: pgsql-novice

Howdy,

I'm a newbie to postgres, and I'm sure I've run into an obvious problem.

I have a database with 20 or so tables. None of them is very large (the
largest is about 60 rows by 20 columns). That largest one is my master
product table, which holds a bunch of IDs pointing at entries in other
tables (product attributes). Pretty standard stuff...

What I want to do is select a row in that table, and then join about 15 or
so tables with corresponding ID-Value relationships.
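
To make that concrete, here is a stripped-down sketch of the kind of layout
I mean (the table and column names are made up for illustration, not my
real schema):

CREATE TABLE color (
    id   integer PRIMARY KEY,   -- the KEY column
    name varchar(40)            -- the ATTRIBUTE column
);

CREATE TABLE fabric (
    id   integer PRIMARY KEY,
    name varchar(40)
);

CREATE TABLE master_product (
    id        integer PRIMARY KEY,
    color_id  integer,           -- matches color.id
    fabric_id integer            -- matches fabric.id
    -- ...plus a dozen or so more *_id columns like these
);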

My join works great... but it's slow. If I back the number of fields (and
therefore joined parameter tables) in my SELECT/WHERE query down to 9, it
speeds up dramatically (almost instantaneous). Anything above 9 and it
slows to a whopping 8 seconds.

Why would I experience such a dramatic change in response? I'm not doing
anything complex in my query... just your standard:

SELECT parameter_table.field AS some_friendly_name, (more fields...)
FROM master_table, parameter_table, (more parameter tables...)
WHERE master_table.parameter_id = parameter_table.id AND
(more join conditions...)

The parameter tables are very simple 2 column tables (KEY, ATTRIBUTE), none
of them over 40 rows.
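
Written out against the illustrative tables above (again, just stand-in
names), the full query is basically this pattern repeated for ~15 parameter
tables instead of two:

SELECT color.name  AS product_color,
       fabric.name AS product_fabric
FROM   master_product, color, fabric
WHERE  master_product.id        = 1
  AND  master_product.color_id  = color.id
  AND  master_product.fabric_id = fabric.id;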

Any thoughts? Is my SQL statement bunk? Does it look like I'm hitting a
memory limit? I've been reading quite a bit, but I'm having trouble finding
a lead.
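
I can post a query plan if that would help; I assume putting EXPLAIN in
front of the query is the right way to capture one, e.g.:

EXPLAIN
SELECT color.name  AS product_color,
       fabric.name AS product_fabric
FROM   master_product, color, fabric
WHERE  master_product.id        = 1
  AND  master_product.color_id  = color.id
  AND  master_product.fabric_id = fabric.id;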

Thanks for helping!!!!

Bryan
