The example was fictitious, but the structure is the same as the real problem.
The stored procedure calls another recursive stored procedure that can take a long time to run, usually about 3-4 seconds. Not bad for a handful of records, but it is now operating on a table with over 40,000 records.
Without the stored procedure call (the fictitious "get_jobs" call), it returns about 50 records. I can live with the 2-3 minutes it'll take to run the stored proc for those, but not the 40,000+. This is why I tried to segregate it the way I did.
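One way to express that segregation so the expensive function only ever sees the pre-filtered rows is to materialize the cheap filter first and then apply the function to the result. A minimal sketch, with hypothetical table and column names based on the quoted fragment below (note: `LATERAL` requires PostgreSQL 9.3+, and `MATERIALIZED` on a CTE requires 12+):

```sql
-- Sketch: reduce to ~50 rows cheaply, then run the expensive
-- set-returning function only on those rows.
WITH filtered AS MATERIALIZED (       -- fences off the cheap filter
    SELECT x.name
    FROM   some_table x               -- hypothetical table name
    WHERE  x.some_cheap_condition     -- the part that returns ~50 rows
)
SELECT f.name, j.job
FROM   filtered f
CROSS  JOIN LATERAL get_jobs(f.name) j;   -- function runs once per filtered row
```

The point of the `MATERIALIZED` fence is to keep the planner from flattening the query and calling `get_jobs()` for all 40,000+ rows.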
From: Grzegorz Jaśkiewicz [mailto:gryzman(at)gmail(dot)com]
Sent: Monday, April 27, 2009 5:04 PM
To: Gauthier, Dave
Subject: Re: [GENERAL] Query organization question
> exists (select 'found_it' from get_jobs(x.name) j where j.job =
What does this function do?
If it only reads from tables, then a simple join will do it pretty fast.
Also, if the table is large, keeping job as an integer will save you some
space, make index lookups faster, and generally make everything faster.
Subselects often perform poorly, so please try writing that query as a
join first. PostgreSQL is capable of reordering and choosing the right
approach for a query; this isn't MySQL, so you don't have to try to outsmart it.
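The join rewrite suggested here might look like the following. This is a sketch under the assumption that `get_jobs(name)` simply selects from a `jobs` table keyed on name; the table and column names are hypothetical, mirroring the quoted EXISTS fragment:

```sql
-- EXISTS form, roughly as quoted:
--   ... WHERE EXISTS (SELECT 'found_it' FROM get_jobs(x.name) j
--                     WHERE j.job = ...)

-- Equivalent plain-join form, assuming get_jobs() just reads a jobs
-- table; the planner can then reorder and pick join strategies freely:
SELECT DISTINCT x.*
FROM   some_table x                 -- hypothetical outer table
JOIN   jobs j ON j.name = x.name    -- hypothetical table behind get_jobs()
WHERE  j.job = 'some_job';          -- placeholder for the quoted condition
```

If `get_jobs()` does real procedural work rather than a lookup, the join rewrite doesn't apply directly and the function has to stay, which is why restricting the row set first matters so much.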