Isabelle Therrien <therriei(at)LUB(dot)UMontreal(dot)CA> writes:
> I have a big query, reported below, that is called several times;
> at least 4 active connections call it at the same time.
> Normally, this query is executed in about 30-50 milliseconds.
> But after a while (depending on how many connections are used, and how
> often the query is called),
> the query is executed in 1000ms, then 2000ms, and it continues to grow
> exponentially. I've already seen it reaching 80 seconds.
Hmm, that's odd. What causes the time to drop back down to milliseconds
--- do you have to restart the whole database, or just run it in a new
backend? Does the amount of memory being used by the backend increase
as the time goes up? What does EXPLAIN show as the query plan for the
query? How large are the tables, and how many tuples does the query
actually return?
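For reference, this is the kind of plan check being asked about. The query below is purely illustrative (the table and column names are placeholders, not from the original report); the point is to prefix the slow query with EXPLAIN and compare the plan when it is fast versus when it is slow:

```sql
-- Illustrative only: run EXPLAIN on the actual slow query.
-- Table/column names here are hypothetical placeholders.
EXPLAIN
SELECT o.id, o.total
FROM orders o
JOIN customers c ON c.id = o.customer_id
WHERE c.region = 'QC';
```

If the plan changes between the fast and slow cases (for example, an index scan degrading to a sequential scan), that narrows down the cause considerably.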
Also, which beta release exactly, and how did you build it (what
configure options)?
Finally, it would be nice to see the full schemas for these tables, to
be sure we're not missing something. You can generate those via
pg_dump -s -t tablename databasename
regards, tom lane