Re: Possible explanations for catastrophic performance deterioration?

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Carlos Moreno" <moreno_pg(at)mochima(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Possible explanations for catastrophic performance deterioration?
Date: 2007-09-23 23:05:31
Message-ID: 874phlugn8.fsf@oxford.xeocode.com
Lists: pgsql-performance

"Carlos Moreno" <moreno_pg(at)mochima(dot)com> writes:

> I'm now thinking that the problem with my logic is that the system does
> not keep anything in memory (or not all tuples, in any case), since it
> is only counting, so it does not *have to* keep them

That's really not how it works. When Postgres talks to the OS, it's just
bits. There's no cache of rows or values or anything higher-level than bits.
Neither the OS's filesystem cache nor Postgres's shared buffers knows the
difference between live and dead rows, or even pages that don't contain any
rows at all.

> and since the total amount of reading from the disk exceeds the amount of
> physical memory, then the valid tuples are "pushed out" of memory.

That's right.
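
A minimal sketch of the effect (the table name and the DELETE predicate are
hypothetical; the point is only the I/O pattern):

```sql
-- After a mass DELETE the rows are dead, but the pages they occupied
-- remain part of the table:
DELETE FROM big_table WHERE updated < now() - interval '1 year';

-- A count(*) still sequentially scans every page, dead rows included,
-- because the caches below Postgres see only raw pages of bits:
SELECT count(*) FROM big_table;

-- The on-disk size (and hence the amount read) is unchanged:
SELECT pg_relation_size('big_table');

-- Plain VACUUM only marks the dead space reusable; shrinking the table
-- back down requires rewriting it:
VACUUM FULL big_table;
```

If that scan is larger than RAM, the earlier pages (dead rows and all) have
already pushed the useful ones out of both caches by the time it finishes.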

--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
