
Re: Possible explanations for catastrophic performance deterioration?

From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: "Carlos Moreno" <moreno_pg(at)mochima(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Possible explanations for catastrophic performance deterioration?
Date: 2007-09-23 23:05:31
Message-ID: 874phlugn8.fsf@oxford.xeocode.com
Lists: pgsql-performance
"Carlos Moreno" <moreno_pg(at)mochima(dot)com> writes:

> I'm now thinking that the problem with my logic is that the system does
> not keep anything in memory (or not all tuples, in any case), since it
> is only counting, so it does not *have to* keep them

That's really not how it works. When Postgres talks to the OS, the data is just
bits. There's no cache of rows or values or anything higher level than bits.
Neither the OS's filesystem cache nor the Postgres shared memory knows the
difference between live or dead rows, or even pages that don't contain any
rows.

>  and since the total amount of reading from the disk exceeds the amount of
> physical memory, then the valid tuples are "pushed out" of memory.

That's right.
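To illustrate (a toy model in Python, not actual Postgres internals): a fixed-size LRU cache that stores opaque pages cannot prefer pages holding live tuples, so a sequential scan of a bloated table evicts useful pages just as readily as useless ones. The class and page counts below are hypothetical, chosen only to show the effect.

```python
from collections import OrderedDict

# Toy model: a fixed-size LRU page cache that, like the OS filesystem
# cache, stores opaque pages and cannot tell live tuples from dead ones.
class PageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # page_id -> page contents (opaque bytes)
        self.evictions = 0

    def read(self, page_id, page_bytes):
        if page_id in self.cache:
            self.cache.move_to_end(page_id)  # mark as recently used
            return self.cache[page_id]
        if len(self.cache) >= self.capacity:
            self.cache.popitem(last=False)   # evict least-recently-used page
            self.evictions += 1
        self.cache[page_id] = page_bytes
        return page_bytes

# A bloated table: 100 pages on disk, even if only a few hold live
# tuples. A counting scan still has to read every page.
cache = PageCache(capacity=20)
for page_id in range(100):
    cache.read(page_id, b"...")  # contents are opaque to the cache

# 100 pages streamed through a 20-page cache: 80 evictions, and pages
# with live tuples were pushed out as readily as pages of dead ones.
print(cache.evictions)  # 80
```

Once the total pages scanned exceed the cache size, every further page read evicts something, which is exactly the "pushed out of memory" behaviour described above.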

-- 
  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com

