On 9/23/07, Carlos Moreno <moreno_pg(at)mochima(dot)com> wrote:
> Yes, that part I understand --- I think I now know what the error is in
> my logic. I was thinking as follows: We read 2GB of which 1900MB are
> dead tuples. But then, once they're read, the system will only keep
> in memory the 100MB that are valid tuples.
Yes, this is where your logic goes wrong. PostgreSQL caches whole pages, not individual tuples, so the dead tuples occupy buffer space right alongside the live ones.
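You can see the page-level effect directly in the catalog. A minimal sketch, assuming the table from this thread is named customer (relpages and reltuples are estimates maintained by VACUUM/ANALYZE):

    -- Pages on disk vs. the live-tuple estimate; a large relpages
    -- next to a small reltuples is the signature of bloat
    SELECT relpages, reltuples
      FROM pg_class
     WHERE relname = 'customer';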
> I'm now thinking that the problem with my logic is that the system does
> not keep anything in memory (or not all tuples, in any case), since it
> is only counting, so it does not *have to* keep them, and since the
> total amount of reading from the disk exceeds the amount of physical
> memory, then the valid tuples are "pushed out" of memory.
Yes, it does keep some of it in memory (in shared buffers, plus whatever the OS page cache holds), but not all of it.
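To check how much of it can even fit, compare the table's on-disk size (dead tuples included) against shared_buffers. A minimal sketch, again assuming the customer table from this thread:

    -- On-disk size of the heap, dead tuples included
    SELECT pg_size_pretty(pg_relation_size('customer'));

    -- Size including indexes and TOAST data
    SELECT pg_size_pretty(pg_total_relation_size('customer'));

    -- PostgreSQL's own buffer cache; the OS page cache sits on top of it
    SHOW shared_buffers;

If the bloated table is several times larger than the available cache, each scan has to pull most of it back from disk.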
> So, the second time I execute the query, it will still need to scan the
> disk (in my mind, the way I was seeing it, the second time I execute
> the "select count(*) from customer", the entire customer table would be
> in memory from the previous time, and that's why I was thinking that
> the bloating would not explain why the second time it is still slow).
Yes, every execution still performs the additional I/O and CPU work needed
to read through the bloated data.
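The fix is to get rid of the dead space itself, not just to re-run the query. A minimal sketch, assuming the customer table; note that VACUUM FULL takes an exclusive lock on the table while it runs:

    -- Report how many dead row versions each scan is reading past
    VACUUM VERBOSE customer;

    -- Plain VACUUM only marks the space reusable; VACUUM FULL
    -- compacts the table so the file (and the scan) actually shrinks
    VACUUM FULL customer;

    -- Refresh planner statistics after the rewrite
    ANALYZE customer;

After that, a count(*) only has to read the ~100MB of live data rather than the full 2GB.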
> Am I understanding it right?
Yes, now I think so.
Jonah H. Harris, Sr. Software Architect | phone: 732.331.1324
EnterpriseDB Corporation | fax: 732.331.1301
499 Thornall Street, 2nd Floor | jonah(dot)harris(at)enterprisedb(dot)com
Edison, NJ 08837 | http://www.enterprisedb.com/