Re: extremly low memory usage

From: John A Meinel <john(at)arbash-meinel(dot)com>
To: Ron <rjpeace(at)earthlink(dot)net>
Cc: Jeremiah Jahn <jeremiah(at)cs(dot)earlham(dot)edu>, postgres performance <pgsql-performance(at)postgresql(dot)org>
Subject: Re: extremly low memory usage
Date: 2005-08-19 19:23:02
Message-ID: 43063196.9000001@arbash-meinel.com
Lists: pgsql-performance

Ron wrote:
> At 01:18 PM 8/19/2005, John A Meinel wrote:
>
>> Jeremiah Jahn wrote:
>> > Sorry about the formatting.
>> >
>> > On Thu, 2005-08-18 at 12:55 -0500, John Arbash Meinel wrote:
>> >
>> >>Jeremiah Jahn wrote:
>> >>
>> >>
>>
>> ...
>>
>> >>The expensive parts are the 4915 lookups into the litigant_details (each
>> >>one takes approx 4ms for a total of ~20s).
>> >>And then you do it again on case_data (average 3ms each * 4906 loops =
>> >>~15s).
>> >
>> > Is there some way to avoid this?
>> >
>>
>> Well, in general, 3ms for a single lookup seems really long. Maybe your
>> index is bloated by not vacuuming often enough. Do you tend to get a lot
>> of updates to litigant_details?
>
>
> Given that the average access time for a 15Krpm HD is in the 5.5-6ms
> range (7.5-8ms for a 10Krpm HD), having an average of 3ms for a single
> lookup implies that roughly 1/2 (the 15Krpm case) or 1/3 (the 10Krpm
> case) of the table accesses require a seek.
>

Well, from what he has said, the total indexes are < 1GB and he has 6GB
of RAM, so everything should fit. Not to mention he is only accessing
about 5,000 out of several million rows.
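
As a quick sanity check, those sizes (and a rough bloat indicator) can
be read straight out of pg_class; relpages is in 8kB blocks and is only
as fresh as the last VACUUM/ANALYZE on each relation. The table names
are the ones from earlier in the thread, and the subquery pulls in
their indexes:

  SELECT c.relname, c.relkind,
         c.relpages * 8 / 1024 AS approx_mb, c.reltuples
  FROM pg_class c
  WHERE c.relname IN ('litigant_details', 'case_data')
     OR c.oid IN (SELECT indexrelid FROM pg_index
                  WHERE indrelid IN ('litigant_details'::regclass,
                                     'case_data'::regclass))
  ORDER BY c.relpages DESC;

If an index's approx_mb is far larger than its reltuples would suggest,
that points at the index bloat I mentioned above.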

> This implies a poor match between physical layout and access pattern.

This seems to be the case. But since this is not the only query, it may
be that other access patterns are more important to optimize for.
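
One way to put a number on that mismatch is the correlation statistic
the planner keeps per column (run ANALYZE first): values near +1 or -1
mean the heap is laid out in roughly the same order as that column,
while values near 0 mean each index probe is likely to land on a
different heap page. I've left out the column filter since the actual
join keys aren't shown above:

  SELECT tablename, attname, correlation
  FROM pg_stats
  WHERE tablename IN ('litigant_details', 'case_data');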

>
> If I understand correctly, the table should not be very fragmented given
> that this is a reasonably freshly loaded DB? That implies that the
> fields being looked up are not well sorted in the table compared to the
> query pattern.
>
> If the entire table could fit in RAM, this would be far less of a
> consideration. Failing that, the physical HD layout has to be improved
> or the query pattern has to be changed to reduce seeks.
>
>

...

>> After CLUSTER, the current data will stay clustered, but new data will
>> not, so you have to continually CLUSTER, the same way that you might
>> VACUUM. *However*, IIRC CLUSTER grabs an Exclusive lock, so it is as
>> expensive as a VACUUM FULL. Be aware of this, but it might vastly
>> improve your performance, so it would be worth it.
>
>
> CLUSTER can be a very large maintenance overhead/problem if the table(s)
> in question actually need to be "continually" re-CLUSTERed.
>
> If there is no better solution available, then you do what you have to,
> but it feels like there should be a better answer here.
>
> Perhaps the DB schema needs examining to see if it matches up well with
> its real usage?
>
> Ron Peacetree
>

I certainly agree that CLUSTER is expensive and is an ongoing
maintenance issue. If this is the normal access pattern, though, it may
be worth it.
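
For reference, a minimal sketch of what that looks like with the
8.0-era syntax (the index name here is hypothetical; use whichever
index matches the dominant lookup order):

  -- CLUSTER rewrites the table in the index's order; it holds an
  -- exclusive lock for the duration, much like VACUUM FULL
  CLUSTER litigant_details_case_id_idx ON litigant_details;
  ANALYZE litigant_details;
  -- rows inserted afterwards are not kept in that order, so the
  -- command has to be re-run periodically to keep the benefit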

I also wonder, though, whether his table is properly normalized, which,
as you mentioned, might lead to improved access patterns.

John
=:->
