Re: DB cache size strategies

From: "NTPT" <ntpt(at)centrum(dot)cz>
To: "Ed L(dot)" <pgsql(at)bluepolka(dot)net>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Martijn van Oosterhout" <kleptog(at)svana(dot)org>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: DB cache size strategies
Date: 2004-02-11 09:42:29
Message-ID: 001301c3f083$5e43f5c0$d300a8c0@webpropag.cz
Lists: pgsql-general

I ran some tests on my production system during a maintenance window, so no users were using the system. It runs on an old box with a K6-2/500, 3x 128 MB DIMMs, and PostgreSQL 7.3.2.

I have a table with around 100,000 rows, roughly like this: owner int8, id int8, content text, with an index on the owner column (this index is used by the test query).

I ran a test query: select * from table where owner='10000' order by id offset 100 limit 50. There are around 10,000 matching rows before the limit and offset apply. I ran it under explain analyze and got the results below.
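For reference, here is a minimal sketch of the setup (the table name "docs" and the index name are made up for illustration; only the column types and the query shape match the description above):

-- illustrative schema; real names differ
CREATE TABLE docs (
    owner   int8,
    id      int8,
    content text
);
CREATE INDEX docs_owner_idx ON docs (owner);

-- the test query, timed with EXPLAIN ANALYZE
EXPLAIN ANALYZE
SELECT * FROM docs
WHERE owner = '10000'
ORDER BY id
OFFSET 100 LIMIT 50;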

Case 1 (give PG almost no memory):
sort_mem = 64 (KB)
effective_cache_size = 64 (8k blocks, around 512 KB)

Takes 3300 ms.

Case 2 (give a lot of RAM to postgres):
sort_mem = 64000 (KB, around 64 MB)
effective_cache_size = 8192 (8k blocks, around 64 MB)

Swapping occurred; execution takes 5400 ms.

Case 3:
sort_mem = 16000 (KB, around 16 MB)
effective_cache_size = 512 (8k blocks, around 4 MB)

Takes 1200 ms. The best result.

Case 4:
sort_mem = 16000 (KB, around 16 MB)
effective_cache_size = 64 (8k blocks, around 512 KB)

Takes 1900 ms. In this case I tried increasing effective_cache_size step by step (64, 128, 256, 512, 1024); raising it above 512 had no dramatic impact on performance.

I also tested shared_buffers settings between 2*max_connections (i.e. 2*128 = 256) and 1024, but they had no dramatic impact on performance; my tests did not focus on this.
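For reference, the tested range corresponds roughly to these postgresql.conf lines (values are in 8k blocks, tried one value at a time):

shared_buffers = 256     # low end: 2 * max_connections (128), around 2 MB
shared_buffers = 1024    # high end tested, around 8 MB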

It seems that:

1: Setting memory limits so high that the machine starts using swap space is WORSE than giving postgres as little memory as possible.

2: The sort_mem setting has a bigger impact on performance than effective_cache_size, as long as the db cache holds at least a few hundred disk pages. More than about 1000 disk pages in effective_cache_size makes no sense here.
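For reference, the best combination found (Case 3) as postgresql.conf lines; both parameters can also be changed per session with SET, which is convenient for this kind of experiment:

# postgresql.conf -- best result on this box (Case 3)
sort_mem = 16000              # KB; note each sort in each backend may use this much
effective_cache_size = 512    # 8k blocks, around 4 MB

-- or per session, without touching the config file:
SET sort_mem = 16000;
SET effective_cache_size = 512;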

I cannot run reliable simulations of real user load against this database :(, so with more concurrent users the results may differ. Maybe a good "stress test" suite for the postgresql database is needed for future testing.

----- Original Message -----
From: "Ed L." <pgsql(at)bluepolka(dot)net>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: "Martijn van Oosterhout" <kleptog(at)svana(dot)org>; <pgsql-general(at)postgresql(dot)org>
Sent: February 11, 2004 7:01
Subject: Re: [GENERAL] DB cache size strategies

> On Tuesday February 10 2004 10:26, Tom Lane wrote:
> >
> > Giving PG half the RAM is counterproductive no matter what --- that
> > pretty much guarantees that every page that's in RAM will be in RAM
> > twice, once in PG buffers and once in kernel buffers. The two
> > reasonable schools of thought are (1) to make PG buffers relatively
> > small and let the kernel do the bulk of the buffering, or (2) to give
> > PG most of the RAM and expect it to do the bulk of the buffering.
> >
> > Past experience has been that theory (1) generally wins. Jan's recent
> > work on ARC buffering may make things more interesting, though.
>
> So, pursuing theory (1) in 'the time before ARC', assuming you have a
> dedicated box with little or no non-db competition for resources, why give
> PG any more than is absolutely required (max_connections*2 or 16)? Why not
> just let the kernel reign as completely as possible? Is that what you mean
> by "relatively small"?
>
> TIA.
