2010/5/28 Konrad Garus <konrad.garus@gmail.com>:
> 2010/5/27 Cédric Villemain <cedric.villemain.debian@gmail.com>:
>> Exactly. And the time to browse depends on the number of blocks already
>> in core memory.
>> I am interested in test results and benchmarks if you are going to run some :)
> I am still thinking about whether I want to do it on this prod machine.
> Maybe on something less critical first (but still with a good amount
> of memory mapped by page buffers).
> What system have you tested it on? Has it ever run on a few-gig system? :-)
I have tested it on databases of up to 300GB, for the stats purpose.
The snapshot/restore was done on databases of around 40-50GB, but with
only 16GB of RAM.
I really think some improvements are possible before using it in
production, even if it should work well as it is. At least something
to remove the orphan snapshot files (in case of DROP TABLE or
TRUNCATE), and probably an increase in the quality of the code around
the prefetch (better handling of effective_io_concurrency... the
prefetch is linear but block requests ...).
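
To illustrate the idea, here is a minimal sketch of the snapshot/restore
mechanism (this is not the actual pgfincore code, and all names are made
up): mincore() records which pages of a relation file are resident in
the OS cache, and posix_fadvise(POSIX_FADV_WILLNEED) asks the kernel to
read them back, grouping contiguous pages into one advice call instead
of issuing one request per block.

#define _DEFAULT_SOURCE          /* for mincore() and posix_fadvise() on glibc */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Snapshot: one byte per page, low bit set if the page is in the OS cache. */
static unsigned char *snapshot_file(const char *path, size_t *npages)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return NULL; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) { close(fd); return NULL; }

    long pagesize = sysconf(_SC_PAGESIZE);
    *npages = ((size_t) st.st_size + pagesize - 1) / pagesize;

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return NULL; }

    unsigned char *vec = malloc(*npages);
    if (vec && mincore(map, st.st_size, vec) < 0) {
        perror("mincore");
        free(vec);
        vec = NULL;
    }
    munmap(map, st.st_size);
    close(fd);
    return vec;
}

/* Restore: advise the kernel to read back runs of formerly resident pages,
 * one posix_fadvise() call per contiguous run rather than per block. */
static void restore_file(const char *path, const unsigned char *vec,
                         size_t npages)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return; }

    long pagesize = sysconf(_SC_PAGESIZE);
    size_t i = 0;
    while (i < npages) {
        if (!(vec[i] & 1)) { i++; continue; }
        size_t start = i;
        while (i < npages && (vec[i] & 1))      /* group contiguous pages */
            i++;
        posix_fadvise(fd, (off_t) start * pagesize,
                      (off_t) (i - start) * pagesize, POSIX_FADV_WILLNEED);
    }
    close(fd);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s relation-file\n", argv[0]);
        return 1;
    }

    size_t npages = 0;
    unsigned char *vec = snapshot_file(argv[1], &npages);
    if (!vec) return 1;

    size_t resident = 0;
    for (size_t i = 0; i < npages; i++)
        resident += vec[i] & 1;
    printf("%zu of %zu pages resident\n", resident, npages);

    /* In a real snapshot/restore cycle the vector would be written to disk
     * and read back after a restart; here we simply re-advise immediately. */
    restore_file(argv[1], vec, npages);
    free(vec);
    return 0;
}

Run it against a relation segment file (e.g. under $PGDATA/base) to
print the resident page count and re-advise those pages.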
If you are able to test/benchmark it on a pre-production environment, do it :)
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support