On Tue, Feb 28, 2012 at 2:41 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Tue, Feb 28, 2012 at 5:48 PM, Stefan Keller <sfkeller(at)gmail(dot)com> wrote:
>> P.S. And yes, the database is effectively 'read-only': it is truncated
>> and re-populated from scratch every night. fsync is off, so I don't
>> care about ACID. After the indexes on name, hstore and geometry are
>> generated, I do a VACUUM FULL FREEZE. The current installation is a
>> virtual machine with 4GB memory and the filesystem is "read/write".
>> The future machine will be a pizza box with 72GB memory.
> I don't get this. Something's wrong.
> In the OP, you say "There is enough main memory to hold all table
> contents.". I'm assuming, there you refer to your current system, with
> 4GB memory.
> So your data is less than 4GB, but you'll be throwing a 72GB server
> at it? Either that's tremendous overkill, or your data simply isn't
> less than 4GB.
> It's quite possible the vacuum full is thrashing your disk cache due
> to maintenance_work_mem. You can overcome this issue with the tar
> trick, which is most easily performed as:
> tar cf /dev/null $PG_DATA/base
But on many implementations, that will not work. tar detects that the
output is going to the bit bucket, and so doesn't bother to actually
read the data.
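A portable way around that optimization is to write the archive to a
pipe instead of naming /dev/null as the archive file: the pipe forces
tar to actually read every byte, which is what pulls the relation files
into the OS page cache. A minimal sketch, using a throwaway directory
in place of $PGDATA/base (the directory name and file sizes here are
illustrative, not from the original thread):

```shell
# Demo directory stands in for $PGDATA/base. GNU tar special-cases an
# archive named /dev/null and may skip reading file contents entirely,
# so route the archive through a pipe instead.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/relfile" bs=1024 count=16 2>/dev/null

# Writing to a pipe forces every byte to be read, warming the OS page
# cache; wc -c confirms the data really went through.
tar cf - "$demo" | wc -c

rm -rf "$demo"
```

Note this only warms the OS cache, not PostgreSQL's shared_buffers.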
> Another option is to issue a plain vacuum after the vacuum full.
> A plain vacuum will just scan the tables and indices; since the
> vacuum full will have cleaned everything already, it should do no
> actual work, but it will load everything both into the OS cache and
> into shared_buffers.
Doesn't vacuum use a ring buffer strategy, so it would load the pages
into the OS cache, but probably not into shared_buffers?
pgsql-performance mailing list