Re: Working on huge RAM based datasets

From: "Merlin Moncure" <merlin(dot)moncure(at)rcsonline(dot)com>
To: "Andy Ballingall" <andy_ballingall(at)bigfoot(dot)com>
Cc: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Working on huge RAM based datasets
Date: 2004-07-12 14:23:06
Message-ID: 6EE64EF3AB31D5448D0007DD34EEB34101AECA@Herge.rcsinc.local
Lists: pgsql-performance

Andy wrote:
> Whether the OS caches the data or PG does, you still want it cached. If
> your sorting backends gobble up the pages that otherwise would be filled
> with the database buffers, then your postmaster will crawl, as it'll
> *really* have to wait for stuff from disk. In my scenario, you'd spec
> the machine so that there would be plenty of memory for *everything*.

That's the whole point: memory is a limited resource. If pg is
crawling, then the problem is simple: you need more memory. The
question is: is it postgresql's responsibility to manage that resource?
Pg is a data management tool, not a memory management tool. The same
'let's manage everything' argument also frequently gets brought up wrt
file i/o, because people assume the o/s sucks at file management. In
reality, modern operating systems are quite good at it, and by going
through the generic interface the administrator is free to choose a file
system that best suits the needs of the application.
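To make the "let the OS do the caching" approach concrete, here is a
minimal sketch of how it might look in postgresql.conf on the 7.4-era
versions under discussion. The parameter names are real, but the values
are illustrative assumptions for a hypothetical box with ~8 GB of RAM,
not a recommendation:

```
# postgresql.conf -- illustrative sketch only
shared_buffers = 10000          # 8 KB pages; keep PG's own cache modest (~80 MB)
effective_cache_size = 700000   # 8 KB pages; tell the planner the OS page
                                # cache likely holds most of the hot data (~5.5 GB)
sort_mem = 65536                # KB per sort (renamed work_mem in 8.0)
```

The idea is the division of labor argued above: PG keeps a small working
set in shared_buffers, and the bulk of available memory is left to the
kernel's page cache, which effective_cache_size merely describes to the
planner rather than allocates.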

At some point, hard disks will be replaced by solid state memory
technologies...do you really want to recode your memory manager when
this happens because all your old assumptions are no longer correct?

Merlin
