On Mon, Feb 27, 2012 at 1:31 PM, Stefan Keller <sfkeller(at)gmail(dot)com> wrote:
> Hi Scott
> 2012/2/26 Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>:
>> On Sun, Feb 26, 2012 at 1:11 PM, Stefan Keller <sfkeller(at)gmail(dot)com> wrote:
>>> So to me the bottom line is that PG already has reduced overhead at
>>> least for issue #2 and perhaps for #4.
>>> The remaining issues to investigate in PG are in-memory optimization
>>> (#2) and replication (#3) together with high availability.
>> Yeah, the real "problem" pg has to deal with is that it writes to
>> disk, and expects that to provide durability, while voltdb (Mike's db
>> project) writes to multiple machines in memory and expects that to be
>> durable. No way a disk subsystem is gonna compete with an in memory
>> cluster for performance.
> That's the point where I'd like to ask for ideas on how to extend PG
> to manage "in-memory tables"!
> To me it's obvious that memory becomes cheaper and cheaper, while PG
> is still designed with low memory in mind.
> In my particular scenario I can even set durability aside, since I
> write once and read 1000 times. My main problem is heavy geometry
> calculations on geospatial data (like the ST_Relate or ST_Intersection
> functions), which I expect to run close to the data and in-memory. I
> don't want PG to push table rows out to disk just to free memory
> beforehand (because of the "low memory assumption").
I would imagine unlogged tables in a tablespace mounted in memory
would get you close.
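A minimal sketch of that idea, assuming a tmpfs mount at /mnt/pg_ram (a hypothetical path) and accepting that the data does not survive a crash:

```sql
-- Hypothetical setup: a RAM-backed tablespace plus an unlogged table.
-- Assumes tmpfs is already mounted, e.g.:
--   mount -t tmpfs -o size=8G tmpfs /mnt/pg_ram
-- and that the mount directory is owned by the postgres user.
CREATE TABLESPACE ram_ts LOCATION '/mnt/pg_ram';

-- UNLOGGED skips WAL writes entirely; the table is truncated
-- automatically after a crash, which fits a write-once / read-many
-- cache that can be rebuilt.
CREATE UNLOGGED TABLE geo_cache (
    id   bigint PRIMARY KEY,
    geom geometry   -- PostGIS type, assuming PostGIS is installed
) TABLESPACE ram_ts;
```

Two caveats with this sketch: the tablespace directory must exist again after every reboot (tmpfs starts empty), and shared_buffers still mediates reads, so "in-memory" here means the backing files live in RAM rather than PG bypassing its buffer manager.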