Subject: PostgreSQL as a local in-memory cache
We have a fairly unique need for a local, in-memory cache. This will
store data aggregated from other sources. Generating the data only
takes a few minutes, and it is updated often. There will be some
fairly expensive queries of arbitrary complexity run at a high
rate. We're looking for high concurrency and reasonable performance.
The entire data set is roughly 20 MB in size. We've tried Carbonado in
front of Sleepycat JE, only to discover that it chokes at fairly low
concurrency and that Carbonado's rule-based optimizer is wholly
insufficient for our needs. We've also tried Carbonado's Map
Repository, which suffers from the same problems.
I've since moved the backend database to a local PostgreSQL instance,
hoping to take advantage of PostgreSQL's superior performance at high
concurrency. Of course, at the default settings, it performs quite
poorly compared to the Map Repository and Sleepycat JE.
My question is: how can I configure the database to run as quickly as
possible if I don't care about data consistency or durability? That
is, the data is updated so often and can be reproduced so rapidly
that if the server crashes, or random particles from space mess up
memory, we'd just restart the machine and move on.
I've never configured PostgreSQL to work like this, and I thought maybe
someone here had some ideas on a good approach.
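For context, the durability-related knobs live in postgresql.conf. A minimal sketch of a throwaway-data configuration might look like the following (the values are illustrative placeholders, not tuned for this workload, and the trade-off is deliberate: a crash can leave the cluster corrupt, which is acceptable here since the data is regenerated anyway):

```ini
# postgresql.conf -- sketch for a cache whose contents are disposable
fsync = off                 # never force WAL writes to disk; unsafe for data you care about
synchronous_commit = off    # COMMIT returns before the WAL record reaches disk
full_page_writes = off      # acceptable only because crash corruption is tolerable
shared_buffers = 64MB       # illustrative; comfortably larger than the ~20 MB data set
```

With a data set this small, the goal is simply to keep everything resident in shared buffers and the OS cache and to stop PostgreSQL from waiting on disk at commit time.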