
Re: Shared memory and memory context question

From: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>
To: richard(at)playford(dot)net
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Shared memory and memory context question
Date: 2006-02-06 05:17:27
Message-ID: 18741.24.91.171.78.1139203047.squirrel@mail.mohawksoft.com
Lists: pgsql-hackers
> On Sun February 5 2006 16:16, Tom Lane wrote:
>> AFAICT the data structures you are worried about don't have any readily
>> predictable size, which means there is no good way to keep them in
>> shared memory --- we can't dynamically resize shared memory.  So I think
>> storing the rules in a table and loading into private memory at need is
>> really the only reasonable solution.  Storing them in a table has a lot
>> of other advantages anyway, mainly that you can manipulate them from
>> SQL.
>
> I have come to the conclusion that storing the rules and various other
> bits in tables is the best solution, although this will require a much
> more complex db structure than I had originally planned. Trying to
> allocate and free memory in shared memory is fairly straightforward,
> but likely to become incredibly messy.
>
> Seeing as some of the rules already include load-value-from-db-on-demand,
> it should be fairly straightforward to extend it to
> load-rule-from-db-on-demand.
>

I posted some source to a shared memory sort of thing to the group, as
well as to you, I believe.

For variables and values that change very infrequently, using the DB is
the right idea. PostgreSQL, like most databases, crumbles under a
rapidly changing workload. By "changing" I mean a lot of UPDATEs and
DELETEs; INSERTs are not so bad. PostgreSQL has fairly poor (IMHO) UPDATE
behaviour. Most transaction-aware databases do, but PostgreSQL seems
quite bad.

For example, if you are doing a scoreboard sort of thing for a website,
updating a single variable in a table 20 times a second will quickly make
that simple and normally fast update/query take a very long time. You have
to run VACUUM a whole lot.
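A toy sketch of why that happens (this is an illustration of MVCC-style
versioning in plain Python, not PostgreSQL internals): each UPDATE leaves
the old row version behind as a dead tuple, and only a VACUUM-like pass
reclaims them.

```python
# Toy model of MVCC-style updates: an UPDATE never overwrites a row in
# place; it appends a new version and marks the old one dead. Dead
# versions accumulate until a VACUUM-like pass reclaims them.

class ToyTable:
    def __init__(self):
        self.versions = []  # each entry: [value, is_live]

    def insert(self, value):
        self.versions.append([value, True])

    def update(self, value):
        # mark the current live version dead, then append a new live one
        for v in self.versions:
            if v[1]:
                v[1] = False
        self.versions.append([value, True])

    def dead_count(self):
        return sum(1 for v in self.versions if not v[1])

    def vacuum(self):
        # reclaim dead versions, keeping only live rows
        self.versions = [v for v in self.versions if v[1]]

scoreboard = ToyTable()
scoreboard.insert(0)
for i in range(1, 21):              # 20 updates, e.g. one second of traffic
    scoreboard.update(i)
print(scoreboard.dead_count())      # 20 dead versions for one logical row
scoreboard.vacuum()
print(scoreboard.dead_count())      # 0 after the vacuum pass
```

One logical row has produced 20 dead versions after a single second of
this workload, which is why the vacuum frequency has to track the update
rate.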

The next example is a session table for a website: you may have a few
hundred or a few thousand active session rows, but each row may get many
updates, and you may have tens of thousands of sessions which are
inactive. Unless you vacuum very frequently, you are doing a lot of disk
I/O for every session, because the query has to walk the table file past
the dead row versions to find a valid row.
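To put rough numbers on that, here is a toy heap model (again just an
illustration, not PostgreSQL code): a modest number of active sessions,
each updated repeatedly, quickly buries the live rows under dead tuples,
and scan I/O is proportional to dead plus live versions.

```python
# Toy model of a session-table heap under MVCC-style updates: every
# UPDATE leaves a dead tuple behind, so without frequent vacuums the
# heap is mostly dead rows that scans still have to step over.

heap = []          # row versions: [session_id, payload, is_live]
live_idx = {}      # session_id -> index of its current live version

def update_session(sid, payload):
    if sid in live_idx:
        heap[live_idx[sid]][2] = False   # predecessor becomes a dead tuple
    live_idx[sid] = len(heap)
    heap.append([sid, payload, True])

for round_no in range(20):               # each active session updated 20x
    for sid in range(500):
        update_session(sid, round_no)

live = sum(1 for row in heap if row[2])
dead = len(heap) - live
print(live, dead)                        # 500 live rows, 9500 dead tuples
```

With only 500 active sessions and 20 updates apiece, 95% of the heap is
dead; a sequential scan reads twenty times the data it logically needs
until a vacuum reclaims the space.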

A database is a BAD system for managing data like sessions on an active
website. It is a good tool for almost everything else, but if you are
implementing an eBay or a Yahoo, you'll swamp your DB quickly.

The issue with a shared memory system is that you don't get the data
durability that you do with disk storage.

