From: Marc Munro <marc(at)bloodnok(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org, simon(at)2ndquadrant(dot)com
Subject: Re: New feature proposal
Date: 2006-05-19 18:25:21
Message-ID: 1148063122.26818.34.camel@bloodnok.com
Lists: pgsql-hackers
On Fri, 2006-05-19 at 10:05 -0700, Josh Berkus wrote:
> Marc,
>
> > The add-in would not "know" how much had been allocated to it, but could
> > be told through its own config file. I envisage something like:
> >
> > in postgresql.conf
> >
> > # add_in_shmem = 0 # Amount of shared mem to set aside for add-ins
> > # in KBytes
> > add_in_shmem = 64
> >
> >
> > in veil.conf
> >
> > veil_shmem = 32 # Amount of shared memory we can use from
> > # the postgres add-ins shared memory pool
> >
> > I think this is better than add-ins simply stealing from, and contending
> > for, postgres shared memory which is the only real alternative right
> > now.
>
> Hmmmm ... what would happen if I did:
>
> add_in_shmem = 64
> veil_shmem = 128
>
> or even:
>
> add_in_shmem = 128
> veil_shmem = 64
> plperl_shmem = 64
> pljava_shmem = 64
>
If that happens, one of the add-ins will be sadly disappointed when it
tries to use its allocation, just as Veil would be today if it
attempted to allocate too large a chunk of shared memory.
My proposal makes it possible for properly configured add-ins to have a
guaranteed amount of shared memory available. It allows add-ins to be
well-behaved in their use of shared memory, and it prevents them from
exhausting postgres' own shared memory.
It doesn't prevent add-ins from over-allocating from the add-in memory
pool, nor do I think it can or should do that.
--
Marc