Re: tsearch profiling - czech environment - take 55MB

From: Alvaro Herrera <alvherre(at)commandprompt(dot)com>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL Hackers <pgsql-hackers(at)postgresql(dot)org>, Teodor Sigaev <teodor(at)sigaev(dot)ru>
Subject: Re: tsearch profiling - czech environment - take 55MB
Date: 2010-03-11 19:29:59
Message-ID: 20100311192959.GA3512@alvh.no-ip.org
Lists: pgsql-hackers

Pavel Stehule wrote:
> 2010/3/11 Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>:
> > Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> writes:
> >> The problem is the very large number of small allocations - there are 853215 nodes.
> >> I replaced the palloc0 inside mkSPnode with balloc
> >
> > This goes back to the idea we've discussed from time to time of having a
> > variant memory context type in which pfree() is a no-op and we dispense
> > with all the per-chunk overhead.  I guess that if there really isn't any
> > overhead there then pfree/repalloc would actually crash :-( but for the
> > particular case of dictionaries that would probably be OK because
> > there's so little code that touches them.
>
> it makes sense. I was surprised how much memory is necessary :(. Smarter
> allocation saves 50% - 2.5G for 100 users - which is important, but I
> think these data have to be shared. I hoped preloading would help, but
> it is problematic - the data are not available at shared preload time,
> and the allocated size is too big.

Could it be mmapped and shared that way?
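If the compiled dictionary were laid out as one contiguous, offset-based
(pointer-free) region and written to a file once, sharing it could look
roughly like the sketch below. The function name and the file layout are
assumptions for illustration, not existing tsearch code.

/*
 * Map a precompiled dictionary file read-only.  With MAP_SHARED and
 * PROT_READ, every backend mapping the same file shares the same
 * physical pages, so 100 sessions pay for the 55MB only once.
 */
#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static const void *
map_dictionary(const char *path, size_t *len)
{
    int         fd;
    struct stat st;
    void       *p;

    fd = open(path, O_RDONLY);
    if (fd < 0)
        return NULL;
    if (fstat(fd, &st) < 0)
    {
        close(fd);
        return NULL;
    }

    p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd);                  /* the mapping survives the close */

    if (p == MAP_FAILED)
        return NULL;
    *len = st.st_size;
    return p;
}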

--
Alvaro Herrera http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
