Re: profiling connection overhead

From: Andres Freund <andres(at)anarazel(dot)de>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Bruce Momjian <bruce(at)momjian(dot)us>, pgsql-hackers(at)postgresql(dot)org, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Subject: Re: profiling connection overhead
Date: 2010-11-29 17:24:54
Message-ID: 201011291824.54762.andres@anarazel.de
Lists: pgsql-hackers

On Monday 29 November 2010 17:57:51 Robert Haas wrote:
> On Sun, Nov 28, 2010 at 11:51 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> > Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> >> Yeah, very true. What's a bit frustrating about the whole thing is
> >> that we spend a lot of time pulling data into the caches that's
> >> basically static and never likely to change anywhere, ever.
> >
> > True. I wonder if we could do something like the relcache init file
> > for the catcaches.
>
> Maybe. It's hard to know exactly what to pull in, though, nor is it
> clear to me how much it would really save. You've got to keep the
> thing up to date somehow, too.
>
> I finally got around to doing some testing of
> page-faults-versus-actual-memory-initialization, using the attached
> test program, compiled with warnings, but without optimization.
> Typical results on MacOS X:
>
> first run: 297299
> second run: 99653
>
> And on Fedora 12 (2.6.32.23-170.fc12.x86_64):
>
> first run: 509309
> second run: 114721
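
(The attached test program isn't reproduced here, but a minimal sketch of
that sort of test, timing a first and a second memset() over the same large
malloc'd block, might look like this; the 512MB size and the gettimeofday()
timing are assumptions, not details of the actual attachment:)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/time.h>

static long
elapsed_us(struct timeval *a, struct timeval *b)
{
	return (b->tv_sec - a->tv_sec) * 1000000L + (b->tv_usec - a->tv_usec);
}

int
main(void)
{
	size_t		s = 512 * 1024 * 1024;
	char	   *p = malloc(s);
	struct timeval t0, t1, t2;

	if (p == NULL)
		return 1;
	gettimeofday(&t0, NULL);
	memset(p, 0, s);			/* first pass: faults every page in */
	gettimeofday(&t1, NULL);
	memset(p, 0, s);			/* second pass: pages already resident */
	gettimeofday(&t2, NULL);
	printf("first run: %ld\n", elapsed_us(&t0, &t1));
	printf("second run: %ld\n", elapsed_us(&t1, &t2));
	free(p);
	return 0;
}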
Hm. A quick test shows that it's quite a bit faster if you allocate memory
with:

#include <sys/mman.h>

size_t s = 512*1024*1024;
/* MAP_POPULATE (Linux-only) faults all pages in at mmap() time */
char *bss = mmap(NULL, s, PROT_READ|PROT_WRITE,
                 MAP_PRIVATE|MAP_ANONYMOUS|MAP_POPULATE, -1, 0);
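
A self-contained sketch of the timing with a pre-populated mapping (again an
assumed harness, not the exact test that was run; MAP_POPULATE is
Linux-specific, so this won't build on MacOS X):

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/time.h>

int
main(void)
{
	size_t		s = 512 * 1024 * 1024;
	struct timeval t0, t1;
	/* all pages are faulted in during the mmap() call itself */
	char	   *bss = mmap(NULL, s, PROT_READ | PROT_WRITE,
						   MAP_PRIVATE | MAP_ANONYMOUS | MAP_POPULATE, -1, 0);

	if (bss == MAP_FAILED)
		return 1;
	gettimeofday(&t0, NULL);
	memset(bss, 0, s);			/* no per-page fault cost left to pay here */
	gettimeofday(&t1, NULL);
	printf("first run: %ld\n",
		   (t1.tv_sec - t0.tv_sec) * 1000000L + (t1.tv_usec - t0.tv_usec));
	munmap(bss, s);
	return 0;
}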

Andres
