On Thu, Dec 20, 2012 at 8:55 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> On 18 December 2012 22:10, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>> Well that would be nice, but the problem is that I see no way to
>> implement it. If, with a unified parser, the parser is 14% of our
>> source code, then splitting it in two will probably crank that number
>> up well over 20%, because there will be duplication between the two.
>> That seems double-plus un-good.
> I don't think the size of the parser binary is that relevant. What is
> relevant is how much of that is regularly accessed.
> Increasing parser cache misses for DDL and increasing size of binary
> overall are acceptable costs if we are able to swap out the unneeded
> areas and significantly reduce the cache misses on the well travelled
> portions of the parser.
I generally agree. We don't want to bloat the size of the parser with
wild abandon, but, yeah, if we can reduce the cache misses on the
well-travelled portions, that ought to help. My previous hacky
attempt to quantify the potential benefit in this area was:
On my machine there seemed to be a small but consistent win; on a very
old box Jeff Janes tried, it didn't seem like there was any benefit at
all. Somehow, I have a feeling we're missing a trick here.
The Enterprise PostgreSQL Company