
Re: [HACKERS] What I'm working on

From: The Hermit Hacker <scrappy(at)hub(dot)org>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: Stupor Genius <stuporg(at)erols(dot)com>, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] What I'm working on
Date: 1998-08-24 04:15:34
Message-ID: Pine.BSF.4.02.9808240112370.295-100000@thelab.hub.org
Lists: pgsql-hackers
On Sun, 23 Aug 1998, Bruce Momjian wrote:

> > > 	There *has* to be some overhead, performance-wise, in the database
> > > having to keep track of row-spanning, and reducing that, IMHO, is
> > > what I see being able to change the blocksize as accomplishing...
> > 
> > If both features were present, I would say to increase the blocksize of
> > the db to the max possible.  This would reduce the number of tuples that
> > are spanned.  Each span would require another tuple fetch, so that could
> > get expensive with each successive span or if every tuple spanned.
> > 
> > But if we stick with 8k blocksizes, people with tuples between 8 and 16k
> > would get absolutely killed performance-wise.  Would make sense for them
> > to go to 16k blocks where the reading of the extra bytes per block would
> > be minimal, if anything, compared to the fetching/processing of the next
> > span(s) to assemble the whole tuple.
> > 
> > In summary, the capability to span would be the next resort after someone
> > has maxed out their blocksize.  Each OS would have a different blocksize
> > max...an AIX driver breaks when going past 16k...don't know about others.
> > 
> > I'd say make the blocksize a run-time variable and then do the spanning.
> 
> If we could query to find the file system block size at runtime in a
> portable way, that would help us pick the best block size, no?
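
A runtime query along those lines might look like the following sketch. This assumes a POSIX system where statvfs is available (it is not universal, so a portable build would still need a configure-time fallback), and note that f_bsize is the filesystem's *preferred* I/O size, not a guaranteed physical block size:

```python
# Sketch: ask the OS for the preferred I/O block size of the
# filesystem holding a given path.  POSIX-only; a genuinely
# portable solution needs per-platform fallbacks.
import os

def fs_block_size(path):
    """Return the preferred I/O block size for the filesystem holding path."""
    return os.statvfs(path).f_bsize

print(fs_block_size("/"))
```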

	That doesn't sound too safe to me...what if I run out of disk
space on file system A (16k blocksize) and move one of the databases to
file system B (8k blocksize)?  If it auto-detects at run time, how is that
going to affect the tables?  Now my tuple size just dropped to 8k, but the
tables were using 16k tuples...

	Setting this should, I think, be a conscious decision on the
admin's part, unless, of course, there is nothing in the tables themselves
that is "hard coded" at 8k tuples, and it's purely in the server?  If it
is just in the server, then this would be cool, because then I wouldn't have
to dump/reload if I moved to a better tuned file system...just move the
files :)
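
The spanning cost discussed upthread comes down to simple arithmetic: assembling a tuple of size S on blocksize B takes ceil(S/B) block fetches, which is why tuples between 8k and 16k are the worst case for an 8k blocksize. A quick sketch (function name is just for illustration):

```python
# Sketch of the fetch-count arithmetic behind the blocksize discussion:
# a row-spanning tuple of tuple_size bytes on block_size-byte blocks
# needs ceil(tuple_size / block_size) block fetches to reassemble.
import math

def blocks_needed(tuple_size, block_size):
    """Number of block fetches to assemble one tuple when rows may span blocks."""
    return math.ceil(tuple_size / block_size)

# A 12k tuple on 8k blocks needs 2 fetches; on 16k blocks, just 1 --
# doubling the blocksize halves the fetches for tuples in that range.
print(blocks_needed(12 * 1024, 8 * 1024))   # 2
print(blocks_needed(12 * 1024, 16 * 1024))  # 1
```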

Marc G. Fournier                                
Systems Administrator @ hub.org 
primary: scrappy(at)hub(dot)org           secondary: scrappy(at){freebsd|postgresql}.org 

