From: darrenk(at)insightdist(dot)com (Darren King)
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] Re: [QUESTIONS] Business cases
Date: 1998-01-17 23:44:54
Message-ID: 9801172344.AA33848@ceodev
Lists: pgsql-hackers
> > > Also, how are people handling tables with lots of rows? The 8k tuple
> > > size can waste a lot of space. I need to be able to handle a 2 million
> > > row table, which will eat up 16GB, plus more for indexes.
> >
> > This one is improved upon in v6.3, where at compile time you can stipulate
> > the tuple size. We are looking into making this an 'initdb' option instead,
> > so that you can have the same binary for multiple "servers", but any database
> > created under a particular server will be constrained by that tuple size.
>
> That might help a bit, but some tables may have big rows and some not.
> For example, my 2 million row table only requires two date
> fields and 7 integer fields. That isn't very much data. However, I'd
> like to be able to join against another table with much larger rows.
Two dates and 7 integers would make a tuple of 90-some bytes, call it 100 max.
So you would probably get ~80 tuples per 8k page, and 25000 pages would use a
file of about 200 MB.
The block size parameter will be database-specific, not table-specific, and
since you can't join tables from different _databases_, the second issue is moot.
If I could get around to the tablespace concept again, then maybe a different
block size per tablespace would be useful. But, that is putting the cart
a couple of light-years ahead of the proverbial horse...
Darren aka darrenk(at)insightdist(dot)com