Re: Large Tables(>1 Gb)

From: Ron Peterson <rpeterson(at)yellowbank(dot)com>
To: Fred_Zellinger(at)seagate(dot)com
Cc: pgsql-general(at)hub(dot)org
Subject: Re: Large Tables(>1 Gb)
Date: 2000-06-30 19:25:13
Message-ID: 395CF419.EDFDEB12@yellowbank.com
Lists: pgsql-general

Fred_Zellinger(at)seagate(dot)com wrote:

> However, there is still something bugging me. Even though many people
> have related stories of 7.5 GB+ databases, I still can't make that little
> voice in me quit saying "breaking things into smaller chunks means
> faster work".
>
> There must exist a relationship between file sizes and DB performance.

If your data doesn't completely fit into main memory, at least some of
it will have to live on disk. Your question is: should that on-disk
portion be split into more than one file to speed up performance?
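
(As an aside, the storage manager already does some of that chunking for
you: once a table's heap file reaches 1 GB, PostgreSQL continues it in
<file>.1, <file>.2, and so on. The little Python sketch below, with a
made-up path you'd have to adjust for your own data directory and
version, just lists those segments and their sizes:)

import glob
import os

def table_segments(base_path):
    # PostgreSQL caps each heap file at 1 GB and continues the table in
    # base_path.1, base_path.2, ..., so a multi-gigabyte table is already
    # split into smaller files by the storage manager.
    paths = [base_path] + glob.glob(base_path + ".[0-9]*")
    def segno(p):
        tail = p[len(base_path):]          # "" for the first segment
        return int(tail[1:]) if tail else 0
    return [(p, os.path.getsize(p))
            for p in sorted(paths, key=segno) if os.path.isfile(p)]

# Example path only; adjust to wherever your table's files actually live.
for path, size in table_segments("/usr/local/pgsql/data/base/mydb/mytable"):
    print(path, size)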

I won't try to be precise here. There are good textbooks on the subject
if you're interested. I've just been reading one, actually, but it's at
home and I don't remember the name :( Knuth would of course be good
reading as well.

Maybe think of it this way: what's the difference between one file and
two, really? Either way, you've basically just got a bunch of bits on a
block device. By keeping your data in a single file, you have more
control over the data layout, so you can organize it in the manner most
appropriate to your access patterns.
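
If you want to put numbers against that little voice, a rough test like
the one below reads the same amount of data laid out as one big file and
as two smaller ones. The file names are throwaway, and the OS cache will
blur the results unless you flush it between runs, but I'd expect the
sequential scans to come out about the same either way.

import os
import time

CHUNK = 8192                     # page-sized reads
SIZE = 256 * 1024 * 1024         # 256 MB of test data per layout

def fill(path, nbytes):
    # Write nbytes of zeroes in CHUNK-sized blocks.
    block = b"\0" * CHUNK
    with open(path, "wb") as f:
        for _ in range(nbytes // CHUNK):
            f.write(block)

def scan(paths):
    # Sequentially read every byte, the way a table scan would.
    start = time.time()
    for p in paths:
        with open(p, "rb") as f:
            while f.read(CHUNK):
                pass
    return time.time() - start

fill("one.dat", SIZE)
fill("two_a.dat", SIZE // 2)
fill("two_b.dat", SIZE // 2)

print("single file :", scan(["one.dat"]))
print("two files   :", scan(["two_a.dat", "two_b.dat"]))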

________________________
Ron Peterson
rpeterson(at)yellowbank(dot)com
