On Mon, 27 Nov 2006 06:20:50 +1300, "Andrej Ricnik-Bay" wrote:
>On 11/26/06, Greg Quinn <greg(at)officium(dot)co(dot)za> wrote:
>> Every time a user clicks on a mail folder, it pulls their message headers
>> from the headers table. Every time a user clicks on a message, it needs to
>> pull The message body etc. from the message source table.
>> Now as you can imagine, on the server side, if you have 100 users, and all
>> their message source sitting in one big table, it can slow down read
>> operations because of all the disk i/o.
>OK, I have a problem with this one intellectually. As far as I'm concerned,
>I'd expect higher latency and more head movement if the heads have to
>dash back and forth over a larger area of disk to get to individual
>tables than when manipulating a single chunk of data.
>> Previously, I was using MySQL and placing all the users data into separate
>> tables gave me a huge performance increase.
>> I'm not sure if PostGreSQL will handle this better. But my main concern over
>> this matter is the problem with Disk I/O on one big table.
>Me neither - I wouldn't think it makes a difference for the better, but
>to be sure I'd try to benchmark it, with the same data volume once in
>one big table, and once in disparate tables (and see what indexing does
>in both cases).
Look out for file fragmentation too!
If the volume where you store the data does not have a *large* free
area, any big file will be fragmented and the head will start jumping
around. Before benchmarking I would shut down the server and defragment
the disk.
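As a starting point for the benchmark Andrej suggests, here is a minimal sketch of the "one big table plus an index" side of the comparison. It uses Python's sqlite3 module purely as a self-contained stand-in for PostgreSQL, and the `messages` table and its columns are made up for illustration; the point is only that a per-user query over a shared table can use an index on `user_id` instead of scanning everyone's rows.

```python
import sqlite3
import time

# Hypothetical schema, loosely modelled on the thread: one shared
# "messages" table holding every user's mail.  sqlite3 stands in for
# PostgreSQL here only so the sketch is runnable on its own.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE messages (user_id INTEGER, folder TEXT, body TEXT)")
cur.executemany(
    "INSERT INTO messages VALUES (?, 'INBOX', 'body')",
    [(u,) for u in range(100) for _ in range(500)],  # 100 users x 500 msgs
)
conn.commit()

def fetch_user(uid):
    """Count one user's messages and time the query."""
    t0 = time.perf_counter()
    n = cur.execute(
        "SELECT count(*) FROM messages WHERE user_id = ?", (uid,)
    ).fetchone()[0]
    return n, time.perf_counter() - t0

# Without an index, each per-user query has to read the whole table.
n_scan, t_scan = fetch_user(42)

# With an index on user_id, the same query touches only that user's rows.
cur.execute("CREATE INDEX messages_user_idx ON messages (user_id)")
n_idx, t_idx = fetch_user(42)

print(n_scan, n_idx)  # both 500: same result, different access path
```

For a real comparison you would run the equivalent against PostgreSQL itself with a production-sized data volume, once in one table and once in per-user tables, since the planner, page size, and caching behaviour all differ from this toy.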