On 11/26/06, Greg Quinn <greg(at)officium(dot)co(dot)za> wrote:
> Every time a user clicks on a mail folder, it pulls their message headers
> from the headers table. Every time a user clicks on a message, it needs to
> pull the message body etc. from the message source table.
> Now as you can imagine, on the server side, if you have 100 users, and all
> their message source sitting in one big table, it can slow down read
> operations because of all the disk i/o.
OK, I have a problem with this one intellectually. As far as I'm concerned,
I'd expect higher latency and more head movement if the heads have to dash
back and forth over a larger area of disk to reach many separate tables
than when working within a single one.
> Previously, I was using MySQL and placing all the users data into separate
> tables gave me a huge performance increase.
> I'm not sure if PostgreSQL will handle this better. But my main concern over
> this matter is the problem with disk I/O on one big table.
Me neither - I wouldn't think it makes a difference for the better, but to
be sure I'd try to benchmark it, with the same data volume loaded once into
one big table and once into disparate tables (and see what indexing does
in both cases).
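A minimal sketch of such a benchmark (all table and column names here are hypothetical, not from Greg's schema): create both layouts, load the same rows into each, then compare the plans and timings with EXPLAIN ANALYZE.

```sql
-- Layout A: one big table holding every user's messages,
-- with a composite key so per-user lookups hit an index.
CREATE TABLE message_source (
    user_id  integer NOT NULL,
    msg_id   integer NOT NULL,
    body     text,
    PRIMARY KEY (user_id, msg_id)
);

-- Layout B: a separate table per user (user 42 shown).
CREATE TABLE message_source_42 (
    msg_id  integer PRIMARY KEY,
    body    text
);

-- After loading identical data into both layouts,
-- compare the plans and actual run times:
EXPLAIN ANALYZE
SELECT body FROM message_source
 WHERE user_id = 42 AND msg_id = 7;

EXPLAIN ANALYZE
SELECT body FROM message_source_42
 WHERE msg_id = 7;
```

The point of the composite primary key is that the big-table lookup can be answered by a single index probe, so the comparison shows whether splitting into per-user tables actually buys anything once indexing is in place.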