On Sun, 26 Nov 2006, Greg Quinn wrote:
> Well, I am writing an email client where data will be stored on both the
> client and server. I have a table that stores all the message headers, and a
> table that stores the entire source for every message (including encoding
> for attachments etc.)
> Every time a user clicks on a mail folder, it pulls their message headers
> from the headers table. Every time a user clicks on a message, it needs to
> pull the message body etc. from the message source table.
> Now as you can imagine, on the server side, if you have 100 users, and all
> their message source sitting in one big table, it can slow down read
> operations because of all the disk i/o.
> Previously, I was using MySQL, and placing each user's data into a separate
> table gave me a huge performance increase.
> I'm not sure if PostgreSQL will handle this better. But my main concern in
> this matter is the disk I/O problem with one big table.
That makes sense, although table partitioning might be a good fit for what
you're trying to do. I don't have enough personal experience with it to
give you a good sense of all of the restrictions, but you might want to
peruse the docs.
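
To give a rough idea of what the docs describe: in the PostgreSQL releases current as of this thread (8.x), partitioning is done with table inheritance plus CHECK constraints, and the planner can skip irrelevant partitions when constraint_exclusion is enabled. The sketch below is illustrative only; the table and column names are my invention, not from the original poster's schema.

```sql
-- Parent table holds no rows itself; children each hold a slice of users.
CREATE TABLE message_source (
    user_id    integer NOT NULL,
    message_id integer NOT NULL,
    body       text
);

-- The CHECK constraints tell the planner which partition can
-- possibly match a given user_id.
CREATE TABLE message_source_u0 (
    CHECK (user_id >= 0 AND user_id < 50)
) INHERITS (message_source);

CREATE TABLE message_source_u1 (
    CHECK (user_id >= 50 AND user_id < 100)
) INHERITS (message_source);

-- With constraint exclusion on, a query through the parent
-- only scans the partition whose CHECK constraint can match.
SET constraint_exclusion = on;
SELECT body
FROM message_source
WHERE user_id = 7 AND message_id = 42;
```

You'd also need rules or triggers to route INSERTs to the right child table, and indexes on each child; the partitioning chapter of the manual covers the full set of caveats.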