From: | "Shoaib Mir" <shoaibmir(at)gmail(dot)com> |
---|---|
To: | "Johann Spies" <jspies(at)sun(dot)ac(dot)za>, pgsql-admin(at)postgresql(dot)org |
Subject: | Re: Handling large volumes of data |
Date: | 2008-04-08 09:49:04 |
Message-ID: | bf54be870804080249lf3252b8xb1547ce46a3258bc@mail.gmail.com |
Lists: | pgsql-admin |
On Tue, Apr 8, 2008 at 7:42 PM, Johann Spies <jspies(at)sun(dot)ac(dot)za> wrote:
> Apparently the best approach is not to have very large tables. I am
> thinking of making (as far as the firewall is concerned) a different
> table for each day and then drop the older tables as necessary.
>
> Any advice on how to best handle this kind of setup will be
> appreciated.
>
>
Table partitioning is what you need -->
http://www.postgresql.org/docs/current/static/ddl-partitioning.html

I would also advise distributing your tables across different disks using
tablespaces. Tweak the shared_buffers and work_mem settings as well.
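As a rough illustration of the approach in that chapter, here is a minimal sketch of inheritance-based daily partitioning (the mechanism the linked docs describe), using a hypothetical firewall-log table `fwlog` and illustrative column names, a tablespace name, and dates:

```sql
-- Hypothetical parent table for firewall logs; columns are illustrative.
CREATE TABLE fwlog (
    logged_at  timestamptz NOT NULL,
    src_ip     inet,
    dst_ip     inet,
    action     text
);

-- One child table per day. The CHECK constraint lets the planner
-- (with constraint_exclusion enabled) skip days a query cannot match.
CREATE TABLE fwlog_20080408 (
    CHECK (logged_at >= DATE '2008-04-08'
       AND logged_at <  DATE '2008-04-09')
) INHERITS (fwlog);

-- A child can be placed on its own disk via a tablespace, e.g.:
-- CREATE TABLE fwlog_20080409 ( ... ) INHERITS (fwlog) TABLESPACE fast_disk;

-- Dropping an expired day is then cheap DDL rather than a bulk DELETE:
DROP TABLE fwlog_20080401;
```

Queries against the parent `fwlog` see all children, and dropping an old day's table avoids the vacuum cost of deleting millions of rows. Remember to enable `constraint_exclusion` in postgresql.conf so the CHECK constraints actually prune partitions.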
--
Shoaib Mir
Fujitsu Australia Software Technology
shoaibm(at)fast(dot)fujitsu(dot)com(dot)au
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Tino Schwarze | 2008-04-08 09:55:00 | Re: Handling large volumes of data |
| Previous Message | Johann Spies | 2008-04-08 09:42:34 | Handling large volumes of data |