| From: | Scott Ribe <scott_ribe(at)elevated-dev(dot)com> |
|---|---|
| To: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
| Cc: | Kido Kouassi <jjkido(at)gmail(dot)com>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org> |
| Subject: | Re: Read performance on Large Table |
| Date: | 2015-05-21 15:18:30 |
| Message-ID: | 580E17C5-363E-4D24-B381-80A1F4C83C81@elevated-dev.com |
| Lists: | pgsql-admin |
On May 21, 2015, at 9:05 AM, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> wrote:
>
> I've done a lot of partitioning of big data sets in postgresql and if
> there's some common field, like date, that makes sense to partition
> on, it can be a huge win.
Indeed. I recently did it on exactly this kind of thing, a log of activity. And the common queries weren’t slow at all.
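Concretely, on 9.x that meant the usual inheritance-plus-CHECK-constraint arrangement. A minimal sketch, with made-up table, column, and database names rather than the real schema:

```sh
# Illustrative only -- table, column, and database names are made up.
# On 9.x, "partitioning" means a parent table plus child tables carrying
# CHECK constraints, so constraint_exclusion lets the planner skip
# children that a query's date range cannot touch.
psql mydb <<'SQL'
CREATE TABLE activity_log (
    id        bigserial,
    logged_at timestamptz NOT NULL,
    detail    text
);

-- One child per year; the CHECK constraint is what makes pruning possible.
CREATE TABLE activity_log_2014 (
    CHECK (logged_at >= '2014-01-01' AND logged_at < '2015-01-01')
) INHERITS (activity_log);

CREATE INDEX ON activity_log_2014 (logged_at);
SQL
```

With queries constrained on the timestamp column, only the relevant children get scanned, which is why the common queries stay fast even as history piles up.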
But if I wanted to upgrade via dump/restore with minimal downtime, rather than set up Slony or try my luck with pg_upgrade, I could dump the historical partitions, drop those tables, dump/restore the remaining (now much smaller) database, and then reload the historical partitions at my convenience. (In this particular db, history is unusually huge compared to the live data.)
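In outline, that sequence would look something like this (again, hypothetical names for the databases, host, and partitions):

```sh
# Illustrative sketch -- database, host, and partition names are made up.
# Dump the historical partitions separately, drop them so the main
# dump/restore only moves the live data, then reload the history into
# the new cluster whenever convenient.
pg_dump -Fc -t activity_log_2013 -t activity_log_2014 olddb > history.dump
psql olddb -c 'DROP TABLE activity_log_2013, activity_log_2014'
pg_dump olddb | psql -h newhost newdb          # the only step that has to fit in the downtime window
pg_restore -h newhost -d newdb history.dump    # history can follow at leisure
```

Since each child table carries its own rows, dumping and dropping the historical children doesn't touch the parent or the live partitions, and reloading them later just needs the parent table to already exist in the new cluster.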
--
Scott Ribe
scott_ribe(at)elevated-dev(dot)com
http://www.elevated-dev.com/
https://www.linkedin.com/in/scottribe/
(303) 722-0567 voice
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Scott Marlowe | 2015-05-21 15:21:59 | Re: Read performance on Large Table |
| Previous Message | Scott Ribe | 2015-05-21 15:09:26 | Re: Read performance on Large Table |