From: | Kenneth Marshall <ktm(at)rice(dot)edu> |
---|---|
To: | pgsql-general(at)lists(dot)postgresql(dot)org |
Cc: | matt(at)conundrum(dot)com |
Subject: | Re: Rearchitecting for storage |
Date: | 2019-07-18 17:34:24 |
Message-ID: | 20190718173424.GB25488@aart.rice.edu |
Lists: | pgsql-general |
Hi Matt,
On Thu, Jul 18, 2019 at 09:44:04AM -0400, Matthew Pounsett wrote:
> I've recently inherited a database that is dangerously close to outgrowing
> the available storage on its existing hardware. I'm looking for (pointers
> to) advice on scaling the storage in a financially constrained
> not-for-profit.
Have you considered using VDO compression for the tables that are less
update-intensive? With compression alone you can get almost a 4X size
reduction. For a database, I would forgo the deduplication function.
You can then use a non-compressed tablespace for the heavier-I/O tables
and indexes.
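A rough sketch of that setup (device name, volume name, sizes, mount
point, and table name below are all hypothetical placeholders; assumes
the RHEL-style vdo tooling, and these commands need root plus a running
cluster, so treat this as a configuration sketch rather than something
to paste verbatim):

```
# Create a VDO volume with compression enabled and dedup disabled
vdo create --name=pgvdo --device=/dev/sdb \
    --vdoLogicalSize=40T \
    --compression=enabled --deduplication=disabled

# Filesystem and mount for the compressed tablespace
mkfs.xfs -K /dev/mapper/pgvdo
mkdir -p /srv/pg_compressed
mount -o discard /dev/mapper/pgvdo /srv/pg_compressed
chown postgres:postgres /srv/pg_compressed

# In psql: put the low-churn tables on the compressed volume
#   CREATE TABLESPACE compressed LOCATION '/srv/pg_compressed';
#   ALTER TABLE big_archive_table SET TABLESPACE compressed;
```

Note that the logical size is deliberately larger than the physical
device, since VDO presents the post-compression capacity; you have to
monitor actual physical usage with `vdostats` to avoid running the
backing device out of space.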
>
> One of my anticipated requirements for any replacement we design is that I
> should be able to do upgrades of Postgres for up to five years without
> needing major upgrades to the hardware. My understanding of the standard
> upgrade process is that this requires that the data directory be smaller
> than the free storage (so that there is room to hold two copies of the data
> directory simultaneously). I haven't got detailed growth statistics yet,
> but given that the DB has grown to 23TB in 5 years, I should assume that it
> could double in the next five years, requiring 100TB of available storage
> to be able to do updates.
>
The --link option of pg_upgrade does not require 2X the space, since it
creates hard links instead of copying the data files into the new
cluster. The trade-off is that once you start the new cluster, the old
cluster can no longer be safely started.
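A minimal invocation sketch (the binary and data directory paths and the
version numbers are placeholders for your layout; run it first with
--check, which only reports problems without changing anything):

```
pg_upgrade --check \
    --old-bindir=/usr/lib/postgresql/11/bin \
    --new-bindir=/usr/lib/postgresql/12/bin \
    --old-datadir=/var/lib/postgresql/11/main \
    --new-datadir=/var/lib/postgresql/12/main \
    --link

# If the check passes, repeat without --check to do the upgrade.
# With --link the data files are hard-linked, so the only extra
# space needed is for the new cluster's catalogs, not the 23TB
# of table data.
```

Both clusters must be on the same filesystem for hard links to work,
which is worth keeping in mind when laying out the new storage.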
Regards,
Ken