From:       Jan Wieck <JanWieck(at)Yahoo(dot)com>
To:         Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
Cc:         postgres list <pgsql-general(at)postgresql(dot)org>
Subject:    Re: Backing up 16TB of data (was Re: > 16TB worth of
Date:       2003-04-25 22:20:35
Message-ID: 3EA9B4B3.38158E6D@Yahoo.com
Lists:      pgsql-general
Ron Johnson wrote:
>
> On Mon, 2003-04-21 at 13:23, Jeremiah Jahn wrote:
> > I have a system that will store about 2TB+ of images per year in a PG
> > database. Linux unfortunately has the 16TB limit for 32bit systems. Not
> > really sure what should be done here. Would it be better to not store
> > the images as BLOBs, and instead come up with some complicated way to
> > only store their locations in the database, or is there some way to
> > have postgres handle this? What are other people out there doing about
> > this sort of thing?
>
> Now that the hard disk and file system issues have been hashed around,
> have you thought about how you are going to back up this much data?
Legato showed a couple of years ago that Networker can back up more
than a terabyte per hour. If I recall correctly, they used an RS/6000
with over 100 disks and 36 DLT 7000 drives on 16 controllers ... not
your average backup solution, but it is possible. I doubt, though, that
one could configure something like this with x86 hardware.
Jan
--
#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#================================================== JanWieck(at)Yahoo(dot)com #