Re: Postgres & large objects

From: Bradley Kieser <brad@kieser.net>
To: Matt Clark <matt@ymogen.net>
Cc: pgsql-admin@postgresql.org, emberson@phc.net
Subject: Re: Postgres & large objects
Date: 2004-05-06 10:03:25
Message-ID: 409A0D6D.3040200@kieser.net
Lists: pgsql-admin

Matt,

Not really the answer that you are looking for, and you may already do
this, but if it's a disk space or performance issue then I would suggest
moving the PGDATA dir (or the location, if you are using locations) onto
a RAID5 disk array. That means you can ramp up the space and get the
read performance gains of RAID5, not to mention the safety of an array
that survives a single disk failure!
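
For what it's worth, a rough sketch of that move (the paths, the RAID
mount point and the pg_ctl calls are assumptions - adjust them for your
installation, and do the whole thing with the postmaster stopped):

import os
import shutil
import subprocess

# Hypothetical paths -- adjust for your installation.
OLD_PGDATA = "/var/lib/pgsql/data"   # current data directory
NEW_PGDATA = "/raid/pgdata/data"     # directory on the RAID5 mount

# 1. Stop the postmaster first; copying a live data directory is unsafe.
subprocess.run(["pg_ctl", "stop", "-D", OLD_PGDATA], check=True)

# 2. Copy the whole data directory (run this as the postgres user so
#    ownership and permissions come out right).
shutil.copytree(OLD_PGDATA, NEW_PGDATA)

# 3. Keep the old copy as a backup and point the old path at the array
#    via a symlink, so nothing else in your setup has to change.
shutil.move(OLD_PGDATA, OLD_PGDATA + ".bak")
os.symlink(NEW_PGDATA, OLD_PGDATA)

# 4. Restart and verify before removing the .bak copy.
subprocess.run(["pg_ctl", "start", "-D", OLD_PGDATA], check=True)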

Brad

Matt Clark wrote:

> Hello all,
>
> It seems I'm trying to solve the same problem as Richard Emberson had
> a while ago (thread here:
> http://archives.postgresql.org/pgsql-general/2002-03/msg01199.php).
>
> Essentially I am storing a large number of large objects in the DB
> (potentially tens or hundreds of gigs), and would like the
> pg_largeobject table to be stored on a separate FS. But of course
> it's not just one file to symlink and then forget about, it's a number
> of files that get created.
>
> So, has anyone come up with a way to get the files for a table created
> in a particular place? I know that tablespaces aren't done yet, but
> a kludge will do (or a patch, come to that - we're running redhat's
> 7.2.3 RPMs, but could switch if necessary). I had thought that if the
> filenames were predictable it might be possible to precreate a bunch
> of zero-length files and symlink them in advance...
>
> Cheers
>
> Matt
>
>
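
On the question in Matt's message above about predictable filenames:
each table's file on disk is named after its pg_class.relfilenode and
lives under base/<database OID>/ in the data directory, so the existing
pg_largeobject file can be located and symlinked by hand. A rough
sketch of that lookup (the paths, the database name and the psycopg2
client are assumptions, and note the caveats in the comments):

import os
import shutil
import psycopg2   # any client that can run two SELECTs will do

PGDATA = "/var/lib/pgsql/data"        # hypothetical data directory
TARGET = "/bigdisk/pg_largeobject"    # hypothetical separate filesystem

# Find the on-disk file: it lives at base/<database oid>/<relfilenode>.
conn = psycopg2.connect(dbname="mydb")            # hypothetical dbname
cur = conn.cursor()
cur.execute("SELECT oid FROM pg_database WHERE datname = current_database()")
(db_oid,) = cur.fetchone()
cur.execute("SELECT relfilenode FROM pg_class WHERE relname = 'pg_largeobject'")
(relfilenode,) = cur.fetchone()
conn.close()

src = os.path.join(PGDATA, "base", str(db_oid), str(relfilenode))
dest = os.path.join(TARGET, str(relfilenode))

# With the postmaster stopped, move the file and symlink it back.
# Caveats: tables over 1GB are split into segments (<relfilenode>.1, .2, ...)
# which need the same treatment, and operations that rewrite the table
# (e.g. CLUSTER) can assign a new relfilenode and silently bypass the symlink.
shutil.move(src, dest)
os.symlink(dest, src)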
