Re: storing binary files / memory limit

From: Andrew McMillan <andrew(at)catalyst(dot)net(dot)nz>
To: Tomas Vondra <tv(at)fuzzy(dot)cz>
Cc: pgsql-php(at)postgresql(dot)org
Subject: Re: storing binary files / memory limit
Date: 2008-10-11 20:23:39
Message-ID: 1223756619.26952.7.camel@happy.mcmillan.net.nz
Lists: pgsql-php

On Sat, 2008-10-11 at 18:26 +0200, Tomas Vondra wrote:
>
> Is there any other way to solve storing of large files in PostgreSQL?

No, not until there are functions that let you fopen() a bytea column
and read it incrementally.
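
That said, you can fake a streaming read by pulling the bytea out in
slices with substring(), so the whole file never has to fit inside
PHP's memory_limit at once. A rough sketch only - the "files" table,
its columns and the chunk size are all invented for illustration:

    <?php
    // Sketch: read a bytea column in 256kB slices.
    // "files" (id INTEGER, data BYTEA) is a hypothetical table.
    $db     = pg_connect('dbname=test');
    $chunk  = 262144;   // bytes per round trip
    $offset = 1;        // substring() on bytea is 1-based
    do {
        $res   = pg_query_params($db,
            'SELECT substring(data FROM $1 FOR $2)
               FROM files WHERE id = $3',
            array($offset, $chunk, 42));
        $piece = pg_unescape_bytea(pg_fetch_result($res, 0, 0));
        echo $piece;    // or fwrite() it to a local file handle
        $offset += $chunk;
    } while (strlen($piece) == $chunk);
    ?>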

Also, your "... || more_column" solution will generate large numbers of
dead rows and require frequent vacuuming: every appending UPDATE
rewrites the whole row, leaving the previous version behind as a dead
tuple.
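
For clarity, the pattern in question looks something like the sketch
below (again, the table name is invented; I'm using decode(..., 'hex')
to sidestep bytea escaping issues):

    <?php
    // Sketch of the append-style upload under discussion. Each
    // UPDATE rewrites the entire row, so every chunk appended leaves
    // a dead copy of everything written so far for VACUUM to reclaim.
    $db    = pg_connect('dbname=test');
    $in    = fopen('/tmp/upload.bin', 'rb');
    $first = fread($in, 262144);
    pg_query_params($db,
        'INSERT INTO files (id, data) VALUES ($1, decode($2, \'hex\'))',
        array(42, bin2hex($first)));
    while (!feof($in)) {
        $piece = fread($in, 262144);
        if ($piece === '') break;   // nothing left to append
        pg_query_params($db,
            'UPDATE files SET data = data || decode($1, \'hex\')
              WHERE id = $2',
            array(bin2hex($piece), 42));
    }
    fclose($in);
    ?>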

> - Optimization is a serious criterion, as is reliability.

If you're using tables with very large columns, make sure every other
column you will access the table by is indexed. If PostgreSQL has to
resort to full-table scans on such a table, especially under a low
memory constraint, you could easily end up with it doing an on-disk
sort over a copy of the data.

If you *have* to store it in a table column (and it really isn't the
most efficient way of doing it), then create a separate table for it
that is just SERIAL + data; a sketch follows.
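
Something along these lines (all names hypothetical), with the
metadata kept in its own narrow table so lookups never have to touch
the wide rows:

    <?php
    // Sketch of the suggested layout: a narrow SERIAL + bytea table
    // holding only the blobs, metadata and its indexes elsewhere.
    $db = pg_connect('dbname=test');
    pg_query($db, '
        CREATE TABLE file_data (
            id   SERIAL PRIMARY KEY,
            data BYTEA NOT NULL
        )');
    pg_query($db, '
        CREATE TABLE file_meta (
            filename TEXT NOT NULL,
            mimetype TEXT,
            data_id  INTEGER NOT NULL REFERENCES file_data (id)
        )');
    // Index every column you will look files up by, so the planner
    // never falls back to scanning (and sorting) the wide table.
    pg_query($db, 'CREATE INDEX file_meta_filename_idx
                       ON file_meta (filename)');
    ?>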

Cheers,
Andrew McMillan.
------------------------------------------------------------------------
Andrew @ McMillan .Net .NZ Porirua, New Zealand
http://andrew.mcmillan.net.nz/ Phone: +64(272)DEBIAN
It is often easier to tame a wild idea than to breathe life into a
dull one. -- Alex Osborn
