There is not much reason for concern.
Firstly, the limit is not on the size of the database that PostgreSQL can handle;
it is on the size of a single file that can be created on the filesystem, and on
the total size of the filesystem.
We have a database of 18 GB, and its compressed dump files are between 2 and 3 GB.
On a modern Linux system, the maximum size of a single file and of the
filesystem itself is quite large; the exact limit depends on the filesystem
type, the architecture (32-bit or 64-bit), the glibc version, and possibly
other factors.
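Since the limit depends on the filesystem type, a quick first step is to check
which filesystem backs your data directory with df -T (a sketch; /var/lib/pgsql
is only an example path, substitute your actual PostgreSQL data directory):

```shell
# Show the filesystem type backing a directory; the "Type" column
# (ext3, xfs, etc.) is what determines the single-file size limit.
$ df -T /var/lib/pgsql
```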
It may be a good idea to verify the largest file size that you can actually
create, because that is what will limit the size of the database dump file.
To verify, you can use the dd command to create a file of, say, 5 GB:
$ dd if=/dev/zero of=test.dat bs=1024 count=5242880
$ ls -lh test.dat
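If a dump would ever exceed the largest file your filesystem allows, one common
workaround (described in the PostgreSQL documentation) is to pipe pg_dump
through split, so that no single output file reaches the limit. Here "mydb" is
only a placeholder database name:

```shell
# Split the dump into chunks of at most 1000 MB each
$ pg_dump mydb | split -b 1000m - mydb_dump_
# To restore, concatenate the pieces back into one stream
$ cat mydb_dump_* | psql mydb
```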
Rajesh Kumar Mallah
On 10/11/05, Will Lewis <will_lewis(at)bristol-city(dot)gov(dot)uk> wrote:
> I sent this request recently but have heard nothing.
> I'm new to the whole procedure and may be doing this incorrectly.
> Please advise.
> Will Lewis
> Database Administrator (DBA)
> Central IT
> Romney House
> Bristol City Council
> (0117 9222736)