From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: George Robinson II <george(dot)robinson(at)eurekabroadband(dot)com>
Cc: pgsql-general(at)postgreSQL(dot)org
Subject: Re: vacuumdb failed
Date: 2000-08-27 07:51:50
Message-ID: 23464.967362710@sss.pgh.pa.us
Lists: pgsql-general
George Robinson II <george(dot)robinson(at)eurekabroadband(dot)com> writes:
> Last night, while my perl script was doing a huge insert operation, I
> got this error...
> DBD::Pg::st execute failed: ERROR: copy: line 4857, pg_atoi: error
> reading "2244904358": Result too large
> Now, I'm not sure if this is related, but while trying to do vacuumdb
> <dbname>, I got...
> NOTICE: FlushRelationBuffers(all_flows, 500237): block 171439 is
> referenced (private 0, global 1)
> FATAL 1: VACUUM (vc_repair_frag): FlushRelationBuffers returned -2
Probably not related. We've seen sporadic reports of this error in 7.0,
but it's been tough to get enough info to figure out the cause. If you
can find a reproducible way to create the block-is-referenced condition
we'd sure like to know about it!
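The pg_atoi failure quoted above is a separate and simpler problem: 2244904358 is larger than 2147483647, the biggest value a 4-byte integer (int4) column can hold, so COPY rejects the row with "Result too large". A minimal sketch of the range check (Python; suggesting int8/bigint as the fix is an editorial assumption, not something stated in this thread):

```python
# int4 is a signed 32-bit integer; pg_atoi rejects values outside this range.
INT4_MIN, INT4_MAX = -2**31, 2**31 - 1          # -2147483648 .. 2147483647
INT8_MIN, INT8_MAX = -2**63, 2**63 - 1          # range of int8 (bigint)

value = 2244904358  # the value from the COPY error message

fits_int4 = INT4_MIN <= value <= INT4_MAX
fits_int8 = INT8_MIN <= value <= INT8_MAX

print(fits_int4)  # False: hence "pg_atoi: ... Result too large"
print(fits_int8)  # True: a bigint column could store this value
```
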
As a quick-hack recovery, you should find that stopping and restarting the
postmaster will eliminate the VACUUM failure. The block-is-referenced
condition is not all that dangerous in itself; VACUUM is just being
paranoid about the possibility that someone is using the table that it
thinks it has an exclusive lock on.
> Any ideas? I'm trying a couple other things right now. By the way,
> this database has one table that is HUGE. What is the limit on table
> size in postgresql7? The faq says unlimited. If that's true, how do
> you get around the 2G file size limit that (at least) I have in solaris
> 2.6?
We break tables into multiple physical files of 1GB apiece.
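The segmentation scheme can be sketched as follows: a table's on-disk data spans ceil(size / 1GB) files, the first named after the relation and later segments given numeric suffixes (.1, .2, ...). This is an illustrative sketch of the naming convention, not the server code; the exact file names used by a given release are an assumption here:

```python
SEGMENT_SIZE = 1024**3  # 1GB per physical file

def segment_files(relname, table_bytes):
    """Return the physical file names a table of this size would occupy.

    Segment 0 carries the bare relation name; each additional 1GB
    segment gets a numeric suffix. Sketch only, not PostgreSQL source.
    """
    n_segments = max(1, -(-table_bytes // SEGMENT_SIZE))  # ceiling division
    return [relname] + [f"{relname}.{i}" for i in range(1, n_segments)]

# A ~2.5GB table spans three segment files, so no single file
# ever exceeds the OS's 2GB limit:
print(segment_files("all_flows", int(2.5 * 1024**3)))
# ['all_flows', 'all_flows.1', 'all_flows.2']
```
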
regards, tom lane