From: Michael Richards <miker(at)scifair(dot)acadiau(dot)ca>
To: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Hot Backup Ability
Date: 1999-06-30 04:55:24
Message-ID: Pine.BSF.4.10.9906300148380.12242-100000@scifair.acadiau.ca
Lists: pgsql-hackers
On Tue, 29 Jun 1999, Bruce Momjian wrote:
> > Just out of curiosity, I did a DUMP on the database while running a script
> > that ran a pile of updates. When I restored the database files, it was so
> > corrupted that I couldn't even run a select. vacuum just core dumped...
>
> When you say DUMP, you mean pg_dump, right? Are you using 6.5?
Erm. Well, no. I was running ufsdump. Once I read the section on MVCC and
re-did the test with pg_dump, I realised that it does work as
documented...
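(For anyone following along: the reason pg_dump gets a consistent dump of a live database, while a filesystem-level ufsdump does not, is MVCC snapshotting -- the dump's transaction only ever sees row versions committed before it started. Here is a toy Python model of that visibility rule; it is purely illustrative and is not the PostgreSQL implementation.)

```python
# Toy model of MVCC snapshot visibility (illustrative only; not
# PostgreSQL internals). Each row keeps every version tagged with
# the transaction id that wrote it; a snapshot reader sees only
# versions written before the snapshot was taken.

class MVCCTable:
    def __init__(self):
        self.versions = {}   # key -> list of (txid, value)
        self.next_txid = 1

    def write(self, key, value):
        txid = self.next_txid
        self.next_txid += 1
        self.versions.setdefault(key, []).append((txid, value))
        return txid

    def snapshot(self):
        # Remember the highest txid that existed at snapshot time.
        return self.next_txid - 1

    def read(self, key, snap):
        # Newest version visible to this snapshot, or None.
        visible = [v for t, v in self.versions.get(key, []) if t <= snap]
        return visible[-1] if visible else None

table = MVCCTable()
table.write("row1", "old")
snap = table.snapshot()          # "pg_dump" starts its transaction here
table.write("row1", "new")       # concurrent update after the dump began
print(table.read("row1", snap))  # the dump still sees "old"
```

A raw filesystem dump, by contrast, copies pages mid-write with no snapshot to anchor them, which is why the restored files were unusable.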
I should think this is a good feature to broadcast to everyone. I don't
think other free systems support it.
The thing I got confused by, that blocked my transactions, was VACUUM.
Seeing as how it physically rearranges data inside the tables and
indexes, is there any hope of not blocking the table for a long time
while it rearranges a 15 gig table?
Will reusable-page support (whenever it is expected) eliminate the need
for vacuum?
Would it be easy to come up with a scheme where vacuum defragments a
set number of pages, releases its locks if there is another process
blocked and waiting, then resumes after that process has finished?
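(The batch-and-yield scheme I'm imagining above would look roughly like this Python sketch. Everything here is hypothetical -- the function names, the `waiter_blocked` check, and the batch structure are mine for illustration, not anything in the PostgreSQL source.)

```python
def batched_vacuum(pages, compact_page, batch_size, waiter_blocked):
    """Compact `pages` a batch at a time. Between batches, if
    waiter_blocked() reports another process waiting on the lock,
    yield to it (here we just count the yield; a real system would
    release the table lock, let the waiter run, then re-acquire).
    Returns (pages_done, times_yielded)."""
    done = 0
    yields = 0
    while done < len(pages):
        batch = pages[done:done + batch_size]
        for page in batch:
            compact_page(page)          # defragment one page
        done += len(batch)
        if done < len(pages) and waiter_blocked():
            yields += 1                 # lock released, waiter runs, resume
    return done, yields

# Example: 20 pages in batches of 8, with a waiter always present.
compacted = []
print(batched_vacuum(list(range(20)), compacted.append, 8, lambda: True))
```

The open question would be whether vacuum's page rearrangement can be made restartable at batch boundaries without leaving the table in an inconsistent state when it lets a writer in.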
-Michael