
From: Tsirkin Evgeny <tsurkin(at)mail(dot)jct(dot)ac(dot)il>
To: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>,Naomi Walker <nwalker(at)eldocomp(dot)com>
Cc: "Mark M(dot) Huber" <MHuber(at)VMdirect(dot)com>,"pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: [Retrieved]RE: backup and recovery
Date: 2004-03-24 17:54:23
Message-ID: opr5dn0x0kjbdarf@localhost
Lists: pgsql-admin
On Tue, 23 Mar 2004 19:50:24 -0500 (EST), Bruce Momjian 
<pgman(at)candle(dot)pha(dot)pa(dot)us> wrote:

> Naomi Walker wrote:
>>
>> I'm not sure of the correct protocol for getting things on the "todo"
>> list.  Whom shall we beg?
>>
>
> Uh, you just ask and we discuss it on the list.
>
> Are you using INSERTs from pg_dump?  I assume so because COPY uses a
> single transaction per command.  Right now with pg_dump -d I see:
> 	
> 	--
> 	-- Data for Name: has_oids; Type: TABLE DATA; Schema: public; Owner:
> 	postgres
> 	--
> 	
> 	INSERT INTO has_oids VALUES (1);
> 	INSERT INTO has_oids VALUES (1);
> 	INSERT INTO has_oids VALUES (1);
> 	INSERT INTO has_oids VALUES (1);
>
> Seems that should be inside a BEGIN/COMMIT for performance reasons, and
> to have the same behavior as COPY (fail if any row fails).  Comments?
>
> As far as skipping on errors, I am unsure on that one, and if we put the
> INSERTs in a transaction, we will have no way of rolling back only the
> few inserts that fail.
>
That is right, but there are situations where you would prefer at least
some data to be inserted rather than having all changes rolled back
because of errors.
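To make an INSERT-style dump behave like COPY (all-or-nothing), the whole file can be wrapped in a single transaction before it is fed to psql. A minimal sketch of that wrapping step, not part of the thread itself; the function name and the sample statements are ours:

```python
# Sketch: wrap an INSERT-style pg_dump output in one BEGIN/COMMIT pair,
# so the restore either applies every row or none (like COPY).
def wrap_in_transaction(lines):
    """Yield the dump lines with BEGIN/COMMIT around them."""
    yield "BEGIN;\n"
    for line in lines:
        yield line
    yield "COMMIT;\n"

# Hypothetical dump fragment, matching the example above.
dump = ["INSERT INTO has_oids VALUES (1);\n"] * 3
wrapped = list(wrap_in_transaction(dump))
```

The wrapped output would then be piped to psql as usual; a failure on any row aborts the whole transaction.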
> ---------------------------------------------------------------------------
>
>> >
>> >That brings up a good point.  It would be extremely helpful to add two
>> >parameters to pg_dump.  One, to add how many rows to insert before a
>> >commit, and two, to live through X number of errors before dying (and
>> >putting the "bad" rows in a file).
>> >
>> >
>> >At 10:15 AM 3/19/2004, Mark M. Huber wrote:
>> > >What it was, that I guess the pg_dump makes one large transaction, and
>> > >our shell script wizard wrote a perl program to add a commit transaction
>> > >every 500 rows or whatever you set. Also I should have said that we were
>> > >doing the recovery with the insert statements created from pg_dump.
>> > >So... my 500000 row table recovery took < 10 Min.
>> > >
>
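The batching approach described above (a Perl filter committing every 500 rows) can be sketched in Python. This is a rough equivalent under our own assumptions, not the actual script from the thread; the function name is hypothetical and N=500 just matches the example given:

```python
# Sketch: restart the transaction every batch_size INSERT statements,
# so a failure mid-restore loses at most the current batch instead of
# rolling back everything.
def batch_commits(lines, batch_size=500):
    """Yield dump lines with COMMIT/BEGIN inserted every batch_size INSERTs."""
    yield "BEGIN;\n"
    count = 0
    for line in lines:
        yield line
        if line.lstrip().upper().startswith("INSERT"):
            count += 1
            if count % batch_size == 0:
                yield "COMMIT;\n"
                yield "BEGIN;\n"
    yield "COMMIT;\n"
```

One trade-off: if the row count is an exact multiple of batch_size, the final BEGIN/COMMIT pair encloses no rows, which is harmless. Naomi's second request, tolerating X errors and diverting bad rows to a file, would need the restore client's cooperation rather than a dump-side filter, since the filter cannot know which statement will fail.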


