Re: GSOC'17 project introduction: Parallel COPY execution with errors handling

From: Nicolas Barbier <nicolas(dot)barbier(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Alexey Kondratov <kondratov(dot)aleksey(at)gmail(dot)com>, Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>, Стас <stas(dot)kelvich(at)gmail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Subject: Re: GSOC'17 project introduction: Parallel COPY execution with errors handling
Date: 2017-04-12 17:18:21
Message-ID: CAP-rdTY_=n6hbe2Shg9qu0PxSw2NcA0yiWBwEwDCkzbBMh7tEA@mail.gmail.com
Lists: pgsql-hackers

2017-04-11 Robert Haas <robertmhaas(at)gmail(dot)com>:

> There's a nasty trade-off here between XID consumption (and the
> aggressive vacuums it eventually causes) and preserving performance in
> the face of errors - e.g. if you make k = 100,000 you consume 100x
> fewer XIDs than if you make k = 1000, but you also have 100x the work
> to redo (on average) every time you hit an error.

You could make it dynamic: commit the subtransaction after every N lines
as long as no error has been encountered (N starts out at 1), then double
N and continue. When an error is encountered, roll back the current
subtransaction and re-insert all the known-good rows that were rolled
back (plus maybe the erroneous row into a separate table or whatever) in
one new subtransaction and commit; then reset N to 1 and continue
processing the rest of the file.

That would work reasonably well whenever the ratio of erroneous rows is
not extremely high, whether the erroneous rows are all clumped together,
spread out entirely at random over the file, or a combination of both.
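
For what it's worth, a rough Python-flavoured sketch of that
doubling/reset logic (insert_in_subtransaction and divert_bad_row are
made-up stand-ins for whatever the real COPY/subtransaction machinery
would look like; they are only there to make the loop self-contained):

    class RowError(Exception):
        # Raised by the fake insert below when it hits a bad row.
        def __init__(self, row):
            super().__init__(row)
            self.row = row

    def insert_in_subtransaction(rows):
        # Stand-in: pretend to insert all rows in one subtransaction,
        # aborting the whole batch when a bad row is encountered.
        for r in rows:
            if r.get("bad"):
                raise RowError(r)

    def divert_bad_row(row):
        # Stand-in: e.g. write the offending line to a side table or file.
        print("diverted:", row)

    def flush(batch):
        # Try the whole batch in one subtransaction; on error, divert the
        # bad row and retry the remainder in a fresh subtransaction until
        # it commits. Returns True only if the first attempt was clean.
        clean = True
        while batch:
            try:
                insert_in_subtransaction(batch)
                return clean
            except RowError as e:
                clean = False
                divert_bad_row(e.row)
                batch = [r for r in batch if r is not e.row]
        return clean

    def copy_with_adaptive_batches(rows):
        n, batch = 1, []
        for row in rows:
            batch.append(row)
            if len(batch) == n:
                # Double the batch size after a clean commit,
                # reset it to 1 after an error.
                n = n * 2 if flush(batch) else 1
                batch = []
        if batch:
            flush(batch)  # flush the final partial batch

    # Tiny usage example: rows 3 and 7 are bad.
    copy_with_adaptive_batches(
        [{"id": i, "bad": i in (3, 7)} for i in range(20)])

(Obviously the real thing would want to cap N somewhere; this is just to
show the shape of the loop.)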

> If the data quality is poor (say, 50% of lines have errors) it's
> almost impossible to avoid runaway XID consumption.

Yup, that seems difficult to work around with anything similar to what is
proposed here. So the docs might need to suggest not inserting a 300 GB
file with 50% erroneous lines :-).
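
(Back of the envelope, assuming ~100-byte lines: a 300 GB file is on the
order of 3 billion lines. With half of them failing, N keeps getting
reset to 1, so you burn very roughly one subtransaction XID per line or
two, which by itself is well past the roughly 2 billion XIDs you can
consume before wraparound protection kicks in.)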

Greetings,

Nicolas
