Re: GSOC'17 project introduction: Parallel COPY execution with errors handling

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Alexey Kondratov <kondratov(dot)aleksey(at)gmail(dot)com>
Cc: Alexander Korotkov <a(dot)korotkov(at)postgrespro(dot)ru>, Стас <stas(dot)kelvich(at)gmail(dot)com>, Craig Ringer <craig(at)2ndquadrant(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Subject: Re: GSOC'17 project introduction: Parallel COPY execution with errors handling
Date: 2017-04-11 13:45:28
Message-ID: CA+TgmoZDLfr4V+PhLwA=j9g9tuMvK7PqCfgPqaFJHOj83L=mkw@mail.gmail.com
Lists: pgsql-hackers

On Mon, Apr 10, 2017 at 2:46 PM, Alexey Kondratov
<kondratov(dot)aleksey(at)gmail(dot)com> wrote:
> Yes, sure, I don't doubt it. The question was around step 4 in the following possible algorithm:
>
> 1. Suppose we have to insert N records
> 2. Start a subtransaction with these N records
> 3. An error is raised on the k-th line
> 4. Then we know that we can safely insert all lines from the 1st to the (k - 1)-th
> 5. Report, save to an errors table, or silently drop the k-th line
> 6. Next, try to insert lines from (k + 1) to the Nth in another subtransaction
> 7. Repeat until the end of the file
>
> One can start a subtransaction with those (k - 1) safe lines and repeat this after each error line

I don't understand what you mean by that.

> OR
> iterate to the end of the file and start only one subtransaction with all lines excluding the error lines.

That could involve buffering a huge file. Imagine a 300GB load.
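The per-batch subtransaction scheme quoted above can be sketched as a small simulation. This is only an illustration of the control flow under discussion, not anyone's actual implementation; the function and parameter names are hypothetical, and it abstracts away the fact that an aborted subtransaction's work on the safe prefix must actually be redone:

```python
# Hypothetical sketch of the quoted steps 1-7: insert lines in one
# subtransaction until an error aborts it, keep the safe prefix, skip the
# bad line, and continue with a fresh subtransaction. Tracks only the
# logical outcome; the redo cost of each aborted attempt is not modeled.

def load_with_subtransactions(lines, is_bad):
    """Return (inserted, rejected, subxacts_used) for a stream of lines."""
    inserted, rejected = [], []
    subxacts = 0
    start = 0
    while start < len(lines):
        subxacts += 1  # one subtransaction per attempt (step 2 / step 6)
        k = start
        # scan forward until an error aborts this subtransaction (step 3)
        while k < len(lines) and not is_bad(lines[k]):
            k += 1
        inserted.extend(lines[start:k])  # lines before the error are safe (step 4)
        if k < len(lines):
            rejected.append(lines[k])    # report or drop the k-th line (step 5)
            start = k + 1                # retry from the next line (step 6)
        else:
            start = k                    # reached end of file (step 7)
    return inserted, rejected, subxacts
```

For example, six lines with two bad ones cost three subtransactions: one per error, plus one for the final clean tail.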

Also consider how many XIDs whatever design is proposed will blow
through when loading 300GB of data. There's a nasty trade-off here
between XID consumption (and the aggressive vacuums it eventually
causes) and preserving performance in the face of errors - e.g. if you
make the batch size k = 100,000 you consume 100x fewer XIDs than if
you make k = 1,000, but you also have 100x as much work to redo (on
average) every time you hit an error. If the data quality is poor
(say, 50% of lines have errors) it's almost impossible to avoid
runaway XID consumption.
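The trade-off above can be put into back-of-the-envelope numbers. This is a hypothetical cost model, not a measurement: it assumes one subtransaction XID per committed batch plus one per error retry, and that an aborted batch has, on average, half its lines to redo:

```python
# Hypothetical cost model for the batch-size trade-off (assumed, not
# measured): one XID per batch plus one per error retry, and on average
# half a batch of redone work per error.

def xids_consumed(total_lines, batch_size, num_errors):
    """Approximate subtransaction XIDs: one per batch, one per retry."""
    return total_lines // batch_size + num_errors

def redo_lines_per_error(batch_size):
    """An error lands, on average, mid-batch, so half a batch is redone."""
    return batch_size // 2
```

With 3 billion lines and no errors, k = 100,000 needs 30,000 XIDs versus 3,000,000 for k = 1,000 (100x fewer), but each error forces ~50,000 redone lines instead of ~500 (100x more), which is the trade-off in the paragraph above.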

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
