Re: An idea for parallelizing COPY within one backend

From: Brian Hurt <bhurt(at)janestcapital(dot)com>
To: Postgresql-Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: An idea for parallelizing COPY within one backend
Date: 2008-02-27 16:53:04
Message-ID: 47C59570.2000209@janestcapital.com
Lists: pgsql-hackers

Andrew Dunstan wrote:

>
>
> Florian G. Pflug wrote:
>
>>
>>> Would it be possible to determine when the copy is starting that
>>> this case holds, and not use the parallel parsing idea in those cases?
>>
>>
>> In theory, yes. In practice, I don't want to be the one who has to
>> answer to an angry user who just suffered a major drop in COPY
>> performance after adding an ENUM column to his table.
>>
>>
>
> I am yet to be convinced that this is even theoretically a good path
> to follow. Any sufficiently large table could probably be partitioned
> and then we could use the parallelism that is being discussed for
> pg_restore without any modification to the backend at all. Similar
> tricks could be played by an external bulk loader for third party data
> sources.
>

I was just floating this as an idea; I don't know enough about the
backend to know whether it was a good one, and it sounds like "not".

Brian
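
For what it's worth, the client-side approach Andrew describes might look
roughly like the sketch below: pre-split the input and run an ordinary COPY
per chunk over separate sessions, so the backend needs no changes at all.
This is only an illustration; the table name, file names, connection string,
and the choice of psycopg2 are assumptions, not anything proposed in the
thread.

# Hypothetical sketch: client-side parallel COPY by splitting the input
# (e.g. with `split -n l/4 data.csv chunk_`) and loading each piece over
# its own connection. All names here are made up for illustration.
import psycopg2
from concurrent.futures import ThreadPoolExecutor

DSN = "dbname=test"                              # assumed connection string
CHUNKS = ["chunk_0.csv", "chunk_1.csv",
          "chunk_2.csv", "chunk_3.csv"]          # pre-split input files

def load_chunk(path):
    # Each worker opens its own session and issues a plain COPY,
    # so parallelism happens entirely on the client side.
    conn = psycopg2.connect(DSN)
    try:
        with conn, conn.cursor() as cur, open(path) as f:
            cur.copy_expert("COPY big_table FROM STDIN WITH CSV", f)
    finally:
        conn.close()

with ThreadPoolExecutor(max_workers=len(CHUNKS)) as pool:
    list(pool.map(load_chunk, CHUNKS))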
