From: "Florian G. Pflug" <fgp@phlo.org>
To: Dimitri Fontaine <dfontaine@hi-media.com>
Cc: pgsql-hackers@postgresql.org
Subject: Re: An idea for parallelizing COPY within one backend
Date: 2008-02-27 14:11:10
Message-ID: 47C56F7E.3030301@phlo.org
Lists: pgsql-hackers
Dimitri Fontaine wrote:
> Of course, the backends still have to parse the input given by pgloader, which
> only pre-processes data. I'm not sure having the client prepare the data some
> more (binary format or whatever) is a wise idea, as you mentioned and wrt
> Tom's follow-up. But maybe I'm all wrong, so I'm all ears!
As far as I understand, pgloader starts N threads or processes that open
up N individual connections to the server. In that case, moving the
text->binary conversion from the backend into the loader won't yield any
additional performance, I'd say.
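For illustration, here is a minimal sketch of that N-connections approach (my sketch, not pgloader's actual code; psycopg2, the DSN, and the table name are assumptions). Note how each worker opens its own connection, so each chunk is COPYed in a separate transaction:

```python
import io


def partition_rows(rows, n_workers):
    """Split input rows round-robin into n_workers chunks, one per connection."""
    chunks = [[] for _ in range(n_workers)]
    for i, row in enumerate(rows):
        chunks[i % n_workers].append(row)
    return chunks


def load_chunk(dsn, chunk):
    """Worker: open a *separate* connection and COPY one chunk.

    Because every worker has its own backend, the N chunks land in N
    independent transactions -- exactly the limitation discussed below.
    """
    import psycopg2  # hypothetical client-side dependency

    conn = psycopg2.connect(dsn)
    with conn, conn.cursor() as cur:
        cur.copy_expert("COPY target_table FROM STDIN", io.StringIO("".join(chunk)))
```

A driver would then hand each chunk from partition_rows() to a thread or process running load_chunk(), one connection apiece.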
The reason I'd love some within-one-backend solution is that it would
allow you to utilize more than one CPU for a restore within a *single*
transaction. This is something that a client-side solution won't be able
to deliver, unless major changes to the architecture of postgres happen
first...
regards, Florian Pflug