Re: Best COPY Performance

From: "Spiegelberg, Greg" <gspiegelberg(at)cranel(dot)com>
To: "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>, "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
Cc: "Worky Workerson" <worky(dot)workerson(at)gmail(dot)com>, "Merlin Moncure" <mmoncure(at)gmail(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Best COPY Performance
Date: 2006-10-25 17:15:40
Message-ID: 82E74D266CB9B44390D3CCE44A781ED9016E003F@POSTOFFICE.cranel.local
Lists: pgsql-performance

> -----Original Message-----
> From: pgsql-performance-owner(at)postgresql(dot)org
> [mailto:pgsql-performance-owner(at)postgresql(dot)org] On Behalf Of
> Craig A. James
> Sent: Wednesday, October 25, 2006 12:52 PM
> To: Jim C. Nasby
> Cc: Worky Workerson; Merlin Moncure; pgsql-performance(at)postgresql(dot)org
> Subject: Re: [PERFORM] Best COPY Performance
>
> Jim C. Nasby wrote:
> > Wait... so you're using perl to copy data between two tables? And
> > using a cursor to boot? I can't think of any way that could be more
> > inefficient...
> >
> > What's wrong with a plain old INSERT INTO ... SELECT? Or if
> you really
> > need to break it into multiple transaction blocks, at least don't
> > shuffle the data from the database into perl and then back into the
> > database; do an INSERT INTO ... SELECT with that same where clause.
>
> The data are on two different computers, and I do processing
> of the data as it passes through the application. Otherwise,
> INSERT INTO ... SELECT would be my first choice.

Would dblink() help in any way?
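For example, something along these lines might work (a rough sketch only; it assumes contrib/dblink is installed on the destination database, and the connection string, table, and column names are made up for illustration):

    -- Pull rows from the remote server and insert them locally in one statement.
    INSERT INTO local_copy (id, payload)
    SELECT id, payload
    FROM dblink('host=remote.example.com dbname=src user=loader',
                'SELECT id, payload FROM source_table WHERE batch_id = 42')
         AS t(id integer, payload text);

If the per-row processing can be pushed into the remote SELECT (or a local function), that would avoid round-tripping every row through the Perl client.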

Greg
