| From: | Oliver Jowett <oliver(at)opencloud(dot)com> |
|---|---|
| To: | Steve Wampler <swampler(at)noao(dot)edu> |
| Cc: | pgsql-jdbc(at)postgresql(dot)org |
| Subject: | Re: Inserting a large number of records |
| Date: | 2005-07-15 08:33:35 |
| Message-ID: | 42D774DF.6010508@opencloud.com |
| Lists: | pgsql-jdbc |
Steve Wampler wrote:
> Oliver Jowett wrote:
>
>> Greg Alton wrote:
>>
>>> What is the most efficient way to insert a large number of records into
>>> a table?
>>
>> I use a PreparedStatement INSERT and addBatch() / executeBatch() with
>> autocommit off and no constraints or indexes present.
>
> Does anyone have an idea as to how the performance of this would compare
> to using a COPY? I've used the COPY patches with jdbc and 7.4.x with
> impressive results, but if the above is 'nearly' as good then I don't have
> to put off upgrading to 8.x while waiting on jdbc to officially include
> support for COPY. (I can't test the above right now. Maybe soon, though.)
I have one dataset of about 20 million rows that takes about 40
minutes to import via batched INSERTs, including translation from the
original format (I'd guess the translation adds perhaps 10-15% overhead).
The same dataset dumped by pg_dump in COPY format takes about 15 minutes
to restore (using psql rather than JDBC, though).
-O
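
[Archive note: a minimal sketch of the batched-INSERT approach Oliver describes above. The table `items(id, name)`, the connection details, and the batch size of 1000 are placeholders for illustration, not anything from the thread; tune them for your own schema and data volume.]

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsertExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adjust for your environment.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password");
        conn.setAutoCommit(false); // one transaction for the whole load

        PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO items (id, name) VALUES (?, ?)");

        for (int i = 0; i < 100000; i++) {
            ps.setInt(1, i);
            ps.setString(2, "name-" + i);
            ps.addBatch();

            // Flush periodically so the driver doesn't buffer everything.
            if (i % 1000 == 0) {
                ps.executeBatch();
            }
        }
        ps.executeBatch(); // send any remaining rows
        conn.commit();

        ps.close();
        conn.close();
    }
}
```

As noted in the thread, this works best with constraints and indexes dropped for the duration of the load and recreated afterwards.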