Optimizing huge inserts/copy's

From: Webb Sprague <wsprague100(at)yahoo(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Optimizing huge inserts/copy's
Date: 2000-08-30 00:04:52
Message-ID: 20000830000452.2110.qmail@web802.mail.yahoo.com
Lists: pgsql-sql

Hi all,

Does anybody have any thoughts on optimizing a huge
insert, something like 3 million records all at once?
Should I drop my indexes before doing the COPY and
then recreate them afterward? Right now I write a few
thousand records to a tab-delimited file as a buffer,
COPY it into the table, and repeat about 400 times.
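
Concretely, the pattern I have in mind is roughly the
following; the table, column, and file names are just
placeholders for my real schema:

    -- Drop the index so COPY doesn't have to maintain it row by row
    DROP INDEX big_table_idx;

    -- Load each tab-delimited buffer file (default COPY text format
    -- uses tab as the delimiter); there are about 400 of these,
    -- a few thousand rows each
    COPY big_table FROM '/data/buffer_001.tab';
    COPY big_table FROM '/data/buffer_002.tab';
    -- ... and so on ...

    -- Rebuild the index once, after all the data is in
    CREATE INDEX big_table_idx ON big_table (some_col);

The idea is that building the index once at the end
should be cheaper than updating it for every one of
the 3 million rows, but I would like to hear whether
that holds up in practice.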

We do this at night, so it's not the end of the world
if it takes 8 hours, but I would be very grateful for
some good ideas...

Thanks
W

