| From: | Peter Eisentraut <peter_e(at)gmx(dot)net> |
|---|---|
| To: | Ted Rolle <ted(at)tvlconn(dot)com> |
| Cc: | "'pgsql-admin'" <pgsql-admin(at)postgresql(dot)org> |
| Subject: | Re: Fast load |
| Date: | 2001-08-24 23:59:36 |
| Message-ID: | Pine.LNX.4.30.0108250154060.677-100000@peter.localdomain |
| Lists: | pgsql-admin |
Ted Rolle writes:
> We have 73 databases, two dozen with hundreds of thousands to millions of
> records, with lengths in the 500-byte range. I'm planning to convert them
> from Btrieve to PostgreSQL.
>
> Of course, I want the highest reasonable speed so that the conversion can be
> completed - say - in a weekend.
The fastest way to get data loaded into PostgreSQL is to create a
tab-delimited file and feed it directly to the backend with the COPY
command. To speed things up even more, turn off fsync (-F), create the
indexes only after loading, and do the same with triggers, if you have
any. I'd like to think that all of this should take significantly less
than a weekend. ;-)
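For example, a load script along these lines should do it (the table,
column, and file names are made up for illustration; run it through psql
against a backend started with fsync off, e.g. postmaster -o -F):

    -- Hypothetical schema; adjust to your real record layout.
    CREATE TABLE customers (
        id      integer,
        name    text,
        address text
    );

    -- Feed the tab-delimited file straight to the backend.
    -- (Use \copy in psql instead if the file lives on the client side.)
    COPY customers FROM '/tmp/customers.tab';

    -- Build the indexes only once the data is in; this is much faster
    -- than maintaining them row by row during the load.
    CREATE INDEX customers_id_idx ON customers (id);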
Getting the data into the tab-delimited form that COPY expects can be done
with your favourite text mashing tools.
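The default COPY text format is one record per line, columns separated by
single tab characters, with \N standing for a null value. A couple of
made-up input lines:

    1	Alice Smith	123 Main St
    2	Bob Jones	\N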
--
Peter Eisentraut peter_e(at)gmx(dot)net http://funkturm.homeip.net/~peter