Re: Migration study, step 1: bulk write performance optimization

From: "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>
To: Mikael Carneholm <Mikael(dot)Carneholm(at)WirelessCar(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Migration study, step 1: bulk write performance optimization
Date: 2006-03-20 15:12:18
Message-ID: 441EC652.1010807@modgraph-usa.com
Lists: pgsql-performance

Mikael Carneholm wrote:

> I am responsible for an exciting project of evaluating migration of a
> medium/large application for a well-known Swedish car & truck manufacturer
> ... The goal right now is to find the set of parameters that gives as
> short a bulk insert time as possible, minimizing downtime while the data
> itself is migrated.

If you haven't explored the COPY command yet, check it out. It is stunningly fast compared to normal INSERT commands.

http://www.postgresql.org/docs/8.1/static/sql-copy.html
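
For reference, a minimal sketch -- the table name here is made up, and this assumes the data file is readable by the backend (the default text format is one row per line, tab-delimited, with \N for NULL):

    -- server-side load, default text format
    COPY vehicles FROM '/data/migration/vehicles.dat';

    -- or, if your export tool produces CSV (supported in 8.x)
    COPY vehicles FROM '/data/migration/vehicles.csv' WITH CSV;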

pg_dump and pg_restore make use of the COPY command. Since you're coming from a different vendor, you'd have to dump the data into a COPY-compatible set of files yourself. But it will be worth the effort.
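
The dump files themselves don't need to be anything fancy. A rough sketch, with a made-up table and columns: one line per row, columns separated by a tab (<TAB> below stands for a literal tab character), and \N wherever a value is NULL:

    1<TAB>FH12<TAB>2005-11-02
    2<TAB>FM9<TAB>\N

If the files end up on a client machine rather than on the database server, psql's \copy runs the same load from the client side:

    \copy vehicles (id, model, built_on) from 'vehicles.dat'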

Craig
