| From: | "Steinar H(dot) Gunderson" <sgunderson(at)bigfoot(dot)com> |
|---|---|
| To: | pgsql-performance@postgresql.org |
| Subject: | Re: How import big amounts of data? |
| Date: | 2005-12-29 10:50:49 |
| Message-ID: | 20051229105049.GB27287@uio.no |
| Lists: | pgsql-performance |
On Thu, Dec 29, 2005 at 10:48:26AM +0100, Arnau wrote:
> Which is the best way to import data into tables? I have to import
> 90000 rows into a column and doing it as inserts takes ages. Would it
> be faster with COPY? Is there any other alternative to INSERT/COPY?
There are multiple reasons why your INSERTs might be slow:
- Are you running each INSERT in its own transaction instead of batching
  them into one or a few transactions? (The per-transaction overhead is
  usually much higher than the per-row insertion cost; see the sketch
  after this list.)
- Do you have a foreign key without a matching index on the other table?
  (In newer versions of PostgreSQL, EXPLAIN ANALYZE can help with this;
  do a single insert and see where it spends its time. Older versions
  won't show such things, though.)
- Do you have an insertion trigger that takes time? (Ditto wrt. EXPLAIN
  ANALYZE.)
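To make the first point concrete, here is a minimal sketch (the `items`
table is made up for illustration; substitute your own schema). It also
shows how to time a single insert with EXPLAIN ANALYZE without keeping
the row:

```sql
-- Hypothetical table, for illustration only.
CREATE TABLE items (
    id   integer PRIMARY KEY,
    name text
);

-- Slow: without an explicit transaction, every INSERT commits on its
-- own, paying the full per-transaction overhead for each row.
INSERT INTO items VALUES (1, 'one');
INSERT INTO items VALUES (2, 'two');

-- Faster: batch all the rows into one transaction, paying the commit
-- overhead only once.
BEGIN;
INSERT INTO items VALUES (3, 'three');
INSERT INTO items VALUES (4, 'four');
-- ... the remaining rows ...
COMMIT;

-- To see where a single insert spends its time (newer versions only).
-- EXPLAIN ANALYZE actually executes the statement, so roll it back:
BEGIN;
EXPLAIN ANALYZE INSERT INTO items VALUES (5, 'five');
ROLLBACK;
```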
COPY will be faster than INSERT regardless (at least for more than a few
rows).
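For the COPY route, a sketch under the same assumptions (the table and
the tab-separated data file are hypothetical):

```sql
-- Server-side COPY reads a file on the database server; the default
-- format is one row per line with tab-separated columns.
COPY items (id, name) FROM '/tmp/items.txt';

-- From psql, \copy does the same but reads the file on the client side,
-- which avoids needing file access on the server:
--   \copy items (id, name) FROM 'items.txt'
```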
/* Steinar */
--
Homepage: http://www.sesse.net/