From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Joel Burton" <joel(at)joelburton(dot)com>
Cc: "Lucas Rockwell" <lr(at)socrates(dot)Berkeley(dot)EDU>, pgsql-admin(at)postgresql(dot)org
Subject: Re: smart copy?
Date: 2002-04-26 22:50:42
Message-ID: 12691.1019861442@sss.pgh.pa.us
Lists: pgsql-admin
"Joel Burton" <joel(at)joelburton(dot)com> writes:
> $ pg_dump --attribute-inserts --data-only --table=TheTable A
>
> It's slower than running COPY, since you're executing
> dozens/hundreds/kajillions of INSERT commands rather than one COPY command.
Yup, lots slower. A better idea (which also doesn't require redoing
the export) is:
1. Create a temp table that matches the columns present in your data
file.
2. COPY from data file into temp table.
3. INSERT INTO target_table (column list) SELECT * FROM temp_table;
You can do pretty much any transformation you need to in the
INSERT/SELECT, so this generalizes to a lot of related cases
where your data file doesn't quite match the data layout you want.
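The three steps above can be sketched as follows. The table and column names here are assumptions for illustration only: a hypothetical target table `target_table(id, name, amount)` and a data file that carries just the `name` and `amount` columns.

```sql
-- 1. Temp table matching the columns present in the data file
--    (column names and types are assumed for this sketch).
CREATE TEMP TABLE staging (name text, amount numeric);

-- 2. Bulk-load the file into the temp table in one COPY.
COPY staging FROM '/path/to/datafile';

-- 3. Move the rows into the real table, applying any needed
--    transformation in the SELECT (e.g. a unit conversion here).
INSERT INTO target_table (name, amount)
SELECT name, amount * 100
FROM staging;
```

Because step 3 is an ordinary INSERT/SELECT, you can reorder columns, supply defaults, cast types, or filter rows at the same time.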
> If that's a problem, you can reduce it a bit by dropping indexes and
> triggers on B.TheTable, doing the INSERTs, then re-adding these.
For a big import that will still be a good idea --- drop the indexes
and recreate after you've imported. As for triggers, that depends on
whether you need whatever they'd do ...
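The drop-and-recreate approach might look like this; the index name and definition are assumptions, not from the original message:

```sql
-- Drop the index before the bulk load so each row insertion
-- skips the per-row index maintenance cost.
DROP INDEX thetable_name_idx;

-- ... run the big COPY / INSERT here ...

-- Rebuild the index in one pass once the data is in place,
-- which is much faster than maintaining it row by row.
CREATE INDEX thetable_name_idx ON thetable (name);
```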
regards, tom lane