CSV-bulk import and defaults

From: Thomas Schmidt <postgres(at)stephan(dot)homeunix(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: CSV-bulk import and defaults
Date: 2011-01-02 22:22:14
Message-ID: 4D20FA96.3060306@stephan.homeunix.net
Lists: pgsql-general

Hello,

Well, I'm new to Postgres and this is my first post on this list :-)
Anyway, I have to batch-import bulk CSV data into a staging database (as
part of an ETL-like process). The data ought to be read via STDIN;
however, to keep it simple and stupid, saving it to a file and importing
it afterwards is also an option. Sticking my nose into the docs, I
noticed that COPY [1] as well as pg_bulkload [2] are able to do this.
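
For reference, here is a minimal sketch of the plain COPY variant I have
in mind (table and column names are just placeholders):

  -- hypothetical staging table matching the CSV columns
  CREATE TABLE staging_orders (
      order_id integer,
      customer text,
      amount   numeric
  );

  -- stream the CSV in via STDIN, e.g.
  --   cat orders.csv | psql -c "COPY staging_orders FROM STDIN WITH (FORMAT csv)"
  COPY staging_orders FROM STDIN WITH (FORMAT csv);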

However, there are some additional columns in the staging table (job id,
etc.) that have to be set in order to identify the imported rows. These
attributes are not part of the data coming from STDIN (since they are
meta-data), and I see no way of specifying default values for these
"missing" CSV columns. (As far as I can tell, COPY and pg_bulkload will
just use the table defaults for missing columns - am I missing
something?)
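
To make that more concrete, something like the following is what I would
like to end up with (building on the staging_orders sketch above; the
job id 42 is just a placeholder for the per-run value):

  -- extra meta-data column that is not present in the CSV
  ALTER TABLE staging_orders ADD COLUMN job_id integer;

  -- one idea: set a per-run default and list only the CSV columns in COPY;
  -- columns left out of the column list should get their default value
  ALTER TABLE staging_orders ALTER COLUMN job_id SET DEFAULT 42;
  COPY staging_orders (order_id, customer, amount)
      FROM STDIN WITH (FORMAT csv);

  -- another idea: COPY into a plain temp table first, then stamp the job id
  CREATE TEMP TABLE csv_in (order_id integer, customer text, amount numeric);
  COPY csv_in FROM STDIN WITH (FORMAT csv);
  INSERT INTO staging_orders (job_id, order_id, customer, amount)
      SELECT 42, order_id, customer, amount FROM csv_in;

The second variant writes the data twice, which is what I would like to
avoid for large files.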

So - do you have any advice on how to design a fast bulk import for
staging data?

Thanks in advance,
Thomas

[1] http://www.postgresql.org/docs/9.0/static/sql-copy.html
[2] http://pgbulkload.projects.postgresql.org/pg_bulkload.html
