From: Nikhil Sontakke <nikhil(dot)sontakke(at)enterprisedb(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: Josh Berkus <josh(at)agliodbs(dot)com>, Andrew Dunstan <andrew(at)dunslane(dot)net>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Ragged CSV import
Date: 2009-09-10 08:38:24
Message-ID: a301bfd90909100138j628fcfb2oc2fb3a2bbe002fda@mail.gmail.com
Lists: pgsql-hackers
Hi,
> the two most useful are to read in only some of the defined columns,
> and to output to a separate disk file any rows which failed to match
> the expected format. The latter would not cause the copy to fail
> unless the count of such rows exceeded a user-specified threshold.
>
+1
The ability to handle rows that would otherwise be discarded due to
constraint violations, bad column input, etc. sounds like a big help
for large copy operations.
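As a rough sketch, the proposed behavior might look something like this
(purely hypothetical syntax — no such COPY options exist today; option
names are made up for illustration):

```sql
-- Hypothetical: divert unparseable rows to a side file instead of
-- aborting the whole COPY, up to a user-specified threshold.
COPY mytable FROM '/data/input.csv' WITH CSV
    REJECT FILE '/data/input.rejects'  -- rows failing the expected format
    MAX REJECTS 100;                   -- fail the COPY beyond 100 bad rows
```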
Another useful capability would be to transform input columns via SQL
expressions before loading them into the table. Given the way UPDATE
works (each updated row leaves behind a dead tuple), this could avoid
the unnecessary table bloat caused by fine-tuning some of the columns
after the load.
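For comparison, the usual workaround today is a staging table plus a
single INSERT ... SELECT (table and column names here are made up):

```sql
-- Load raw text into a throwaway staging table, then apply the
-- transformations in one pass instead of post-load UPDATEs, which
-- would rewrite every corrected row and bloat the target table.
CREATE TEMP TABLE staging (name text, amount text);
COPY staging FROM '/data/input.csv' WITH CSV;

INSERT INTO target (name, amount)
SELECT trim(name), amount::numeric
FROM staging;
```

Transforming during COPY itself would cut out the intermediate table
and the extra pass over the data.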
Regards,
Nikhils
--
http://www.enterprisedb.com
Next message: Dimitri Fontaine, 2009-09-10 09:24:52, Re: Ragged CSV import
Previous message: Peter Eisentraut, 2009-09-10 08:24:09, Re: Ragged CSV import