From: Paul Lindner <lindner(at)inuus(dot)com>
To: Gregory Maxwell <gmaxwell(at)gmail(dot)com>
Cc: Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au>, Paul Lindner <lindner(at)inuus(dot)com>, andrew(at)supernews(dot)com, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Differences in UTF8 between 8.0 and 8.1
Date: 2005-11-01 12:15:50
Message-ID: 20051101121550.GE23652@inuus.com
Lists: pgsql-hackers
On Sun, Oct 30, 2005 at 11:49:41AM -0500, Gregory Maxwell wrote:
> On 10/26/05, Christopher Kings-Lynne <chriskl(at)familyhealth(dot)com(dot)au> wrote:
> > > iconv -c -f UTF8 -t UTF8
> > recode UTF-8..UTF-8 < dump_in.sql > dump_out.sql
>
> I've got a file with characters that pg won't accept that recode does
> not fix but iconv does. Iconv is fine for my application, so I'm just
> posting to the list so that anyone looking for why recode didn't work
> for them will find the suggestion to use iconv.
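For anyone hitting the same problem, a minimal illustration of the `iconv -c` approach quoted above (file names are illustrative; assumes GNU iconv, where `-c` silently discards input bytes that are invalid in the source encoding):

```shell
# Create a file containing an invalid UTF-8 byte (0xff can never
# occur in well-formed UTF-8).
printf 'valid text \xff more text\n' > dump_in.sql

# -c drops any byte sequence that is not valid UTF-8, so the
# output loads cleanly into a UTF8-encoded 8.1 database.
iconv -c -f UTF-8 -t UTF-8 dump_in.sql > dump_out.sql
```

After this, dump_out.sql contains only the valid bytes; the stray 0xff is gone.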
recode did not work for my sample data either; it passed the problem
character sequences through unchanged. I'm still looking for an iconv
that doesn't read the entire file into memory.
At this point I'm looking at using the split command to process the
input in 10000-line chunks. Sadly, split can't be used in a pipe.
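Something along these lines is what I have in mind (a sketch, not tested
on a full dump; file names and the chunk prefix are arbitrary):

```shell
# split writes its output to files, so this step can't sit inside a
# single pipeline -- the dump has to exist on disk first.
split -l 10000 dump_in.sql chunk.

# Clean each 10000-line piece with iconv (which then only ever holds
# one chunk in memory) and concatenate the results in order.
for f in chunk.*; do
  iconv -c -f UTF-8 -t UTF-8 "$f"
done > dump_out.sql

rm chunk.*
```

This works because split names the pieces in lexical order (chunk.aa,
chunk.ab, ...), so the shell glob re-reads them in the original order.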
BTW, how will sites that use Slony deal with this issue?
--
Paul Lindner ||||| | | | | | | | | |
lindner(at)inuus(dot)com