Re: [SQL] Slow Inserts Again

From: pierre(at)desertmoon(dot)com
To: fmorton(at)base2inc(dot)com (Frank Morton)
Cc: vadim(at)krs(dot)ru, pgsql-sql(at)hub(dot)org
Subject: Re: [SQL] Slow Inserts Again
Date: 1999-05-03 14:06:53
Message-ID: 19990503140653.7998.qmail@desertmoon.com
Lists: pgsql-sql

Hmmm.
I've had problems with punctuation and such when importing large quantities
of text into my DB, but I've always had success using copy. Have you tried
using perl to munge your data and escape the appropriate characters?

I've always used the following to import data into a clean DB.

copy fubar from '/home/pierre/data/fubar.txt' using delimiters ',';

How are you building your import files? That is, how are you putting your
data together?

For me, simply applying the regexes s/'/''/g and s/,/\\,/g to each text field
BEFORE I dump it into my data file is enough for the copy command to import
it cleanly.

So, for a table with three varchar columns A/B/C, my data file might look
like:

However\, I''m here.,Don''t take me seriously.,Hi there!

The above would be imported correctly. I may be missing something, as I've
only just started reading this thread, but I hope this helps...
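In case it helps, here's a rough Python sketch of the munging step I described
above (the helper name `munge` is just for illustration; it mirrors the two
substitutions, doubling single quotes and backslash-escaping commas):

```python
# Sketch of the escaping described above: double single quotes and
# backslash-escape commas in each text field, then join the fields
# into one comma-delimited line suitable for COPY ... USING DELIMITERS ','.

def munge(field):
    field = field.replace("'", "''")    # s/'/''/g
    field = field.replace(",", "\\,")   # s/,/\\,/g
    return field

rows = [("However, I'm here.", "Don't take me seriously.", "Hi there!")]

lines = [",".join(munge(f) for f in row) for row in rows]
# lines[0] -> "However\, I''m here.,Don''t take me seriously.,Hi there!"
```

Writing those lines to a file gives you something copy can load directly.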

-=pierre

>
> >> This last attempt, I bracket each insert statement with
> > ^^^^^^^^^^^^^^^^^^^^^
> >> "begin;" and "end;".
> >
> >Why _each_?
> >Enclose ALL statements by begin; & end; to insert ALL data
> >in SINGLE transaction:
>
> This was suggested by someone on the list so that all
> 150,000 inserts would not be treated as one large transaction.
>
> Like I said before, I have tried all suggestions without success.
>
