plsql gets "out of memory"

From: Rural Hunter <ruralhunter(at)gmail(dot)com>
To: pgsql-admin(at)postgresql(dot)org
Subject: plsql gets "out of memory"
Date: 2011-08-29 13:11:19
Message-ID: 4E5B8FF7.4040007@gmail.com
Lists: pgsql-admin

Hi all,
I'm a newbie here. I'm testing pgsql with my mysql data; if the
performance is good, I will migrate from mysql to pgsql.
I installed pgsql 9.1rc on my Ubuntu server. I'm trying to import a
large SQL file dumped from mysql into pgsql with 'psql -f'. The file is
around 30G and consists of bulk INSERT commands. It ran for several
hours and then aborted with an "out of memory" error. This is the tail
of the log I got:
INSERT 0 280
INSERT 0 248
INSERT 0 210
INSERT 0 199
invalid command \n
out of memory
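
For reference, the import command I'm running is essentially this (testdb
and the file names are placeholders; dump_fixed.sql is the dump after the
sed cleanup described below):

  psql -d testdb -f dump_fixed.sql > import.log 2>&1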

On the server side, I only found these errors about invalid UTF-8
characters, which I believe come from escape characters in the mysql export:
2011-08-29 19:19:29 CST ERROR: invalid byte sequence for encoding
"UTF8": 0x00
2011-08-29 19:55:35 CST LOG: unexpected EOF on client connection
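
I'm not sure whether the 0x00 errors come from literal NUL bytes in the
dump or only from the escaped \0 sequences; a quick count of raw NUL bytes
like this should tell (the file name is again a placeholder):

  tr -dc '\000' < dump_fixed.sql | wc -c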

My understanding is that this is a client-side issue and not related to any
server memory setting. But how can I adjust the memory usage of the psql
program?

To handle the escape character '\', which is on by default in mysql but not
in pgsql, I have already made some rough modifications to the exported SQL
dump file with:
sed "s/,'/,E'/g" | sed 's/\\0/ /g'
(the full pipeline is shown below). I guess there are still some characters
that aren't handled, and that might cause an INSERT command to be split into
several invalid pgsql commands. Would that be the cause of the "out of
memory" error?
