From: "Jeff Hoffmann" <jeff(at)remapcorp(dot)com>
To: "Michael A(dot) Koerber" <mak(at)ll(dot)mit(dot)edu>
Cc: <pgsql-general(at)postgreSQL(dot)org>
Subject: Re: [GENERAL] equivalent of sqlload?
Date: 1998-11-25 20:01:02
Message-ID: 005801be18ae$54b3acb0$c525c4ce@remapcorp.com
Lists: pgsql-general
>I am running v6.3.2 under Linux and have found that the "copy" command
>works only for small amounts of data.
I wouldn't say it works only for small amounts of data -- I've loaded over 5 million
records (700+ MB) into a table with COPY. I don't know how long it took
because I just let it run overnight (it built a couple of indexes, too), but
it didn't crash (running on a PPro 180 with 96 MB RAM) and was done by
morning.
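For readers who haven't used it, a bulk load of that sort looks roughly like this (table and file names here are made up for illustration; the default column delimiter is a tab):

```sql
-- Hypothetical table and path. COPY ... FROM reads the file on the
-- *server* side, so the backend process (not psql) must be able to
-- open it.
COPY mytable FROM '/tmp/mytable.dat';
```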
>When trying to "copy" several
>thousand records I notice that system RAM and swap space continue to get
>eaten until there is no further memory available. "psql" then fails.
>What remains is a .../pgdata/base/XYZ file system with the table being
>copied into. That table may be several (tens, hundreds) of Meg in size,
>but a "psql -d XYZ -c 'select count(*) from table'" will only return a zero
>count.
You probably ran out of memory for the server process. Check out "limit"
(or "ulimit") -- you should be able to bump the datasize up to 64 MB or so
(that's what mine is normally set to; I don't think I had to adjust it for
the 5-million-record table).
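As a concrete sketch (sh/bash syntax; the exact flag and units vary by shell and OS), checking and raising that limit might look like:

```shell
# Show the current per-process data-segment limit (in kbytes on most
# systems, or "unlimited"). A small value here is a common reason a
# large COPY kills the backend.
ulimit -d

# To raise it to roughly 64 MB in the shell that starts the postmaster:
#   ulimit -d 65536          # sh/bash
#   limit datasize 64m       # csh/tcsh
```

The limit must be raised in the shell that launches the postmaster, since the backend processes inherit it from there.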
>I don't know if there are any changes that can be made to speed this type
>of process up, but this is definitely a black-mark.
It is kind of ugly, but it gets the job done.
Next Message: Michael A. Koerber, 1998-11-25 20:46:35, Re: [GENERAL] equivalent of sqlload?
Previous Message: Michael A. Koerber, 1998-11-25 19:01:46, Re: [GENERAL] equivalent of sqlload?