I'm not sure if this is the right forum for this question, so I'll start
here and escalate as necessary.
One of the things I do a lot is copy large tables from one database to
another. I'd started life using pg_dump, but this requires that I save the
whole table into an intermediate file (or play games with pipes).
Rather than doing either, I instead wrote a quick script which creates a
cursor at the source, and starts schlepping data from one to the other. For
"small" tables (say, a million rows or less), this works just fine, but
when I try to copy a bigger table (tens of millions of rows), the
performance continually drops off, with the copy becoming slower and slower.
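Roughly, the script looks like the sketch below. The connection strings
and table name are placeholders, and psycopg2 is used purely for
illustration; the actual script differs in detail (for one thing, it
keeps a single COPY command open for the whole run, while this sketch
issues a fresh COPY per batch for brevity), but the shape is the same:

import io
import psycopg2

BATCH = 10000  # rows pulled from the source per fetch

src = psycopg2.connect("dbname=source_db")   # placeholder DSNs
dst = psycopg2.connect("dbname=dest_db")

# A named cursor is a server-side cursor, so the source never has to
# materialize the whole table at once.
src_cur = src.cursor(name="mover")
src_cur.execute("SELECT * FROM big_table")

dst_cur = dst.cursor()
while True:
    rows = src_cur.fetchmany(BATCH)
    if not rows:
        break
    # Render the batch in COPY text format (escaping is simplified here)
    # and push it to the destination with COPY ... FROM STDIN.
    text = "".join(
        "\t".join(r"\N" if v is None else str(v) for v in row) + "\n"
        for row in rows)
    dst_cur.copy_expert("COPY big_table FROM STDIN", io.StringIO(text))

dst.commit()
src_cur.close()
src.close()
dst.close()
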
On 8.3-beta1, I see the memory utilization skyrocketing, specifically
the non-shared resident set of the postmaster daemon, driving the whole
system deep into swap (sooner or later) and bringing things to a
screeching halt. But on 8.1.4 I don't see the memory utilization
problem (the memory utilization of a daemon pegs out at just a little
larger than the shared buffers size, i.e. what I'd expect), yet I still
see the same progressive slowdown.
It is definitely the copy that is the problem: I've tried a variation of
my script with the copy commented out (just throwing the data away) to
test whether the cursor is the bottleneck, and I get 20K+ rows/second
from the cursor.
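That throwaway test is essentially the following (same placeholder
names as the sketch above):

import time
import psycopg2

src = psycopg2.connect("dbname=source_db")   # placeholder DSN
cur = src.cursor(name="reader")              # server-side cursor
cur.execute("SELECT * FROM big_table")

count = 0
start = time.time()
while True:
    rows = cur.fetchmany(10000)
    if not rows:
        break
    count += len(rows)   # just count the rows; the data is thrown away

elapsed = time.time() - start
print("%d rows, %.0f rows/sec" % (count, count / elapsed))
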
I've also tried breaking the copy up (finishing the current COPY command
and starting a new one every million rows or so), and that doesn't help.
I've also fiddled with transactions and with rate-limiting the copy, and
neither of those helps.
Is this a bug in postgres? If not, as I assume, what should I be doing
to make this work fast?