From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "scott(dot)marlowe" <scott(dot)marlowe(at)ihs(dot)com>
Cc: Kevin White <kwhite(at)digital-ics(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Really bad insert performance: what did I do wrong?
Date: 2003-02-22 04:23:47
Message-ID: 2703.1045887827@sss.pgh.pa.us
Lists: pgsql-performance

"scott.marlowe" <scott(dot)marlowe(at)ihs(dot)com> writes:
> 3: Inserting ALL 700,000 rows in one transaction is probably not optimal.
> Try putting a test in every 1,000 or 10,000 rows to toss a "commit;begin;"
> pair at the database while loading. Inserting all 700,000 rows at once
> means postgresql can't recycle the transaction logs, so you'll have
> 700,000 rows worth of data in the transaction logs waiting for you to
> commit at the end.
That was true in 7.1.0, but we got rid of that behavior *very* quickly
(by 7.1.3, according to the release notes). Long transactions do not
currently stress the WAL storage any more than the same amount of work
in short transactions.
Which is not to say that there's anything wrong with divvying the work
into 1000-row-or-so transactions. I agree that that's enough to push
the per-transaction overhead down into the noise.
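For readers following the thread, the commit-every-N-rows pattern scott.marlowe suggests can be sketched generically. This is an illustrative sketch only, not code from the thread: the `batched_insert` helper, the `FakeConnection` stand-in, and the batch size of 1000 are all assumptions chosen to show the commit cadence, and with a real driver (e.g. psycopg2) `conn` would be an actual database connection.

```python
def batched_insert(conn, rows, insert_one, batch_size=1000):
    """Insert rows one at a time, committing every batch_size rows
    so no single transaction covers the whole load. (Per the thread,
    this mattered for WAL recycling only on pre-7.1.3 servers, but it
    still keeps per-transaction overhead negligible.)"""
    count = 0
    for row in rows:
        insert_one(conn, row)
        count += 1
        if count % batch_size == 0:
            conn.commit()
    conn.commit()  # commit the final partial batch
    return count


# Minimal stand-in for a DB connection, just to demonstrate the cadence;
# a real loader would pass a psycopg2 (or similar) connection instead.
class FakeConnection:
    def __init__(self):
        self.commits = 0
        self.inserted = []

    def commit(self):
        self.commits += 1


conn = FakeConnection()
total = batched_insert(conn, range(2500),
                       lambda c, r: c.inserted.append(r),
                       batch_size=1000)
# 2500 rows -> commits after row 1000, row 2000, and one final commit
```

With 2,500 rows and a batch size of 1,000, the loader issues three commits in total: two full batches plus the trailing partial batch.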
regards, tom lane