From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Steve Eckmann <eckmann(at)computer(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: improving write performance for logging application
Date: 2006-01-04 00:00:12
Message-ID: 27720.1136332812@sss.pgh.pa.us
Lists: pgsql-performance
Steve Eckmann <eckmann(at)computer(dot)org> writes:
> We also found that we could improve MySQL performance significantly
> using MySQL's "INSERT" command extension allowing multiple value-list
> tuples in a single command; the rate for MyISAM tables improved to
> about 2600 objects/second. PostgreSQL doesn't support that language
> extension. Using the COPY command instead of INSERT might help, but
> since rows are being generated on the fly, I don't see how to use COPY
> without running a separate process that reads rows from the
> application and uses COPY to write to the database.
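
As an aside on the COPY point: COPY need not come from a file or a
separate process. The server accepts "COPY ... FROM STDIN" over the
ordinary client connection, and libpq exposes this through
PQputCopyData()/PQputCopyEnd(), so the application can stream rows as it
generates them. A minimal sketch of the data stream, assuming a
hypothetical log_table(ts, obj_id, payload) and the default
tab-delimited text format:

COPY log_table (ts, obj_id, payload) FROM STDIN;
2006-01-03 16:00:00	1001	first generated row
2006-01-03 16:00:01	1002	second generated row
\.

The terminating "\." line ends the copy; the application can begin
another COPY immediately for the next batch.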
Can you conveniently alter your application to batch INSERT commands
into transactions? I.e.:
BEGIN;
INSERT ...;
... maybe 100 or so inserts ...
COMMIT;
BEGIN;
... lather, rinse, repeat ...
This cuts down the transactional overhead quite a bit. A downside is
that you lose multiple rows if any INSERT fails, but then the same would
be true of multiple VALUES lists per INSERT.
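
If losing a whole batch to one bad row is a concern, a SAVEPOINT around
each INSERT (available since 8.0) lets you roll back just the failing
row and still commit the rest. A sketch, again with the hypothetical
log_table:

BEGIN;
SAVEPOINT s;
INSERT INTO log_table VALUES (...);
RELEASE SAVEPOINT s;
-- on an INSERT error, issue ROLLBACK TO SAVEPOINT s and continue
... repeat for each row ...
COMMIT;

The extra savepoint traffic eats into the batching gain, so this is
only worth doing if bad rows are actually expected.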
regards, tom lane