True. I implemented the SAVEPOINTs solution across the board. We'll see
what kind of difference it makes. If it's fast enough, I may be able to do
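The SAVEPOINT approach being described can be sketched as follows. This is a minimal illustration, not the poster's actual code: the table name, rows, and savepoint name are made up, and sqlite3 is used here only so the sketch runs stand-alone; with PostgreSQL the same SAVEPOINT / ROLLBACK TO SAVEPOINT statements would be issued through psycopg2.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")

# The duplicate 2 would abort a plain multi-row insert entirely;
# a per-row savepoint lets the rest of the batch survive.
rows = [1, 2, 2, 3]
inserted = 0

conn.execute("BEGIN")
for r in rows:
    conn.execute("SAVEPOINT sp")
    try:
        conn.execute("INSERT INTO events (id) VALUES (?)", (r,))
        conn.execute("RELEASE SAVEPOINT sp")
        inserted += 1
    except sqlite3.IntegrityError:
        # Roll back only this row, not the whole transaction.
        conn.execute("ROLLBACK TO SAVEPOINT sp")
        conn.execute("RELEASE SAVEPOINT sp")
conn.execute("COMMIT")
print(inserted)  # 3
```

The cost of this pattern is one extra round trip (or more) per row, which is why the staging-table alternative quoted below can be so much faster.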
On Tue, Feb 21, 2012 at 3:53 PM, Samuel Gendler wrote:
> On Tue, Feb 21, 2012 at 9:59 AM, Alessandro Gagliardi
> <alessandro(at)path(dot)com> wrote:
>> I was thinking about that (as per your presentation last week), but my
>> problem is that when I'm building up a series of inserts, if one of them
>> fails (very likely in this case due to a unique_violation), I have to
>> roll back the entire transaction. I asked about this in the novice forum
>> <http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html>
>> and was advised to use SAVEPOINTs. That seems a little clunky to me, but
>> may be the best way. Would it be realistic to expect this to increase
>> performance ten-fold?
> If you insert into a different table first and do the bulk insert later,
> you can de-dupe before doing the insertion, eliminating the issue entirely.
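The staging-table suggestion above can be sketched like this. Table and column names are hypothetical, and sqlite3 again stands in for PostgreSQL so the example is self-contained; the set-based INSERT ... SELECT is what eliminates the unique_violation problem without any savepoints.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY)")
# Staging table has no unique constraint, so the raw load never fails.
conn.execute("CREATE TABLE events_staging (id INTEGER)")
conn.execute("INSERT INTO events (id) VALUES (1)")  # row already in the target

# Bulk-load the incoming batch, duplicates and all.
conn.executemany("INSERT INTO events_staging (id) VALUES (?)",
                 [(1,), (2,), (2,), (3,)])

# One statement de-dupes within the batch (DISTINCT) and against the
# target (NOT EXISTS), then inserts the survivors.
conn.execute("""
    INSERT INTO events (id)
    SELECT DISTINCT s.id
    FROM events_staging s
    WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = s.id)
""")
conn.execute("DELETE FROM events_staging")
conn.commit()
print([r[0] for r in conn.execute("SELECT id FROM events ORDER BY id")])
```

In PostgreSQL the staging load would typically be done with COPY or a multi-row INSERT for speed, since the staging table has no constraints that could reject rows.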