From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alessandro Gagliardi <alessandro(at)path(dot)com>
Cc: pgsql-novice(at)postgresql(dot)org
Subject: Re: execute many for each commit
Date: 2012-02-17 23:16:27
Message-ID: 6801.1329520587@sss.pgh.pa.us
Lists: pgsql-novice
Alessandro Gagliardi <alessandro(at)path(dot)com> writes:
> This is really more of a psycopg2 than a PostgreSQL question per se, but
> hopefully there are a few Pythonistas on this list who can help me out. At
> a recent PUG meeting I was admonished on the folly of committing after
> every execute statement (especially when I'm executing hundreds of inserts
> per second). I was thinking of batching a bunch of execute statements (say,
> 1000) before running a commit, but the problem is that if any one of those
> inserts fails (say, because of a unique_violation, which happens quite
> frequently) then I have to rollback the whole batch. Then I'd have to come
> up with some logic to retry each one individually or something similarly
> complicated.
Subtransactions (savepoints) are considerably cheaper than full
transactions. Alternatively you could consider turning off
synchronous_commit, if you don't need a guarantee that COMMIT means "it's
already safely on disk".
regards, tom lane
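The savepoint-per-insert pattern Tom suggests might look like the sketch below. To keep it self-contained and runnable it uses Python's stdlib sqlite3 module rather than psycopg2, but the SAVEPOINT / ROLLBACK TO / RELEASE statements are the same ones you would send through a psycopg2 cursor against PostgreSQL; the table, savepoint names, and data are hypothetical.

```python
import sqlite3

# In-memory database with a UNIQUE constraint so we can provoke the
# constraint violations described in the thread. isolation_level=None
# puts the connection in autocommit mode so we control the transaction
# boundaries ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

rows = [(1, "a"), (2, "b"), (1, "dup"), (3, "c")]  # id=1 repeats on purpose

conn.execute("BEGIN")           # one transaction for the whole batch
inserted = 0
for i, row in enumerate(rows):
    sp = f"sp_{i}"              # one savepoint per row
    conn.execute(f"SAVEPOINT {sp}")
    try:
        conn.execute("INSERT INTO events (id, payload) VALUES (?, ?)", row)
    except sqlite3.IntegrityError:
        # Undo only this row; the rest of the batch is untouched.
        conn.execute(f"ROLLBACK TO SAVEPOINT {sp}")
    else:
        inserted += 1
    conn.execute(f"RELEASE SAVEPOINT {sp}")
conn.execute("COMMIT")          # a single commit for ~1000 inserts

count = conn.execute("SELECT count(*) FROM events").fetchone()[0]
print(inserted, count)          # 3 3 — the duplicate row was skipped
```

With psycopg2 you would catch psycopg2.errors.UniqueViolation (or the broader psycopg2.IntegrityError) instead of sqlite3.IntegrityError. Tom's second suggestion needs no application code at all: `SET synchronous_commit = off;` on the session makes COMMIT return before the WAL reaches disk, so a crash can lose the last few commits but never corrupts the database.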