Re: Indexes and Primary Keys on Rapidly Growing Tables

From: Alessandro Gagliardi <alessandro(at)path(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Indexes and Primary Keys on Rapidly Growing Tables
Date: 2012-02-21 17:59:40
Message-ID: CAAB3BBJRh47qLa6sjEU3KyUhMGLX2=vQE=T6i4fwu6P+rzKuCw@mail.gmail.com
Lists: pgsql-performance

I was thinking about that (as per your presentation last week), but my
problem is that when I'm building up a series of inserts, if one of them
fails (very likely in this case due to a unique_violation) I have to roll
back the entire commit. I asked about this in the novice forum
(http://postgresql.1045698.n5.nabble.com/execute-many-for-each-commit-td5494218.html)
and was advised to use SAVEPOINTs. That seems a little clunky to me but
may be the best way. Would it be realistic to expect this to increase
performance ten-fold?
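In case it helps, here's a rough sketch of what I mean (table and column
names are made up; the trick is that each EXCEPTION block in PL/pgSQL
runs its statements under an implicit savepoint, so a unique_violation
only discards that one row instead of the whole batch):

    BEGIN;
    DO $$
    DECLARE
        r RECORD;
    BEGIN
        FOR r IN SELECT event_id, payload FROM staging_batch LOOP
            BEGIN
                INSERT INTO buffer_events (event_id, payload)
                VALUES (r.event_id, r.payload);
            EXCEPTION WHEN unique_violation THEN
                NULL;  -- duplicate key: skip this row, keep the rest
            END;
        END LOOP;
    END;
    $$;
    COMMIT;

Of course each per-row subtransaction has overhead of its own, so
whether this nets a ten-fold win presumably depends on how often the
duplicates actually occur.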

On Mon, Feb 20, 2012 at 3:30 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:

> On 2/20/12 2:06 PM, Alessandro Gagliardi wrote:
> > . But first I just want to know if people
> > think that this might be a viable solution or if I'm barking up the wrong
> > tree.
>
> Batching is usually helpful for inserts, especially if there's a unique
> key on a very large table involved.
>
> I suggest also making the buffer table UNLOGGED, if you can afford to.
>
> --
> Josh Berkus
> PostgreSQL Experts Inc.
> http://pgexperts.com
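
For reference, here's roughly what I understand the UNLOGGED buffer
table approach to look like (names are made up, and the flush assumes
no concurrent writes to the buffer while it runs; duplicates are
filtered with NOT EXISTS so the unique key on the big table isn't
violated):

    CREATE UNLOGGED TABLE buffer_events (
        event_id uuid PRIMARY KEY,
        payload  text
    );

    -- flush periodically into the main (logged) table,
    -- skipping rows whose keys are already present:
    BEGIN;
    INSERT INTO events (event_id, payload)
    SELECT b.event_id, b.payload
    FROM buffer_events b
    WHERE NOT EXISTS (
        SELECT 1 FROM events e WHERE e.event_id = b.event_id
    );
    TRUNCATE buffer_events;
    COMMIT;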
