Re: insert/update tps slow with indices on table > 1M rows

From: "Heikki Linnakangas" <heikki(at)enterprisedb(dot)com>
To: "andrew klassen" <aptklassen(at)yahoo(dot)com>
Cc: "James Mansion" <james(at)mansionfamily(dot)plus(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: insert/update tps slow with indices on table > 1M rows
Date: 2008-06-05 06:18:29
Message-ID: 48478535.2010905@enterprisedb.com
Lists: pgsql-performance

andrew klassen wrote:
> I am using the c-library interface and for these particular transactions
> I preload PREPARE statements. Then as I get requests, I issue a BEGIN,
> followed by at most 300 EXECUTES and then a COMMIT. That is the
> general scenario. What value beyond 300 should I try?

Make sure you use the asynchronous PQsendQuery instead of plain PQexec.
Otherwise you'll be doing a round-trip for each EXECUTE anyway,
regardless of the batch size. Of course, if the bottleneck is somewhere
else, it won't make a difference.

--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com
