
Re: insert/update tps slow with indices on table > 1M rows

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: andrew klassen <aptklassen(at)yahoo(dot)com>
Cc: James Mansion <james(at)mansionfamily(dot)plus(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: insert/update tps slow with indices on table > 1M rows
Date: 2008-06-04 22:52:19
Message-ID: 6656.1212619939@sss.pgh.pa.us
Lists: pgsql-performance
andrew klassen <aptklassen(at)yahoo(dot)com> writes:
> I am using the c-library interface and for these particular transactions
> I preload PREPARE statements. Then as I get requests, I issue a BEGIN, 
> followed by at most 300 EXECUTES and then a COMMIT. That is the
> general scenario. What value beyond 300 should I try? 

Well, you could try numbers in the low thousands, but you'll probably
get only incremental improvement.

> Also, how might COPY (which involves file I/O) improve the 
> above scenario? 

COPY needn't involve file I/O.  If you are using libpq you can push
anything you want into PQputCopyData.  This would involve formatting
the data according to COPY's escaping rules, which are rather different
from straight SQL, but I doubt it'd be a huge amount of code.  Seems
worth trying.

			regards, tom lane
