andrew klassen <aptklassen(at)yahoo(dot)com> writes:
> I am using the c-library interface and for these particular transactions
> I preload PREPARE statements. Then as I get requests, I issue a BEGIN,
> followed by at most 300 EXECUTES and then a COMMIT. That is the
> general scenario. What value beyond 300 should I try?
Well, you could try numbers in the low thousands, but you'll probably
get only incremental improvement.
> Also, how might COPY (which involves file I/O) improve the
> above scenario?
COPY needn't involve file I/O. If you are using libpq you can push
anything you want into PQputCopyData. This would involve formatting
the data according to COPY's escaping rules, which are rather different
from straight SQL, but I doubt it'd be a huge amount of code. Seems
like it'd be worth a try.
regards, tom lane
List: pgsql-performance