Re: Array interface

From: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Array interface
Date: 2010-11-10 09:10:39
Message-ID: 4CDA618F.7050102@catalyst.net.nz
Lists: pgsql-performance

On 03/11/10 08:46, Mladen Gogala wrote:
> I wrote a little Perl script, intended to test the difference that
> array insert makes with PostgreSQL. Imagine my surprise when a single
> record insert into a local database was faster than batches of 100
> records. Here are the two respective routines:

Interesting - I'm seeing a modest but repeatable improvement with bigger
array sizes (using the attached program to insert into pgbench_accounts)
on an older dual-core AMD box with a single SATA drive running Ubuntu
10.04 i686.
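For anyone who doesn't want to open the attachment, the DBI array-insert path looks roughly like the sketch below. This is only a sketch under assumptions: execinsert.pl itself is not reproduced here, the column list is just the pgbench_accounts default, and the transpose_rows helper is mine, not the driver's. The point is that execute_array takes one arrayref per placeholder (a column of values each), so the per-row looping moves out of Perl and into the driver:

```perl
use strict;
use warnings;

# Sketch only -- with DBI, an array insert binds one arrayref per
# placeholder. Connection setup omitted; $dbh is a DBD::Pg handle:
#
#   my $sth = $dbh->prepare(
#       'INSERT INTO pgbench_accounts (aid, bid, abalance, filler)'
#       . ' VALUES (?, ?, ?, ?)');
#   $sth->execute_array({ ArrayTupleStatus => \my @status },
#                       \@aids, \@bids, \@abalances, \@fillers);

# Hypothetical helper: turn row-oriented data into the column arrays
# execute_array expects (pure Perl, no database required).
sub transpose_rows {
    my ($rows) = @_;
    my @cols;
    for my $row (@$rows) {
        push @{ $cols[$_] }, $row->[$_] for 0 .. $#$row;
    }
    return \@cols;
}
```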

   rows      arraysize   elapsed(s)
1000000           1         161
1000000          10         115
1000000         100         110
1000000        1000         109

This is *despite* the fact that tracing the executed SQL (by setting
log_min_duration_statement = 0) shows *no* difference between the runs
(i.e. 1000000 individual INSERT executions are performed in each case).
I'm guessing that some Perl driver overhead is being saved here.

I'd be interested to see if you can reproduce the same or a similar effect.

What might also be interesting is doing each INSERT with an array's worth
of bind variables appended to the VALUES clause (i.e. a multi-row VALUES
list), as this performs only one INSERT call per "array" of values.
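A minimal sketch of that idea (the multirow_insert_sql helper and the bare table name are illustrative, not taken from the attached script): build one statement with N placeholder tuples, then flatten a batch of rows into a single execute() call.

```perl
use strict;
use warnings;

# Hypothetical helper: generate an INSERT whose VALUES clause carries
# $nrows placeholder tuples of $ncols placeholders each, so a whole
# "array" of rows goes through one execute().
sub multirow_insert_sql {
    my ($table, $ncols, $nrows) = @_;
    my $tuple = '(' . join(', ', ('?') x $ncols) . ')';
    return "INSERT INTO $table VALUES " . join(', ', ($tuple) x $nrows);
}

# Usage with DBI (connection and data omitted):
#   my $sql = multirow_insert_sql('pgbench_accounts', 4, 100);
#   my $sth = $dbh->prepare($sql);
#   $sth->execute(map { @$_ } @batch_of_100_rows);
```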

Cheers

Mark

Attachment Content-Type Size
execinsert.pl application/x-perl 1.3 KB
