Multiple insert performance trick or performance misunderstanding?

From: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Multiple insert performance trick or performance misunderstanding?
Date: 2005-09-24 20:51:16
Message-ID: dh4e88$25qe$1@news.hub.org
Lists: pgsql-performance
When I need to insert a few hundred or a few thousand rows into
a table from a 3-tier application, it seems I'm much better
off building one big string of semicolon-separated insert
statements and sending it in a single call, rather than sending
the statements one at a time - even when I do the obvious things
like wrapping the statements in a transaction and using the
library's prepared statements.

I tried both Ruby/DBI and C#/Npgsql, and in both cases a set
of inserts that took 3 seconds when run individually
took about 0.7 seconds when concatenated together.
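
(For reference, a one-at-a-time version with Npgsql prepared statements
looks roughly like the sketch below. It's only a sketch: connString,
dataElements, obj.val1/obj.val2, and the Double column types are
placeholders, and Npgsql's exact parameter/prepare syntax may vary a
bit between versions.)

using Npgsql;
using NpgsqlTypes;

// Baseline: one prepared INSERT, executed once per row, all inside a
// single transaction.  Values are bound as parameters, not concatenated.
NpgsqlConnection conn = new NpgsqlConnection(connString);
conn.Open();
NpgsqlTransaction tx = conn.BeginTransaction();
NpgsqlCommand cmd = new NpgsqlCommand(
   "insert into user_point_features (col1,col2) values (:v1,:v2)", conn, tx);
cmd.Parameters.Add(new NpgsqlParameter("v1", NpgsqlDbType.Double));  // column types are a guess
cmd.Parameters.Add(new NpgsqlParameter("v2", NpgsqlDbType.Double));
cmd.Prepare();
foreach (var obj in dataElements) {
   cmd.Parameters["v1"].Value = obj.val1;
   cmd.Parameters["v2"].Value = obj.val2;
   cmd.ExecuteNonQuery();   // one network round trip per row
}
tx.Commit();
conn.Close();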

Is it expected that I'd be better off sending big
concatenated strings like
   "insert into tbl (c1,c2) values (v1,v2);insert into tbl (c1,c2) values (v3,v4);..."
instead of sending them one at a time?

The C#/Npgsql version that concatenates the inserts looks roughly like this:

db.ExecuteSQL("BEGIN");
sql = new System.Text.StringBulder(10000);
for ([a lot of data elements]) {
   sql.Append(
      "insert into user_point_features (col1,col2)"+
      " values ("                      +obj.val1 +","+obj.val2+");"
   );
}
db.ExecuteSQL(sql.ToString());
db.ExecuteSQL("COMMIT");
