
Re: Berkeley DB...

From: Karel Zak <zakkr(at)zf(dot)jcu(dot)cz>
To: Mike Mascari <mascarm(at)mascari(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Hannu Krosing <hannu(at)tm(dot)ee>, Matthias Urlichs <smurf(at)noris(dot)de>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Berkeley DB...
Date: 2000-05-29 14:57:04
Message-ID:
Lists: pgsql-hackers
> It will be interesting to see the speed differences between the
> 100,000 inserts above and those which have been PREPARE'd using
> Karel Zak's PREPARE patch. Perhaps a generic query cache could be

My test:

	postmaster:	-F -B 2000	
	rows:		100,000 
	table:		create table tab (data text);
	data:		37 B per row
	--- all is in one transaction

	native insert:		66.522s
	prepared insert:	59.431s	    - 11% faster	
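The two insert styles compared above can be sketched roughly as follows (the exact syntax of the PREPARE patch may differ from what later landed in PostgreSQL; the table name `tab` is assumed from the test setup):

```sql
-- Native insert: parsed and planned again for every row.
INSERT INTO tab VALUES ('37 bytes of sample data .............');

-- Prepared insert: parse and plan once, then execute many times.
PREPARE ins (text) AS INSERT INTO tab VALUES ($1);
EXECUTE ins ('37 bytes of sample data .............');
```

The saving per row is only the parse/plan step, which for a trivial INSERT is small, hence the modest 11% gain.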

IMHO, parsing/optimizing is relatively cheap for a simple INSERT.
The query (plan) cache will probably save time for complicated SELECTs
with functions, etc. (i.e. queries whose parsing requires lookups in the
system tables). For example:

	insert into tab values ('some data' || 'somedata' || 'some data');

	native insert:		91.787s
	prepared insert:	45.077s     - 50% faster

	(Note: the absolute times in this second test are lower because I
	stopped the X server, so postgres had more memory :-)

 The best way to insert large amounts of simple data is (as always) COPY;
there is no faster way.

 PostgreSQL's paths for a query:
 native insert:		parser -> planner -> executor -> storage
 prepared insert:	parser (for the EXECUTE stmt) -> executor -> storage
 copy:			utils (copy) -> storage
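For comparison, the COPY path above corresponds to a bulk load like this (a minimal sketch; the table name `tab` is assumed from the test, and the trailing `\.` terminates stdin input in the default text format):

```sql
-- Bulk load: rows go through the COPY utility code straight to storage,
-- with no per-row parse/plan cycle at all.
COPY tab FROM stdin;
some data for row one
some data for row two
\.
```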

> amongst other things). I'm looking forward to when the 7.1 branch
> occurs... :-)

 Me too.

