
Speed Question

From: Noah Silverman <noah(at)allresearch(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Speed Question
Date: 2002-12-20 22:57:28
Message-ID:
Lists: pgsql-performance

We are considering switching our systems over from MySQL to PostgreSQL.

Speed is one of our major concerns, so before switching we've decided
to perform some speed tests.  From what I understand, PostgreSQL is not
as fast as MySQL, but it should be close enough.

We've installed the software and have run some basic insert, index and
query tests that seem ridiculously slow.  I can't help thinking that we
are doing something wrong, or don't have things configured for optimal
performance.

We've performed these same tests on MySQL, and they run dramatically
faster there.

Here are the initial performance test results and issues...

Table configuration:
speedtest( prop1 integer, prop2 integer, prop3 integer, prop4 integer);
indexes on each of the four individual property fields

Each record consists of four random integers, uniformly distributed,
between 0 and 1000.  The integers are computed in the perl script
used to populate the table, not using an SQL random() function.
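For reference, the setup described above boils down to something like the
following sketch (the index names p1 through p4 are assumptions; the post
only shows an index named p4 in the EXPLAIN output):

```sql
-- Sketch of the test table and the four single-column indexes.
CREATE TABLE speedtest (
    prop1 integer,
    prop2 integer,
    prop3 integer,
    prop4 integer
);

-- One index per property column, as described in the post.
CREATE INDEX p1 ON speedtest (prop1);
CREATE INDEX p2 ON speedtest (prop2);
CREATE INDEX p3 ON speedtest (prop3);
CREATE INDEX p4 ON speedtest (prop4);
```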

Hardware configuration:  P3-500, 384MB ram, *unloaded* system.
Software configuration: Linux 2.4.20, reiserfs, standard Slackware install.

Issue #1:  Speed of inserts is relatively slow.  100000 inserts takes
roughly 10 minutes.  This isn't EVIL, but MySQL appears to be about
ten times faster here.  Is there something we could do differently with
the indexes?  Disable transactions?  Is there a more "raw" insert, which
may not set off triggers?
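One standard remedy for this (not from the post itself, just common
PostgreSQL practice) is to batch the inserts into a single transaction, or
to bulk-load with COPY, so each row doesn't pay for its own commit and
fsync.  The file path below is illustrative only:

```sql
-- Wrapping many inserts in one transaction amortizes the per-commit cost:
BEGIN;
INSERT INTO speedtest VALUES (1, 2, 3, 4);
-- ... remaining rows ...
COMMIT;

-- COPY is faster still for bulk loading (path is hypothetical):
COPY speedtest FROM '/tmp/speedtest_data.txt';
```

Dropping the four indexes before the load and recreating them afterward
also tends to help, since each insert otherwise updates all four indexes.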

Issue #2:  It doesn't appear as though multiple indexes are being used.
I.e.:  select count(*) from speedtest where (prop1 between 100 and 200)
and (prop2 between 100 and 200) and (prop3 between 100 and 200)
and (prop4 between 100 and 200)  produces a query plan that only
uses one index.  The following is pasted from the 'explain select' output:

  Aggregate  (cost=17.16..17.16 rows=1 width=0)
    ->  Index Scan using p4 on speedtest  (cost=0.00..17.16 rows=1 width=0)
          Index Cond: ((prop4 >= 100) AND (prop4 <= 200))
          Filter: ((prop1 >= 100) AND (prop1 <= 200) AND (prop2 >= 100)
                   AND (prop2 <= 200) AND (prop3 >= 100) AND (prop3 <= 200))
(4 rows)

It appears as though the index on prop4 is being used to determine a set
of records to fetch, which are subsequently filtered on the other
conditions.  Unfortunately, since the index condition matches 10% of the
table (due to the random uniform integers from 0-1000), this results in
a large number of record fetches and examinations the db engine must
make.  This query takes at least a second to execute, whereas we would
like to drop it into the sub-0.1 second range, and preferably into the
millisecond range.  While this would run faster on the production
machines than on my machine, it is still a fundamental flaw that
multiple indexes aren't being combined to restrict the record set to
fetch.

OTOH, if we could do index combining, we could fetch 10% of 10% of 10%
of the initial 10% of records...  Resulting in a microscopic number of
records to retrieve and examine.
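A standard workaround in PostgreSQL of this vintage (the 7.x planner will
not combine several single-column indexes for one scan; bitmap index
AND-ing only arrived in 8.1) is a single multicolumn index, sketched
below with an illustrative name:

```sql
-- One multicolumn index; with range conditions, a btree index of this
-- shape can only narrow efficiently on the leading column (prop1), but
-- that alone cuts the fetched set to ~10% of the table:
CREATE INDEX speedtest_all ON speedtest (prop1, prop2, prop3, prop4);

-- The query from the post, which can then use the index above:
SELECT count(*) FROM speedtest
WHERE prop1 BETWEEN 100 AND 200
  AND prop2 BETWEEN 100 AND 200
  AND prop3 BETWEEN 100 AND 200
  AND prop4 BETWEEN 100 AND 200;
```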

Can anybody give me some ideas as to what I am doing wrong???




pgsql-performance by date

Next:     From: scott.marlowe  Date: 2002-12-20 23:23:59
          Subject: Re: Speed Question
Previous: From: Josh Berkus  Date: 2002-12-20 17:01:28
          Subject: Re: 4G row table?
