Re: PostgreSQL insert speed tests

From: Sezai YILMAZ <sezai(dot)yilmaz(at)pro-g(dot)com(dot)tr>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL insert speed tests
Date: 2004-02-27 14:30:32
Message-ID: 403F5488.5050705@pro-g.com.tr
Lists: pgsql-general

Sezai YILMAZ wrote:

> create index agentid_ndx on logs using hash (agentid);
> create index ownerid_ndx on logs using hash (ownerid);
> create index hostid_ndx on logs using hash (hostid);
> -------------------------------------------------------------------------
>                            speed for            speed for
> # of EXISTING RECORDS      PostgreSQL 7.3.4     PostgreSQL 7.4.1
> =========================================================================
>
> 0 initial records          1086 rows/s          1324 rows/s
> 200.000 initial records     781 rows/s           893 rows/s
> 400.000 initial records     576 rows/s           213 rows/s
> 600.000 initial records     419 rows/s           200 rows/s
> 800.000 initial records     408 rows/s          not tested because of
>                                                 bad results

I changed the three hash indexes to btree.
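
The switch is just dropping the hash indexes and recreating them as
btree (the default access method), roughly like this:

    drop index agentid_ndx;
    drop index ownerid_ndx;
    drop index hostid_ndx;

    create index agentid_ndx on logs using btree (agentid);
    create index ownerid_ndx on logs using btree (ownerid);
    create index hostid_ndx on logs using btree (hostid);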

Performance increased by about a factor of two (1905 rows/s on
PostgreSQL 7.3.4).

Concurrent inserts now work.

In theory the columns I changed are better suited to hash indexes: they
are never used for ordering, only for exact-value matches, which is
exactly what hash indexes are for. But hash indexes can deadlock under
multiple concurrent inserts. I think I can live with btree indexes.
They work better. :-)
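
(The lookups on those columns are always plain equality tests, something
like this hypothetical example:

    select * from logs where agentid = 42;

which is why hash indexes looked attractive in the first place.)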

-sezai
