Re: What executes faster?

From: "Reinoud van Leeuwen" <reinoud(at)xs4all(dot)nl>
To: <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: What executes faster?
Date: 2001-10-01 09:39:45
Message-ID: 24970.194.109.0.126.1001929185.squirrel@webmail.xs4all.nl
Lists: pgsql-hackers

> [HACKERS] What executes faster?
> Now that I've found the solution for my duplicate key problem,
> I'm wondering what executes faster when I have to check for
> duplicates.
> 1. try to update
> if no row affected -> do the insert
> else done
> 2. do a select
> if row not found -> do the insert
> else do the update
> Another idea I'm thinking about:
> I'm doing the duplicate-key check myself now.
> Won't insert commands run faster if I replace
> the unique index with a non-unique index?
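
For reference, option 1 would look roughly like this (the table items(pk, val)
and the values are made up):

update items
set val = 'new value'
where pk = 42;
-- a command tag of "UPDATE 0" means no row matched, so fall back to:
insert into items (pk, val)
values (42, 'new value');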

I have solved a similar problem.
I have a large table (about 8 million rows) called radius and a table with
updates and new rows called radiusupdate.
The first thing I tried was two queries:
update radius
set somecolumn = radiusupdate.somecolumn -- placeholder: copy the changed columns here
from radiusupdate
where radius.pk = radiusupdate.pk

insert into radius
select *
from radiusupdate RU
where RU.pk not in (select pk from radius)

But the second one is obviously not very fast. A "not in" never is... So I
now do things just a little differently. I added a field called "newline"
to the table radiusupdate, with a default of true. Then I replaced the
second query with these two:

update radiusupdate
set newline = false
from radius R
where radiusupdate.pk = R.pk

insert into radius
select * -- if "newline" exists only in radiusupdate, list radius's columns explicitly instead of *
from radiusupdate RU
where RU.newline = true
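
The "newline" flag itself was added with something like this (just a sketch;
a column default only applies to rows inserted after it is set, so existing
rows are backfilled by hand):

alter table radiusupdate add column newline boolean;
alter table radiusupdate alter column newline set default true;
update radiusupdate set newline = true; -- backfill rows that existed before the default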

This is a lot faster in my case....

Reinoud
