Re: Performance Optimization for Dummies

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance Optimization for Dummies
Date: 2006-09-28 20:55:57
Message-ID: b42b73150609281355w5df1cb17qb6459ff2179ba9c0@mail.gmail.com
Lists: pgsql-performance

On 9/28/06, Carlo Stonebanks <stonec(dot)register(at)sympatico(dot)ca> wrote:
> The deduplication process requires so many programmed procedures that it
> runs on the client. Most of the de-dupe lookups are not "straight" lookups,
> but calculated ones employing fuzzy logic. This is because we cannot dictate
> the format of our input data and must deduplicate with what we get.
>
> This was one of the reasons why I went with PostgreSQL in the first place,
> because of the server-side programming options. However, I saw incredible
> performance hits when running processes on the server and I partially
> abandoned the idea (some custom-built name-comparison functions still run
> on the server).
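
for reference, a server-side fuzzy name comparison along those lines can be
quite short. the sketch below is purely illustrative (it assumes the contrib
fuzzystrmatch module is installed; the function name, table, and matching
rules are placeholders, not your custom logic):

create or replace function names_probably_match(a text, b text)
returns boolean as $$
begin
    -- treat trivially normalized equality as a duplicate
    if lower(trim(a)) = lower(trim(b)) then
        return true;
    end if;
    -- otherwise fall back on edit distance and a phonetic code
    return levenshtein(lower(a), lower(b)) <= 2
        or soundex(a) = soundex(b);
end;
$$ language plpgsql immutable;

-- e.g. select id, full_name from person
--      where names_probably_match(full_name, 'Jon Smyth');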

imo, the key to high-performance bulk data movement in postgresql is
mastering sql and pl/pgsql, especially the latter. once you get good
at it, your net time with copy + pl/pgsql will be less than with
insert + tcl.
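
to make that concrete, the usual pattern is: copy the raw input into a
staging table, then run a pl/pgsql function that dedupes and merges it into
the real table in one pass. a rough sketch (table and column names are made
up, and names_probably_match is the hypothetical helper sketched above):

create table staging_person (full_name text, phone text);

-- client side, per batch:
--   \copy staging_person from 'input.csv' with csv

create or replace function merge_staging() returns integer as $$
declare
    r      staging_person%rowtype;
    merged integer := 0;
begin
    for r in select * from staging_person loop
        -- keep only rows that don't fuzzy-match an existing person
        if not exists (
            select 1 from person p
            where names_probably_match(p.full_name, r.full_name)
        ) then
            insert into person (full_name, phone)
                values (r.full_name, r.phone);
            merged := merged + 1;
        end if;
    end loop;
    return merged;
end;
$$ language plpgsql;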

merlin
