Re: stored proc and inserting hundreds of thousands of rows

From: Joel Reymont <joelr1(at)gmail(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: stored proc and inserting hundreds of thousands of rows
Date: 2011-04-30 17:58:50
Message-ID: 4881277622113933088@unknownmsgid
Lists: pgsql-performance

Calculating distance involves passing an array of 150 float8 values to a
PL/pgSQL function, which then calls a C function 2 million times (at the
moment), passing it two arrays of 150 float8 values each.
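
For context, a minimal sketch of such a C function, assuming Euclidean
distance and the standard V1 calling convention; the function name and
the metric here are illustrative, not my actual code:

  #include "postgres.h"
  #include "fmgr.h"
  #include "utils/array.h"
  #include <math.h>

  PG_MODULE_MAGIC;

  PG_FUNCTION_INFO_V1(array_distance);

  /* Euclidean distance between two float8[] arrays. */
  Datum
  array_distance(PG_FUNCTION_ARGS)
  {
      ArrayType  *a = PG_GETARG_ARRAYTYPE_P(0);
      ArrayType  *b = PG_GETARG_ARRAYTYPE_P(1);
      float8     *da = (float8 *) ARR_DATA_PTR(a);
      float8     *db = (float8 *) ARR_DATA_PTR(b);
      int         n = ArrayGetNItems(ARR_NDIM(a), ARR_DIMS(a));
      float8      sum = 0.0;
      int         i;

      /* Assumes both arrays are non-null, 1-D, and the same length. */
      for (i = 0; i < n; i++)
      {
          float8 d = da[i] - db[i];
          sum += d * d;
      }
      PG_RETURN_FLOAT8(sqrt(sum));
  }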

Just calculating the distance for 2 million rows and extracting it
takes less than a second. I think that includes sorting by distance and
sending 100 rows to the client.
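
The query is essentially of this shape (the table and column names
below are placeholders, not the real schema):

  SELECT doc_id,
         array_distance(doc_vector, $1) AS dist
  FROM   documents
  ORDER  BY dist
  LIMIT  100;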

Are you suggesting eliminating the physical linking and calculating
matching documents on the fly?

Is there a way to speed up my C function by giving it all the float
arrays at once, calling it a single time, and having it return a set of
matches? Would that be faster than calling it from a SELECT, once per
array?
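
I'm imagining something like the following, with the C function
declared as a set-returning function so that a single call scans
everything and hands back only the matches (the library and function
names here are hypothetical):

  -- One call in, a set of matches out.
  CREATE FUNCTION top_matches(target float8[], max_rows int)
  RETURNS TABLE (doc_id integer, dist float8)
  AS '$libdir/doc_match', 'top_matches'  -- hypothetical module name
  LANGUAGE C STRICT;

  SELECT * FROM top_matches($1, 100);

On the C side that would mean using the set-returning-function
machinery (SRF_FIRSTCALL_INIT and friends, or a tuplestore), so the
per-call overhead is paid once instead of 2 million times.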

Sent from my comfortable recliner

On 30/04/2011, at 18:28, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:

> Joel Reymont <joelr1(at)gmail(dot)com> wrote:
>
>> We have 2 million documents now and linking an ad to all of them
>> takes 5 minutes on my top-of-the-line SSD MacBook Pro.
>
> How long does it take to run just the SELECT part of the INSERT by
> itself?
>
> -Kevin
