Re: multiple table scan performance

From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: multiple table scan performance
Date: 2011-03-30 00:12:24
Message-ID: AANLkTimHL7H=aLnQftKeFLudpeY9C5qjfNV__XSrcxLH@mail.gmail.com
Lists: pgsql-performance

On Tue, Mar 29, 2011 at 5:05 PM, Marti Raudsepp <marti(at)juffo(dot)org> wrote:

> On Wed, Mar 30, 2011 at 01:16, Samuel Gendler <sgendler(at)ideasculptor(dot)com>
> wrote:
>
> You can trick Postgres (8.3.x and newer) into doing it in parallel
> anyway: open 3 separate database connections and issue each of these
> 'INSERT INTO ... SELECT' parts separately. This way all the queries
> should execute in about 1/3 the time, compared to running them in one
> session or with UNION ALL.
>
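For anyone following along, the trick is to issue each INSERT on its own
connection so the scans overlap (presumably via the synchronized sequential
scans added in 8.3) instead of running serially in one session. An untested
sketch with psycopg2 follows; the connection string and table names are
hypothetical, since the real schema isn't shown in this thread:

    import threading
    import psycopg2

    DSN = "dbname=mydb"  # hypothetical connection string

    # Hypothetical stand-ins for the three INSERT INTO ... SELECT parts.
    STATEMENTS = [
        "INSERT INTO part_a SELECT * FROM big_table WHERE id % 3 = 0",
        "INSERT INTO part_b SELECT * FROM big_table WHERE id % 3 = 1",
        "INSERT INTO part_c SELECT * FROM big_table WHERE id % 3 = 2",
    ]

    def run(sql):
        # One connection per statement, so the three table scans can
        # proceed concurrently rather than back to back.
        conn = psycopg2.connect(DSN)
        try:
            with conn, conn.cursor() as cur:
                cur.execute(sql)
        finally:
            conn.close()

    threads = [threading.Thread(target=run, args=(s,)) for s in STATEMENTS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()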

That's a good idea, but it forces a lot of infrastructural change on me. I'm
inserting into a temp table, then deleting everything from another table
before copying over. I could insert into an ordinary table instead, but then
I'd have to make sure everything is properly cleaned up afterward. Since
nothing is actually blocked waiting for the queries to return, I think I'll
just let them churn for now. It won't make much difference in production,
where the whole table will fit easily into cache; I just wanted things to be
faster in my dev environment.
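For reference, the temp-table-and-swap workflow I'm describing looks roughly
like this in a single session (again an untested sketch; the DSN and table
names are hypothetical):

    import psycopg2

    conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
    try:
        with conn, conn.cursor() as cur:
            # Stage the rows in a session-local temp table.
            cur.execute("CREATE TEMP TABLE staging AS "
                        "SELECT * FROM source_table")
            # Replace the target's contents; both statements run in the
            # same transaction, so readers never see the table empty.
            cur.execute("DELETE FROM target_table")
            cur.execute("INSERT INTO target_table SELECT * FROM staging")
        # 'with conn' commits here on success, rolls back on error.
    finally:
        conn.close()

Temp tables being session-local is exactly why splitting the work across
several connections would force me onto an ordinary table.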

>
> Regards,
> Marti
>
