
Re: multiple table scan performance

From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Marti Raudsepp <marti(at)juffo(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: multiple table scan performance
Date: 2011-03-30 00:12:24
Lists: pgsql-performance
On Tue, Mar 29, 2011 at 5:05 PM, Marti Raudsepp <marti(at)juffo(dot)org> wrote:

> On Wed, Mar 30, 2011 at 01:16, Samuel Gendler <sgendler(at)ideasculptor(dot)com>
> wrote:
> You can trick Postgres (8.3.x and newer) into doing it in parallel
> anyway: open 3 separate database connections and issue each of these
> 'INSERT INTO ... SELECT' parts separately.  This way all the queries
> should execute in about 1/3 the time, compared to running them in one
> session or with UNION ALL.
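Marti's trick above amounts to fanning the three INSERT INTO ... SELECT statements out over separate backends. A minimal Python sketch of that pattern, assuming a caller-supplied execute() callable that opens its own database connection per statement (e.g. via psycopg2.connect); the table names and WHERE clauses are hypothetical placeholders:

```python
# Sketch of the "three separate connections" approach: each statement
# runs via its own execute() call on its own thread, so each one can
# get its own backend and the scans proceed in parallel.
from concurrent.futures import ThreadPoolExecutor

# Hypothetical statements -- substitute the real INSERT ... SELECT parts.
STATEMENTS = [
    "INSERT INTO staging SELECT * FROM part_a WHERE ...",
    "INSERT INTO staging SELECT * FROM part_b WHERE ...",
    "INSERT INTO staging SELECT * FROM part_c WHERE ...",
]

def run_parallel(statements, execute):
    """Run each SQL statement concurrently via execute().

    `execute` should open (and commit/close) its own connection, so
    that every statement lands on a separate server backend rather
    than being serialized through one session.
    """
    with ThreadPoolExecutor(max_workers=len(statements)) as pool:
        # map() preserves input order in its results.
        return list(pool.map(execute, statements))
```

Whether the wall-clock time actually drops to roughly a third depends on the server having spare I/O and CPU for the extra backends.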

That's a good idea, but forces a lot of infrastructural change on me.  I'm
inserting into a temp table, then deleting everything from another table
before copying over.  I could insert into an ordinary table, but then I've
got to deal with ensuring that everything is properly cleaned up, etc.
Since nothing is actually blocked waiting for the queries to return, I
think I'll just let them churn for now. It won't make much difference in
production, where the whole table will fit easily into cache.  I just wanted
things to be faster in my dev environment.

> Regards,
> Marti
