Re: Performace Optimization for Dummies

From: "Carlo Stonebanks" <stonec(dot)register(at)sympatico(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performace Optimization for Dummies
Date: 2006-09-29 04:37:37
Message-ID: efi7u5$265b$
Lists: pgsql-performance
> imo, the key to high performance big data movements in postgresql is
> mastering sql and pl/pgsql, especially the latter.  once you get good
> at it, your net time of copy+plpgsql is going to be less than
> insert+tcl.

If this implies bulk inserts, I'm afraid I have to consider something else. 
Any data that has been imported and deduplicated has to be placed back into 
the database so that it is available for the next imported row (there are 
currently 16 tables affected, and more to come). If I were to cache all 
inserts into a separate resource, then I would have to search 32 tables - 
the local pending resources, as well as the data already in the system. I am 
not even mentioning that imports do not just insert rows; they can also 
update rows, adding their own complexity. 
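[For context, the copy+plpgsql approach quoted above would mean bulk-loading raw data with COPY into a staging table, then deduplicating inside the database with a PL/pgSQL loop, so every merged row is immediately visible to the next one - no separate pending resource to search. A minimal sketch of that pattern, with hypothetical staging_contacts/contacts tables and a source_key match column (an update-then-insert loop, since INSERT ... ON CONFLICT did not exist in PostgreSQL of this era):

CREATE OR REPLACE FUNCTION merge_staged_rows() RETURNS integer AS $$
DECLARE
    r       staging_contacts%ROWTYPE;
    merged  integer := 0;
BEGIN
    FOR r IN SELECT * FROM staging_contacts LOOP
        -- Try to update an existing row first (deduplication)
        UPDATE contacts
           SET name = r.name, updated_at = now()
         WHERE source_key = r.source_key;
        -- No match: it is genuinely new, so insert it
        IF NOT FOUND THEN
            INSERT INTO contacts (source_key, name, updated_at)
            VALUES (r.source_key, r.name, now());
        END IF;
        merged := merged + 1;
    END LOOP;
    RETURN merged;
END;
$$ LANGUAGE plpgsql;

Because the merge runs inside the database, each UPDATE/INSERT is visible to subsequent iterations within the same transaction, which is exactly the "available for the next imported row" requirement described above.]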
