
Re: Massive Inserts Strategies

From: "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>
To: ashah <ashah(at)opuspro(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Massive Inserts Strategies
Date: 2006-03-28 17:38:27
Message-ID: 20060328173826.GP75181@pervasive.com
Lists: pgsql-performance
Is there some other unique key you can test on?

Take a look at http://lnk.nu/cvs.distributed.net/8qt.sql lines 169-216
for an example. In that case we use a different method for assigning
IDs than you probably will, but the idea is the same.
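
A sketch of what that looks like with the columns described below, assuming col1 (or some combination of col1..col3) is unique in original_table; col0 is left out of the column list so the database assigns it:

```sql
-- Insert only rows whose natural key is not already present;
-- col0 is omitted so the database generates it as usual.
INSERT INTO original_table (col1, col2, col3)
SELECT t.col1, t.col2, t.col3
FROM temp_table t
WHERE NOT EXISTS (
    SELECT 1 FROM original_table o
    WHERE o.col1 = t.col1   -- test the natural key, not col0
);
```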

On Tue, Mar 28, 2006 at 10:59:49AM -0500, ashah wrote:
> I tried this solution, but ran into following problem.
> 
> The temp_table has columns (col1, col2, col3).
> 
> The original_table has columns (col0, col1, col2, col3)
> 
> Now the extra col0 on the original_table is a unique ID generated by the 
> database.
> 
> How can I make your suggestion work in that case?
> 
> On Wednesday 22 March 2006 11:09 am, Jim C. Nasby wrote:
> > Load the files into a temp table and go from there...
> >
> > COPY ... FROM file;
> > UPDATE existing_table SET ... WHERE ...;
> > INSERT INTO existing_table SELECT * FROM temp_table WHERE NOT EXISTS(
> > SELECT * FROM existing_table WHERE ...)
> >
> > On Wed, Mar 22, 2006 at 10:32:10AM -0500, ashah wrote:
> > > I have a database with foreign keys enabled on the schema. I receive
> > > different files, some of them are huge. And I need to load these files in
> > > the database every night. There are several scenarios that I want to
> > > design an optimal solution for -
> > >
> > > 1. One of the files has around 80K records and I have to delete everything
> > > from the table and load this file. The provider never provides a "delta
> > > file" so I don't have a way to identify which records are already present
> > > and which are new. If I don't delete everything and insert fresh, I have
> > > to make around 80K selects to decide if the records exist or not. Now
> > > there are a lot of tables that have foreign keys linked with this table so
> > > unless I disable the foreign keys, I cannot really delete anything from
> > > this table. What would be a good practice here?
> > >
> > > 2. Another file that I receive has around 150K records that I need to
> > > load in the database. Now one of the fields is logically a "foreign key"
> > > to another table, and it is linked to the parent table via a database
> > > generated unique ID instead of the actual value. But the file comes with
> > > the actual value. So once again, I have to either drop the foreign key,
> > > or make 150K selects to determine the serial ID so that the foreign key
> > > is satisfied. What would be a good strategy in this scenario?
> > >
> > > Please pardon my inexperience with databases!
> > >
> > > Thanks,
> > > Amit
> > >
> 
> 
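
For the second scenario quoted above, the serial IDs can be resolved with one set-based join instead of 150K individual SELECTs. A sketch, assuming the file is staged first; staging_table, parent_table, natural_value, and payload are illustrative names, not from the thread:

```sql
-- Stage the incoming file first.
COPY staging_table FROM '/path/to/file' WITH CSV;

-- Translate the natural value into the parent's generated ID
-- with a single join; rows with no matching parent are skipped,
-- so the foreign key is always satisfied.
INSERT INTO child_table (parent_id, payload)
SELECT p.id, s.payload
FROM staging_table s
JOIN parent_table p ON p.natural_value = s.natural_value;
```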

-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby(at)pervasive(dot)com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461
