Re: DBD::Pg timings

From: "Jason E(dot) Stewart" <jason(at)openinformatics(dot)com>
To: "Pete Leonard" <pete(at)hero(dot)com>
Cc: dbi-dev(at)perl(dot)org, pgsql-interfaces(at)postgresql(dot)org
Subject: Re: DBD::Pg timings
Date: 2002-11-21 17:06:20
Message-ID: 87u1iahmyb.fsf@openinformatics.com
Lists: pgsql-interfaces

Hey Pete,

Ah, dammit, thanks for the advice.

This is no longer a DBI question, so I apologize for posting it back
to the list (but I thought it would be nice to get this archived for
Google's sake).

Isn't there a nicer way to turn off indexing during a big insert other
than dropping all the indexes?
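
For the archive, here's a rough sketch of the drop/re-create dance in
DBI terms (the database, table, column, and index names below are all
made up):

  use DBI;

  # connect; dbname/user/password here are placeholders
  my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'me', 'secret',
                         { RaiseError => 1, AutoCommit => 1 });

  # drop the index so the bulk load doesn't pay for index maintenance on
  # every single row
  $dbh->do('DROP INDEX my_table_foo_idx');

  # ... run the big insert loop here ...

  # rebuild the index once, after all the rows are loaded
  $dbh->do('CREATE INDEX my_table_foo_idx ON my_table (foo)');

  $dbh->disconnect;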

Cheers,
jas.

"Pete Leonard" <pete(at)hero(dot)com> writes:

> Remove all indices on the tables you're inserting into, and add them once
> you're done.
>
> I was in the same boat, inserting 1.5M records into a simple table - it
> was crawling along at 10 rows/sec before I did this, 100 rows/sec
> afterwards. And re-creating the indices only takes a couple of minutes
> after the fact.
>
> On 21 Nov 2002, Jason E. Stewart wrote:
>
> > Hey all,
> >
> > I'd be grateful if someone could give me a reality check. I have 250k
> > rows I want to insert into Postgres using a simple Perl script and
> > it's taking *forever*. According to my simple timings, it seems to be
> > only capable of handling about 5,000 rows/hr!!! This seems
> > ridiculous. This is running on a pretty speedy dual processor P4, and
> > it doesn't seem to have any trouble at all with big selects.
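
For anyone who finds this thread in the archives: the insert loop in
question looks roughly like the sketch below (the table, columns, and
input file are invented). One related point: with AutoCommit left on,
DBI commits each execute() on its own, so switching AutoCommit off and
committing in batches is another easy win on top of dropping the
indexes.

  use DBI;

  my $dbh = DBI->connect('dbi:Pg:dbname=mydb', 'me', 'secret',
                         { RaiseError => 1, AutoCommit => 0 });

  # prepare the INSERT once, with placeholders, instead of re-parsing it per row
  my $sth = $dbh->prepare('INSERT INTO my_table (foo, bar) VALUES (?, ?)');

  open my $fh, '<', 'data.tab' or die "data.tab: $!";
  my $count = 0;
  while (my $line = <$fh>) {
      chomp $line;
      my ($foo, $bar) = split /\t/, $line;
      $sth->execute($foo, $bar);
      # commit in batches rather than once per row (or once per 250k rows)
      $dbh->commit unless ++$count % 5000;
  }
  $dbh->commit;
  $dbh->disconnect;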
