
Re: poor performance of loading data

From: bangh <banghe(at)baileylink(dot)net>
To: "Zhang, Anna" <azhang(at)verisign(dot)com>, pgsql-admin(at)postgresql(dot)org
Subject: Re: poor performance of loading data
Date: 2001-12-19 22:21:06
Lists: pgsql-admin
Creating an index means sorting, and sorting is inherently a time-consuming job. In general there are only relatively good sorting algorithms to choose from; in some cases one may perform quite well, but certainly not in every case. If you look at the typical sorting algorithms, the same algorithm behaves totally differently under different initial conditions. If your keys are numbers, they are comparatively fast to sort.

In your case, since all your columns are text, you may have no choice but to go without a purely numeric index.

Do you really have 4 separate indexes on this table, or just one index built from 4 fields?

If you really have 4 indexes, that is too many. They not only slow down every insert of a new record, they also take up space.

In your case, I still suggest:

1. import from the file without indexes
2. create the indexes after the import
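The two steps above can be sketched in SQL. This is only a minimal illustration, not the poster's exact commands; the table name, column names, index names, and file path are all hypothetical placeholders to be replaced with the real schema:

```sql
-- Assumed schema: a 9-column text table with 4 single-column indexes.
-- Drop the indexes first so COPY only has to write heap pages.
DROP INDEX idx_big_table_a;
DROP INDEX idx_big_table_b;
DROP INDEX idx_big_table_c;
DROP INDEX idx_big_table_d;

-- COPY runs as a single transaction: one commit for the whole file,
-- not one commit per row.
COPY big_table FROM '/tmp/data.txt';

-- Rebuild each index in one pass over the loaded data; this is
-- normally much faster than maintaining 4 indexes row by row.
CREATE INDEX idx_big_table_a ON big_table (col_a);
CREATE INDEX idx_big_table_b ON big_table (col_b);
CREATE INDEX idx_big_table_c ON big_table (col_c);
CREATE INDEX idx_big_table_d ON big_table (col_d);
```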


"Zhang, Anna" wrote:

> I am loading data into one table. This table has 9 columns, all text,
> and 4 indexes. If all indexes are dropped before loading, I am sure loading
> will be sped up, but recreating those indexes is still time-consuming, so
> overall it is still a problem.
> Thanks for the information.
> Anna Zhang
> -----Original Message-----
> From: bangh [mailto:banghe(at)baileylink(dot)net]
> Sent: Wednesday, December 19, 2001 4:33 PM
> To: Zhang, Anna
> Subject: Re: [ADMIN] poor performance of loading data
> Just wondering: into how many tables does the data from your text file go?
> If just one table (the simple case), how is your table defined?
> I think the indexes may be the key factor affecting the speed.
> Have you defined indexes on that table? As some folks said, a numeric
> index can perform better than one on character columns.
> Banghe
> "Zhang, Anna" wrote:
> > I just installed Postgres 7.1.3 on my Red Hat 7.2 Linux box. We are doing
> > research to see how Postgres performs. I used the COPY utility to import
> > data from a text file containing 32 million rows; 26 hours have passed and
> > it is still running. My question is: how does Postgres handle such a data
> > load? Does it commit every row, or is the commit point adjustable? How?
> > Does Postgres provide a direct load to disk files like Oracle? Are there
> > other ways to speed it up? If the loading performance can't be improved
> > significantly, we will have to go back to Oracle. Can anybody help? Thanks!
> >
> > Anna Zhang
> >

