
Re: How to improve insert speed with index on text column

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Saurabh <saurabh(dot)b85(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: How to improve insert speed with index on text column
Date: 2012-01-30 18:20:28
Message-ID: CAGTBQpapE8S2SzaxfkqiBR3KxpNCUanqzO9x3aHDeUExQGiD5A@mail.gmail.com
Lists: pgsql-performance
On Mon, Jan 30, 2012 at 2:46 PM, Saurabh <saurabh(dot)b85(at)gmail(dot)com> wrote:
> max_connections = 100
> shared_buffers = 32MB
> wal_buffers = 1024KB
> checkpoint_segments = 3

That's the default config, isn't it?

You'd do well to try and optimize it for your system. The defaults are
really, really conservative.
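As a rough illustration, a starting point for a write-heavy workload might look like the following. These are hypothetical values for a machine with a few GB of RAM, using parameter names from the PostgreSQL 9.x era (checkpoint_segments was later replaced); tune them to your actual hardware.

```
# postgresql.conf -- a hedged sketch, not a recommendation for any
# specific system (illustrative values, ~4GB RAM assumed)
shared_buffers = 1GB            # default 32MB is far too small for real work
wal_buffers = 16MB              # larger WAL buffers help write-heavy loads
checkpoint_segments = 32        # default 3 forces very frequent checkpoints
maintenance_work_mem = 256MB    # speeds up CREATE INDEX after a bulk load
```

After editing, a restart is needed for shared_buffers to take effect.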

You should also consider normalizing. I'm assuming company_name could
be company_id? (i.e., each company will have many rows.) Otherwise I
cannot see how you'd expect to be *constantly* inserting millions of
rows. If it's a one-time initialization thing, just drop the indices
and recreate them afterwards, as has already been suggested. If you
create new records all the time, I'd bet you'll also have many rows
with the same company_name, so normalizing would be a clear win.
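A sketch of that normalization, with illustrative table and column names (the original schema wasn't posted, so these are assumptions):

```sql
-- Move the repeated text into its own table; index the narrow integer
-- foreign key instead of the wide text column. Maintaining a btree on
-- an integer during inserts is much cheaper than on text.
CREATE TABLE company (
    company_id   serial PRIMARY KEY,
    company_name text   NOT NULL UNIQUE
);

CREATE TABLE record (
    record_id  serial PRIMARY KEY,
    company_id integer NOT NULL REFERENCES company (company_id)
    -- ... other payload columns ...
);

CREATE INDEX record_company_idx ON record (company_id);

-- For a one-time bulk load, drop the index first and rebuild it after:
DROP INDEX record_company_idx;
-- ... COPY / INSERT the bulk data here ...
CREATE INDEX record_company_idx ON record (company_id);
```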
