Re: How to improve insert speed with index on text column

From: Claudio Freire <klaussfreire(at)gmail(dot)com>
To: Saurabh <saurabh(dot)b85(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: How to improve insert speed with index on text column
Date: 2012-01-30 18:20:28
Message-ID: CAGTBQpapE8S2SzaxfkqiBR3KxpNCUanqzO9x3aHDeUExQGiD5A@mail.gmail.com
Lists: pgsql-performance

On Mon, Jan 30, 2012 at 2:46 PM, Saurabh <saurabh(dot)b85(at)gmail(dot)com> wrote:
> max_connections = 100
> shared_buffers = 32MB
> wal_buffers = 1024KB
> checkpoint_segments = 3

That's a default config, isn't it?

You'd do well to try and optimize it for your system. The defaults are
really, really conservative.
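As a rough illustration only (the values below are assumptions for a dedicated machine with a few GB of RAM circa PostgreSQL 9.1, not recommendations to copy verbatim), a less conservative postgresql.conf might look like:

```
shared_buffers = 2GB           # often sized around 25% of RAM as a starting point
wal_buffers = 16MB             # larger WAL buffers help write-heavy workloads
checkpoint_segments = 32       # spreads checkpoints out during bulk inserts
```

Each of these needs to be sized against the actual machine and workload; the point is simply that the shipped defaults (32MB shared_buffers, 3 checkpoint segments) are far below what a write-heavy server can use.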

You should also consider normalizing. I'm assuming company_name could
be replaced by a company_id? (i.e.: each company will have many rows).
Otherwise I cannot see how you'd expect to be *constantly* inserting
millions of rows. If it's a one-time initialization thing, just drop
the indices and recreate them afterwards, as others have suggested. If
you create new records all the time, I'd bet you'll also have many
rows with the same company_name, so normalizing would be a clear win.
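To illustrate the normalization idea, here is a minimal sketch (using SQLite from Python for portability; the same schema shape applies in PostgreSQL, and all table/column names here are assumptions, not the poster's actual schema). Each distinct company name is stored once, and the data rows carry a small integer id instead of repeating the text value, which keeps the index on the hot table narrow:

```python
import sqlite3

# In-memory database for the sketch; in PostgreSQL the DDL is essentially the same.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# One row per distinct company name.
cur.execute(
    "CREATE TABLE company (company_id INTEGER PRIMARY KEY, name TEXT UNIQUE)"
)
# Data rows reference the company by id instead of storing the text.
cur.execute(
    """CREATE TABLE record (
        id INTEGER PRIMARY KEY,
        company_id INTEGER REFERENCES company(company_id),
        payload TEXT
    )"""
)

def company_id(name):
    # Insert the name only if unseen, then return its id; repeated names
    # reuse the same small integer rather than duplicating the text.
    cur.execute("INSERT OR IGNORE INTO company(name) VALUES (?)", (name,))
    cur.execute("SELECT company_id FROM company WHERE name = ?", (name,))
    return cur.fetchone()[0]

for name, payload in [("Acme", "a"), ("Globex", "b"), ("Acme", "c")]:
    cur.execute(
        "INSERT INTO record(company_id, payload) VALUES (?, ?)",
        (company_id(name), payload),
    )

cur.execute("SELECT COUNT(*) FROM company")
print(cur.fetchone()[0])  # 2 distinct companies backing 3 data rows
```

With this layout the index being maintained on every insert is over an integer column, not a text column, which is exactly where the win comes from when many rows share the same company name.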
