
Insert rate drops as table grows

From: jao(at)geophile(dot)com
To: pgsql-general(at)postgresql(dot)org
Subject: Insert rate drops as table grows
Date: 2006-01-30 22:47:42
Message-ID: 20060130174742.svh9mhkqowssoswk@geophile.com
Lists: pgsql-general
I have this table and index:

    create table t(id int, hash int);
    create index idx_t on t(hash);

The value of the hash column, which is indexed, is a pseudo-random
number. I load the table and measure the time per insert.
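
For reference, a load along these lines reproduces what I'm doing (a
sketch only; my real test inserts one row at a time from application
code and times each insert, rather than running one bulk statement):

    -- Illustrative bulk version of the load: 1,000,000 rows with
    -- pseudo-random hash values.
    insert into t (id, hash)
    select i, (random() * 2147483647)::int
    from generate_series(1, 1000000) as i;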

What I've observed is that inserts slow down as the table grows toward
1,000,000 records. Watching the pg_stat* views, I see that data page
reads per unit time stay steady, but index page reads grow quickly
(shared_buffers was set to 2000).
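
In case it's useful, this is the sort of query I'm using to watch heap
vs. index block reads (against the standard statistics views; the
relation name 't' is from the example above):

    select relname, heap_blks_read, heap_blks_hit,
           idx_blks_read, idx_blks_hit
    from pg_statio_user_tables
    where relname = 't';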

I'm guessing that this is because inserts append to the heap's data
pages more or less sequentially, while each insert also does a random
probe into the btree to maintain the index.
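
Some back-of-envelope arithmetic (my own rough numbers, not measured):
shared_buffers = 2000 is 2000 x 8 kB = ~16 MB of cache. A btree entry
for a 4-byte key is roughly 20 bytes with per-tuple overhead, so
1,000,000 entries come to something like 20 MB of leaf pages alone.
That is already larger than the cache, so the random index probes start
missing and turning into actual reads.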

This is a test program, but I'm wondering about my application, which
will need row counts well beyond 1,000,000.

Questions:

- Am I reasoning about the data and index costs correctly?

- In order to keep the insert rate from dropping too much, do I simply
need to increase shared_buffers to accommodate more of the index?
(See the configuration sketch after these questions.)

- Assuming I'm willing to buy enough RAM and set shmmax high enough,
are there practical limits on how large shared_buffers can usefully
get, which would cap how far I can pursue this strategy?
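
The configuration change behind the second question would be something
like the following (value illustrative only; on the release I'm running,
shared_buffers is a count of 8 kB buffers, and changing it requires a
server restart and a large enough kernel shmmax):

    # postgresql.conf
    shared_buffers = 50000        # 50000 x 8 kB buffers = ~400 MB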


Jack Orenstein


