Re: High cpu usage after many inserts

From: Jordan Tomkinson <jordan(at)moodle(dot)com>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: High cpu usage after many inserts
Date: 2009-02-24 06:26:29
Message-ID: 6de2f13b0902232226m775d38dbq2e4206e72f20b8d4@mail.gmail.com
Lists: pgsql-general

On Tue, Feb 24, 2009 at 11:42 AM, Greg Smith <gsmith(at)gregsmith(dot)com> wrote:

> On Mon, 23 Feb 2009, Scott Marlowe wrote:
>
>> well that's pretty normal as the indexes grow large enough to not fit in
>> cache, then not fit in memory, etc...
>>
>
> Right, the useful thing to do in this case is to take a look at how big all
> the relations (tables, indexes) involved are at each of the steps in the
> process. The script at http://wiki.postgresql.org/wiki/Disk_Usage will
> show you that. That will give some feedback on whether the vacuum/reindex
> methodology is really doing what you expect, and it will also let you
> compare the size of the table/index with how much RAM is in the system.
>
> Have you done any tuning of the postgresql.conf file? If you haven't
> increased shared_buffers substantially, you could be seeing buffer cache
> churn as the CPU spends all its time shuffling buffers between PostgreSQL
> and the OS once the working set involved exceeds around 32MB.
>
> Shouldn't someone have ranted about RAID-5 by this point in the thread?
>
> --
> * Greg Smith gsmith(at)gregsmith(dot)com http://www.gregsmith.com Baltimore, MD
>
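
For anyone following along, a trimmed-down version of that size check looks
roughly like this (the wiki script is more thorough):

SELECT n.nspname, c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname NOT IN ('pg_catalog', 'information_schema')
ORDER BY pg_total_relation_size(c.oid) DESC
LIMIT 10;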

Right, I have done some more testing and I think it's pretty conclusive.

1. Start with a known-good copy of the database (some 3GB in size).
2. Run the JMeter tests until ~7000 new rows have been inserted, spread
evenly across 3 tables. At this point performance goes to hell.
3. Delete the ~7000 rows from the db without re-indexing, (manually)
analyzing, or anything of the sort (dead-tuple counts can be checked with
the query after this list).
4. Performance instantly returns to what it was before the tests began
(optimum).
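
To quantify what step 3 leaves behind, the per-table dead-row counts can be
checked with something like this (using the standard pg_stat_user_tables
view in 8.3):

SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;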

So I'm thinking, as Scott said, it could be the buffer/cache filling up?
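
If it is cache churn, the pg_buffercache contrib module (if it's installed)
can show what is actually sitting in shared_buffers; a rough sketch:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = c.relfilenode
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
GROUP BY c.relname
ORDER BY 2 DESC
LIMIT 10;
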
Here is my postgresql.conf; perhaps someone can offer a few pointers.
The hardware is a quad Xeon 2.0GHz with 8GB RAM and 15K RPM SAS drives in
RAID 5 (I know, RAID 5, don't tell me).

max_connections = 400
shared_buffers = 2048MB
temp_buffers = 8MB
max_prepared_transactions = 10
work_mem = 8MB
maintenance_work_mem = 128MB
max_stack_depth = 4MB
vacuum_cost_delay = 0
vacuum_cost_page_hit = 1
vacuum_cost_page_miss = 10
vacuum_cost_page_dirty = 20
vacuum_cost_limit = 200
bgwriter_delay = 200ms
bgwriter_lru_maxpages = 100
bgwriter_lru_multiplier = 2.0
fsync = on
synchronous_commit = on
wal_sync_method = fsync
full_page_writes = on
wal_buffers = 128kB
wal_writer_delay = 200ms
commit_delay = 0
commit_siblings = 5
log_destination = 'stderr'
logging_collector = on
log_directory = 'pg_log'
log_truncate_on_rotation = on
log_rotation_age = 1d
log_rotation_size = 0
track_counts = on
log_parser_stats = off
log_planner_stats = off
log_executor_stats = off
log_statement_stats = off
autovacuum = on
log_autovacuum_min_duration = 0
autovacuum_max_workers = 3
autovacuum_naptime = 1min
autovacuum_vacuum_threshold = 50
autovacuum_analyze_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
autovacuum_analyze_scale_factor = 0.1
autovacuum_freeze_max_age = 200000000
autovacuum_vacuum_cost_delay = 20
autovacuum_vacuum_cost_limit = -1
datestyle = 'iso, mdy'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8'
default_text_search_config = 'pg_catalog.english'
escape_string_warning = off
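
As a back-of-the-envelope check on those numbers (worst case, assuming every
connection sorts at once):

  400 connections * 8MB work_mem = 3200MB
  + shared_buffers               = 2048MB
  = potential peak               ~ 5.1GB of the 8GB RAM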
