Re: postgres slowdown question

From: John A Meinel <john(at)arbash-meinel(dot)com>
To: Shachindra Agarwal <sagarwal(at)metallect(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: postgres slowdown question
Date: 2005-04-20 20:48:16
Message-ID: 4266C010.5090003@arbash-meinel.com
Lists: pgsql-performance

Shachindra Agarwal wrote:

> Dear Postgres Masters:
>
> We are using postgres 7.4 in our Java application on RedHat Linux. The
> Java application connects to Postgres via JDBC. The application goes
> through a ‘discovery’ phase, during which it adds a large amount of
> data into postgres. Typically, we are adding about a million records
> across various tables. The application also issues multiple queries to
> the database at the same time. We do not delete any records during the
> discovery phase. Both the Java application and postgres are installed
> on the same machine.
>
> At the beginning, the application is able to add on the order of 100
> records per minute. Gradually (after several hours), it slows down to
> less than 10 records per minute. At this point, postgres processes take
> between 80-99% of the CPU. When we reindex the database, the speed bumps
> up to about 30 records per minute, and the postgres server takes between
> 50-70% of the CPU.
>
> We have the following in the postgresql.conf :
>
> max_fsm_pages = 500000
>
> fsync = false
>
> We certainly cannot live with this kind of performance. I believe
> postgres should be able to handle much larger datasets, but I cannot
> put my finger on what we are doing wrong. Can somebody please point
> me in the right direction?
>
> With kind regards,
>
> -- Shachindra Agarwal.
>
A few questions first. How are you loading the data? Are you using
INSERT or COPY? Are you using a transaction, or are you autocommitting
each row?

You really need a transaction, and preferably COPY. Both can help
performance a lot. (In some tests, single-row inserts can be 10-100x
slower than loading in bulk.)
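
For example, here is a minimal sketch of what I mean, in JDBC terms. The
table name, columns, and connection details are made up -- substitute
your own:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class BulkLoad {
        public static void main(String[] args) throws Exception {
            Class.forName("org.postgresql.Driver");  // load the JDBC driver
            Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/mydb", "user", "password");
            try {
                conn.setAutoCommit(false);  // one transaction, not one per row
                PreparedStatement ps = conn.prepareStatement(
                        "INSERT INTO records (id, name) VALUES (?, ?)");
                for (int i = 0; i < 1000000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "record-" + i);
                    ps.addBatch();              // queue the row client-side
                    if (i % 1000 == 999) {
                        ps.executeBatch();      // one round trip per 1000 rows
                        conn.commit();          // keep transactions a sane size
                    }
                }
                ps.executeBatch();
                conn.commit();
                ps.close();
            } finally {
                conn.close();
            }
        }
    }

Even just turning autocommit off and committing every thousand rows or
so can make a big difference on its own.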

It also sounds like you may have a foreign key issue: as the tables fill
up, the foreign key reference checks slow you down.

Are you running ANALYZE as you go? Quite often, when a table has fewer
than ~1000 rows, a sequential scan is faster than using an index, and if
you don't inform postgres that the tables have grown, it may keep using
the old seqscan plan.
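
If the discovery phase runs for hours, you can issue ANALYZE from the
application itself as the tables grow. A rough sketch (the class and
table names are mine):

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class Maintenance {
        // Call this every 50,000 rows or so during the discovery phase,
        // so the planner sees current row counts and starts choosing
        // index scans once the tables are no longer tiny.
        public static void analyze(Connection conn) throws SQLException {
            Statement st = conn.createStatement();
            st.execute("ANALYZE");  // database-wide; "ANALYZE records" does one table
            st.close();
        }
    }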

There are other possibilities, but it would help to know your table
layout, and to see an EXPLAIN ANALYZE of one of the inserts that is
going slow.
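
You can capture that from JDBC as well, since EXPLAIN returns its plan
as an ordinary result set. A sketch, again with a made-up INSERT -- and
note that EXPLAIN ANALYZE actually executes the statement, so roll it
back if you don't want the row to stick:

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    public class PlanDump {
        // Print the plan, with real timings, for one of the slow inserts.
        public static void explainInsert(Connection conn) throws SQLException {
            conn.setAutoCommit(false);
            Statement st = conn.createStatement();
            ResultSet rs = st.executeQuery(
                    "EXPLAIN ANALYZE INSERT INTO records (id, name) VALUES (1, 'x')");
            while (rs.next()) {
                System.out.println(rs.getString(1));  // one plan line per row
            }
            rs.close();
            st.close();
            conn.rollback();  // undo the INSERT that EXPLAIN ANALYZE performed
        }
    }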

John
=:->

PS> I don't know if JDBC supports COPY, but it certainly should support
transactions.
