PG generally comes with very basic default settings; one place to *start* may be this page.
Then, obviously, you will need to work through your query plans and iterate.
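For the aggregate queries described below, EXPLAIN ANALYZE is the usual way to see where the time goes. A minimal sketch, using hypothetical table and column names that merely mirror the query shape from the original post:

```sql
-- Show the actual plan and timings for an aggregate query of the form
-- described below. The names (stats, col1..col3, bucket) are illustrative
-- assumptions, not from the original post.
EXPLAIN ANALYZE
SELECT sum(col1), sum(col2), count(col3)
FROM stats
WHERE bucket >= 100
GROUP BY bucket;

-- If the plan shows a sequential scan over the whole table, an index on the
-- WHERE / GROUP BY column may help, e.g.:
CREATE INDEX stats_bucket_idx ON stats (bucket);
```

Note that a bare "select count(*)" will always scan in PostgreSQL's MVCC design; the question is whether it is scanning far more pages than the live rows justify (see the vacuuming note below).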
Shadkam Islam wrote:
> Hi All,
> We have a table whose data we need to bucketize and show. This is
> a continuously growing table (archival is the way we trim it to size).
> We are facing two issues here:
> 1. When the table holds records in the range of 10K, it works fine
> for some time after starting the postgres server. But as time passes, the
> entire machine becomes slower and slower, to the extent that we need to
> restart. Task Manager does not show any process consuming an
> extraordinary amount of CPU / memory, yet after a restart of the postgres
> server things come back to normal. What may be going wrong here?
> 2. When the record count crosses 200K, the queries (even "select count(*) from
> _TABLE_") start taking minutes, and sometimes do not return at
> all. We were previously using MySQL, and at least this query used to work
> OK there. [Our queries are of the form "select sum(col1), sum(col2),
> count(col3) ... where .... group by ...".] Any suggestions ...
> Below are the tuning parameter changes that we made with the help from
> We are starting postgres with the options [-o "-B 4096"]; later we added
> "-S 1024" as well, without any visible improvement.
> Machine has 1GB RAM.
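The -B 4096 option sets shared_buffers to 4096 8kB pages, i.e. only 32MB on a 1GB machine. A sketch of postgresql.conf settings that are commonly raised from the defaults; the values below are illustrative assumptions for a 1GB box, not a prescription:

```
# postgresql.conf sketch for a machine with 1GB RAM (illustrative values)
shared_buffers = 32768          # 8kB pages -> 256MB; the default is far smaller
work_mem = 8192                 # kB per sort/hash, used by the GROUP BY queries
effective_cache_size = 65536    # 8kB pages -> 512MB; planner hint about OS cache
max_fsm_pages = 200000          # free space map; too small -> bloat on a busy table
autovacuum = on                 # keep dead rows reclaimed on the growing table
```

On a continuously growing, frequently updated table, regular VACUUM ANALYZE (or autovacuum) matters as much as the memory settings: dead rows accumulate, every scan (including "select count(*)") has to wade through them, and queries slow down over time, which matches the symptoms in both points above.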