In response to Laurent Manchon <lmanchon(at)univ-montp2(dot)fr>:
> I have a slow response of my PostgreSQL database 7.4 using this query below
> on a table with 800000 rows:
> select count(*) from tbl;
> PostgreSQL returns the result in 28 sec every time,
> although MS-SQL returns it in 0.02 sec every time.
> My server is a DELL PowerEdge 2600 with dual Xeon processors at 3.2 GHz
> and 3 GB of RAM.
While there's truth in everything that's been said by others, the query
should not take _that_ long. I just tried a count(*) on a table with
460,000 rows, and it took less than a second. count(*) in PostgreSQL
is unlikely to compare favorably with most other RDBMSes, for the reasons
others have stated, but counting 800,000 rows shouldn't take 28 seconds.
The standard question applies: have you vacuumed recently?
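If the table hasn't been vacuumed, dead row versions bloat it and a sequential scan has to read them all. As a sketch (using the table name from your query; autovacuum was not enabled by default in 7.4, so this has to be run manually or from cron):

```sql
-- Reclaim dead tuples and refresh planner statistics; VERBOSE reports
-- how many dead row versions were found, which hints at bloat.
VACUUM VERBOSE ANALYZE tbl;

-- If the table is badly bloated, VACUUM FULL rewrites it compactly.
-- It takes an exclusive lock, so run it during a quiet period.
VACUUM FULL tbl;
```

If VACUUM VERBOSE reports a huge number of removable row versions, that alone could explain the 28 seconds.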
> My PostgreSQL Conf is
> log_connections = yes
> syslog = 2
> effective_cache_size = 50000
> sort_mem = 10000
> max_connections = 200
> shared_buffers = 3000
> vacuum_mem = 32000
> wal_buffers = 8
> max_fsm_pages = 2000
> max_fsm_relations = 100
> Can you tell me, is there a way to enhance performance?
On our 4G machines, we use shared_buffers=240000 (which equates to about
2G). The only reason I don't set it higher is that FreeBSD has a limit on
shared memory of 2G.
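Your shared_buffers = 3000 is only about 23 MB, and max_fsm_pages = 2000 is far too small to track dead pages in a table that size. As an illustrative starting point for a 3 GB machine on 7.4 (these numbers are not tuned for your workload; shared_buffers and effective_cache_size are in 8 kB pages):

```
# postgresql.conf -- illustrative values for a 3 GB machine
shared_buffers = 30000          # ~234 MB (30000 x 8 kB pages)
effective_cache_size = 200000   # ~1.6 GB; how much the OS is likely caching
max_fsm_pages = 200000          # track far more dead pages than the default
max_fsm_relations = 1000
```

A restart is required for shared_buffers to take effect, and you may need to raise the kernel's shared memory limits (e.g. kernel.shmmax) to match.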
The caveat here is that I'm running a mix of 8.1 and 8.2. There have been
significant improvements in both the usage of shared memory and the
optimization of count(*) since 7.4, so the first suggestion I have is to
upgrade your installation.
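In the meantime, if an approximate row count is acceptable, the planner's estimate in pg_class is essentially free. A sketch (it is only as fresh as the last VACUUM or ANALYZE on the table):

```sql
-- Approximate row count from planner statistics; updated by VACUUM
-- and ANALYZE, so it can lag behind the true count.
SELECT reltuples::bigint AS approx_rows
FROM pg_class
WHERE relname = 'tbl';
```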
Collaborative Fusion Inc.
In response to: slow result at 2007-01-23 10:34:52 from Laurent Manchon