I am still fiddling around with my "big" database.
CPU: 1.6 GHz (cache: 256 KB)
single disc: 120 GB :-(
I have a query that joins two relatively large tables (10-15 million rows),
or rather parts of these tables (EXPLAIN ANALYZE: rows=46849).
Long story short:
the overall cost estimated in pages by EXPLAIN is:
the actual time from EXPLAIN ANALYZE is:
Most of it is consumed by a nested loop (surprise!).
This is the relevant output:
Sort Key: disease.disease_name, disease_occurrences.sentence_id
-> Nested Loop (cost=0.00..6922.38 rows=98 width=64) (actual
time=61.49..275047.46 rows=18910 loops=1)
-> Nested Loop (cost=0.00..6333.23 rows=98 width=28) (actual
time=61.42..274313.87 rows=18910 loops=1)
-> Nested Loop (cost=0.00..5894.04 rows=64 width=16) (actual
time=32.00..120617.26 rows=46849 loops=1)
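One thing that stands out in the plan above is that the planner's row estimates are far off (e.g. rows=98 estimated vs. rows=18910 actual), so the nested loop may simply be a bad plan choice based on bad estimates. A quick, session-local way to check (a sketch; substitute your actual query for the elided SELECT) is to discourage nested loops and compare plans:

```sql
-- Session-local experiment: does the planner find a faster join
-- strategy (e.g. a hash or merge join) when nested loops are
-- discouraged? This does not change anything permanently.
SET enable_nestloop = off;

EXPLAIN ANALYZE
SELECT ...;  -- the original query, left out here

-- restore the default for the rest of the session
SET enable_nestloop = on;
```

If the query gets substantially faster with enable_nestloop off, the problem is plan selection rather than hardware, and better statistics (or rewritten join conditions) are the place to look.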
I tried to tweak the conf settings, but I think I have already reached
quite good values for shared_buffers and sort_mem. The database has been
VACUUM FULL ANALYZEd, and the indexes seem fine.
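Since the plan shows large misestimates even after ANALYZE, it may help to raise the statistics sample size for the join columns and re-analyze. A minimal sketch, using disease_occurrences.sentence_id from the sort key above as an example (assuming it is one of the join columns; adjust the table and column names to your schema):

```sql
-- Sketch: collect a larger statistics sample for a join column,
-- then refresh the planner's statistics for that table.
ALTER TABLE disease_occurrences
    ALTER COLUMN sentence_id SET STATISTICS 100;

ANALYZE disease_occurrences;
```

With the default statistics target the planner can badly misjudge selectivity on skewed columns, which is consistent with the rows=98 vs. rows=18910 gap in the plan.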
Could one of you smart guys point me in a direction I might not have
considered? I know the hardware is minimal. Nevertheless, if you have
suggestions for exactly what to add to the hardware to speed the
database up (more RAM or more disks, or even a RAID), that would be a
good argument for my boss.
Thank you a lot
pgsql-performance by date