From: Dan Harris <fbsd(at)drivefaster(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: slow joining very large table to smaller ones
Date: 2005-07-15 15:21:31
Message-ID: EF082D2E-96A0-4C63-A8FC-6EF1D3152A04@drivefaster.net
Lists: pgsql-performance
On Jul 15, 2005, at 9:09 AM, Dan Harris wrote:
>
> On Jul 14, 2005, at 10:12 PM, John A Meinel wrote:
>
>>
>> My biggest question is why the planner thinks the Nested Loop
>> would be so expensive.
>> Have you tuned any of the parameters? It seems like something is
>> out of whack. (cpu_tuple_cost, random_page_cost, etc...)
>>
>>
>
> here's some of my postgresql.conf. Feel free to blast me if I did
> something idiotic here.
>
> shared_buffers = 50000
> effective_cache_size = 1348000
> random_page_cost = 3
> work_mem = 512000
> max_fsm_pages = 80000
> log_min_duration_statement = 60000
> fsync = true ( not sure if I'm daring enough to run without this )
> wal_buffers = 1000
> checkpoint_segments = 64
> checkpoint_timeout = 3000
>
>
> #---- FOR PG_AUTOVACUUM --#
> stats_command_string = true
> stats_row_level = true
>
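For context, the quoted settings translate into roughly the following amounts of memory. This is only a back-of-envelope sketch, assuming the PostgreSQL 8.x unit conventions in effect at the time: shared_buffers and effective_cache_size are counted in 8 kB pages, and work_mem in kB.

```python
# Back-of-envelope conversion of the posted postgresql.conf values,
# assuming 8.x units: 8 kB pages for shared_buffers and
# effective_cache_size, kB for work_mem.
BLOCK_KB = 8  # default PostgreSQL page size in kB

shared_buffers_pages = 50_000     # from the posted config
effective_cache_pages = 1_348_000
work_mem_kb = 512_000

shared_buffers_mb = shared_buffers_pages * BLOCK_KB / 1024
effective_cache_gb = effective_cache_pages * BLOCK_KB / 1024 / 1024
work_mem_mb = work_mem_kb / 1024

print(f"shared_buffers       ~ {shared_buffers_mb:.0f} MB")
print(f"effective_cache_size ~ {effective_cache_gb:.1f} GB")
print(f"work_mem             ~ {work_mem_mb:.0f} MB per sort/hash")
```

Note that work_mem is granted per sort or hash operation, not per connection, so a 500 MB setting can consume RAM quickly when several backends (or several sort nodes in one plan) run concurrently.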
Sorry, I forgot to re-post my hardware specs.
HP DL585
4 x 2.2 GHz Opteron
12GB RAM
SmartArray RAID controller, 1GB hardware cache, 4x73GB 10k SCSI in
RAID 0+1
ext2 filesystem
Also, there are 30 databases on the machine, 27 of which have identical
schemas.