Re: slow select in big table

From: Abbas <abbas(dot)dba(at)gmail(dot)com>
To: rafalak <rafalak(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: slow select in big table
Date: 2009-04-03 02:11:38
Message-ID: bb2cdf790904021911l7609b03bk53c802b7a82bb414@mail.gmail.com
Lists: pgsql-general

On Fri, Apr 3, 2009 at 2:18 AM, rafalak <rafalak(at)gmail(dot)com> wrote:

> Hello, I have a big table:
> 80 mln records, ~6GB of data, 2 columns (int, int)
>
> If I run the query
> select count(col1) from tab where col2=1234;
> and it matches few rows (1-10), the time is good: 30-40ms,
> but when it matches >1000 rows, the time is >12s.
>
>
> How can I increase performance?
>
>
> my postgresql.conf
> shared_buffers = 810MB
> temp_buffers = 128MB
> work_mem = 512MB
> maintenance_work_mem = 256MB
> max_stack_depth = 7MB
> effective_cache_size = 800MB
>
>
> db 8.3.7
> server: Athlon dual-core 2.0GHz, 2GB RAM, SATA
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
Does the table have an index on col2?
Decreasing work_mem may also improve performance; 512MB is high for a machine with 2GB of RAM.
Monitor the effect of these changes with EXPLAIN ANALYZE on the query.
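For example (a sketch, assuming the table is really named tab as in the quoted query; the index name is illustrative):

```sql
-- Index on the filtered column so the count doesn't need a full scan.
CREATE INDEX idx_tab_col2 ON tab (col2);

-- Refresh planner statistics for the table.
ANALYZE tab;

-- Compare plan and timing before and after creating the index.
EXPLAIN ANALYZE SELECT count(col1) FROM tab WHERE col2 = 1234;
```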

Regards,
Abbas.
