Re: improve performance in a big table

From: "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com>
To: "olivier boissard" <olivier(dot)boissard(at)cerene(dot)fr>
Cc: A(dot)Burbello <burbello3000(at)yahoo(dot)com(dot)br>, pgsql-admin(at)postgresql(dot)org
Subject: Re: improve performance in a big table
Date: 2007-12-13 18:04:47
Message-ID: dcc563d10712131004m4d81b878ra6c5d7d5fb1e82e6@mail.gmail.com
Lists: pgsql-admin

On Dec 13, 2007 7:15 AM, olivier boissard <olivier(dot)boissard(at)cerene(dot)fr> wrote:
> A.Burbello wrote:
> > Hi people,
> >
> > I have a case and I'm not sure what the best approach would be.
> >
> > One table has more than 150 million rows, and I thought it could be
> > divided by state.
> > Each row has a person ID, a state, and other information, but the
> > search would be done only by person ID (a numeric column).
> >
> > I can improve the query by putting an index on that column, but are
> > there any other ways?
>
> I have also been studying how to improve performance on big tables.
> Like you, I don't know how to improve things without an index; it's
> the only way I have found.
> I find PostgreSQL is fast on small tables, but I run into real
> performance problems as the number of rows increases.
> Does anyone know if there are specific PostgreSQL tuning parameters
> in the .conf file for big tables?
>
> max_fsm_pages ?
> max_fsm_relations ?
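
For what it's worth, a minimal way to sanity-check those two settings
(this assumes an 8.x-era server where these GUCs still exist; the table
names aren't needed, these are server-wide):

  -- Show the currently configured free-space-map limits.
  SHOW max_fsm_pages;
  SHOW max_fsm_relations;

  -- A database-wide VACUUM VERBOSE prints, at the end of its output, a
  -- summary of how many page slots the free space map actually needs.
  -- If that number exceeds max_fsm_pages, raise the setting in
  -- postgresql.conf (restart required); otherwise dead-row bloat keeps
  -- accumulating in the big tables.
  VACUUM VERBOSE;

Those settings mostly help keep bloat down after updates and deletes;
they won't make an indexed lookup on a 150-million-row table faster by
themselves.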

More often than not, the answer lies not in tuning but in rearranging
how you think of your data and how you create indexes.
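
As a rough sketch (table and column names below are invented to match
the description above, not taken from any real schema), a plain btree
index on the column you actually search by is the first thing to try:

  -- Index the column the lookups filter on.
  CREATE INDEX big_table_person_id_idx ON big_table (person_id);

  -- Refresh planner statistics so the new index gets used sensibly.
  ANALYZE big_table;

If queries only ever filter by person ID, splitting the table up by
state buys you little: the index lets the planner jump straight to the
matching rows no matter which state they're in. Partitioning starts to
pay off when most queries can be confined to a single partition.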

If you guys post some schema and queries (with EXPLAIN ANALYZE) that
aren't running so fast, we'll try to help, although pgsql-performance
is the better place to do that.
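
For reference, "with EXPLAIN ANALYZE" just means prefixing the slow
query, e.g. (hypothetical query matching the table sketched above):

  -- Runs the query and reports the actual plan, row counts and timings.
  EXPLAIN ANALYZE
  SELECT *
  FROM big_table
  WHERE person_id = 12345;

and posting the full output together with the table and index
definitions.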
