Re: Performance question 83 GB Table 150 million rows, distinct select

From: Alan Hodgson <ahodgson(at)simkin(dot)ca>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance question 83 GB Table 150 million rows, distinct select
Date: 2011-11-16 23:27:57
Message-ID: 201111161527.57443.ahodgson@simkin.ca
Lists: pgsql-performance

On November 16, 2011 02:53:17 PM Tory M Blue wrote:
> We now have about 180mill records in that table. The database size is
> about 580GB and the userstats table which is the biggest one and the
> one we query the most is 83GB.
>
> Just a basic query takes 4 minutes:
>
> For e.g. select count(distinct uid) from userstats where log_date
> >'11/7/2011'
>
> Just not sure if this is what to expect, however there are many other
> DB's out there bigger than ours, so I'm curious what can I do?

That query should use an index on log_date if one exists, unless the planner
estimates it would need to read too much of the table, in which case a
sequential scan is cheaper.
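Something like the following would cover that predicate (a sketch only — the
table and column names come from the quoted post, and the index name is made
up; EXPLAIN ANALYZE will show whether the planner actually picks the index):

```sql
-- B-tree index on the date column, if one doesn't already exist:
CREATE INDEX userstats_log_date_idx ON userstats (log_date);

-- Check what plan the date-range query really gets:
EXPLAIN ANALYZE
SELECT count(DISTINCT uid)
FROM userstats
WHERE log_date > DATE '2011-11-07';
```

If the plan still shows a sequential scan, the row-count estimate for the
range is likely too large a fraction of the table for an index scan to win.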

Also, the normal approach to making large statistics tables more manageable is
to partition them by date range.
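A minimal sketch of date-range partitioning via table inheritance, which was
the standard approach on the PostgreSQL releases of that era (newer versions
offer declarative PARTITION BY RANGE instead). All names here are
illustrative, not from the original post:

```sql
-- One child table per month, with a CHECK constraint describing its range:
CREATE TABLE userstats_2011_11 (
    CHECK (log_date >= DATE '2011-11-01' AND log_date < DATE '2011-12-01')
) INHERITS (userstats);

CREATE INDEX userstats_2011_11_log_date_idx
    ON userstats_2011_11 (log_date);

-- With constraint exclusion enabled, a query filtered on log_date skips
-- every partition whose CHECK constraint rules it out:
SET constraint_exclusion = partition;
SELECT count(DISTINCT uid)
FROM userstats
WHERE log_date > DATE '2011-11-07';
```

Old partitions can then be dropped or archived as whole tables, which is far
cheaper than bulk DELETEs against one 83 GB table.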
