
Re: Performance question 83 GB Table 150 million rows, distinct select

From: Tory M Blue <tmblue(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Performance question 83 GB Table 150 million rows, distinct select
Date: 2011-11-17 00:45:01
Message-ID: CAEaSS0btTbnEBkGd9pOtTG9jqnr+2qynSCAQrRRa33CCBOByqA@mail.gmail.com
Lists: pgsql-performance
Thanks, all. I misspoke about our use of the index.

We do have an index on log_date, and it is being used; here is the
EXPLAIN ANALYZE plan.



Aggregate  (cost=7266186.16..7266186.17 rows=1 width=8) (actual time=127575.030..127575.030 rows=1 loops=1)
  ->  Bitmap Heap Scan on userstats  (cost=135183.17..7240890.38 rows=10118312 width=8) (actual time=8986.425..74815.790 rows=33084417 loops=1)
        Recheck Cond: (log_date > '2011-11-04'::date)
        ->  Bitmap Index Scan on idx_userstats_logdate  (cost=0.00..132653.59 rows=10118312 width=0) (actual time=8404.147..8404.147 rows=33084417 loops=1)
              Index Cond: (log_date > '2011-11-04'::date)
Total runtime: 127583.898 ms
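One thing worth noting in that plan: the planner estimated ~10.1 million rows but the scan actually returned ~33 million, which suggests the statistics for log_date may be stale or too coarse. A sketch of refreshing them (the column name comes from the plan above; the statistics target of 1000 is an arbitrary assumption, not a recommendation):

```sql
-- Assumption: raising the per-column statistics target gives the planner
-- a finer histogram for log_date; 1000 is an illustrative value.
ALTER TABLE userstats ALTER COLUMN log_date SET STATISTICS 1000;

-- Re-sample the table so the new target takes effect.
ANALYZE userstats;
```

A closer row estimate won't necessarily change this particular plan, but it helps the planner cost alternatives (e.g. bitmap vs. sequential scan) more accurately.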

Partitioning Tables

This is used primarily when you are usually accessing only part of
the data. We want our queries to go across the entire date range, so
we don't really meet the criteria for partitioning (I had to do some
quick research).
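For reference, on PostgreSQL of that era partitioning meant table inheritance plus CHECK constraints, with constraint exclusion pruning partitions at plan time. A minimal sketch, assuming a monthly split on log_date (the child table and index names here are hypothetical):

```sql
-- Hypothetical child partition holding one month of userstats rows.
-- The CHECK constraint is what lets constraint exclusion skip the
-- partition when a query's date range doesn't overlap it.
CREATE TABLE userstats_2011_11 (
    CHECK (log_date >= DATE '2011-11-01' AND log_date < DATE '2011-12-01')
) INHERITS (userstats);

-- Each child needs its own index; indexes are not inherited.
CREATE INDEX idx_userstats_2011_11_logdate
    ON userstats_2011_11 (log_date);

-- Enable pruning for queries that filter on log_date.
SET constraint_exclusion = partition;
```

As noted above, though, queries that span the whole date range touch every partition anyway, so this mainly pays off when most queries hit a narrow date window.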

Thanks again
Tory

