> I am trying to optimize the count of rows when I am using filters
> (selecting by one or more row parameters).
> In this case I think that PostgreSQL really counts all rows,
> resulting in unacceptable times of 4 seconds in the HTTP server response.
> Triggers + a stored count do not seem very acceptable in this case,
> because I would need to store 1.5 million counting possibilities.
> My question is:
> Any method for indirect count like ordered indexes + quadratic count?
> Any module?
> Any suggestion?
I had a similar problem where HTTP requests triggered a count(*) over a
table that was growing rapidly. The bigger the table got, the longer
the count took. In my case, however, the counts only have to be a
reasonable estimate of the current state, so I solved this problem with
a count_sums table that gets updated every 30 minutes by a simple
Perl script run from a cron job. The HTTP requests now trigger a very
fast select from a tiny 9-row, 2-column table.
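
For illustration, a minimal sketch of that approach might look like
the following; the names (count_sums, filter_key, orders, status) are
hypothetical stand-ins for whatever you are actually filtering on:

  -- Summary table: one cached count per filter value.
  CREATE TABLE count_sums (
      filter_key text PRIMARY KEY,  -- the filter value this count caches
      n          bigint NOT NULL    -- cached count(*) for that value
  );

  -- Refresh statement, run from cron (e.g. every 30 minutes) via psql:
  BEGIN;
  DELETE FROM count_sums;
  INSERT INTO count_sums
      SELECT status, count(*)
      FROM   orders          -- hypothetical table being counted
      GROUP  BY status;
  COMMIT;

  -- The HTTP handler then does a cheap primary-key lookup:
  SELECT n FROM count_sums WHERE filter_key = 'shipped';

Wrapping the delete and re-insert in a single transaction means
readers never see an empty summary table in between.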
How "up to date" do the counts need to be? If the count takes 4
seconds, can you run it every minute and store the counts in a table for
retrieval by the HTTP requests? Or does it absolutely have to be the
exact count at the moment of the request?
If it needs to be more real-time, you could expand on this by adding
AFTER INSERT/DELETE triggers that automatically update the counts table
to keep it current; a sketch follows below. In my case it just wasn't
necessary.
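
A minimal trigger sketch, again with hypothetical names (big_table is
the table being counted, row_counts the summary table):

  CREATE TABLE row_counts (
      table_name text PRIMARY KEY,
      n          bigint NOT NULL
  );

  -- Seed the counter with the current (one-time, slow) count.
  INSERT INTO row_counts SELECT 'big_table', count(*) FROM big_table;

  CREATE OR REPLACE FUNCTION maintain_row_count() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          UPDATE row_counts SET n = n + 1 WHERE table_name = TG_TABLE_NAME;
      ELSIF TG_OP = 'DELETE' THEN
          UPDATE row_counts SET n = n - 1 WHERE table_name = TG_TABLE_NAME;
      END IF;
      RETURN NULL;  -- result of an AFTER trigger is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER big_table_count
      AFTER INSERT OR DELETE ON big_table
      FOR EACH ROW EXECUTE PROCEDURE maintain_row_count();

Note that every insert/delete now updates the single row_counts row,
so this trades count speed for some write contention on that row.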