From: Josh Berkus <josh(at)agliodbs(dot)com>
To: Max Baker <max(at)warped(dot)org>, Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Joe Conway <mail(at)joeconway(dot)com>, Manfred Koizar <mkoi-pg(at)aon(dot)at>, PostgreSQL Performance Mailing List <pgsql-performance(at)postgresql(dot)org>
Subject: Re: speeding up COUNT and DISTINCT queries
Date: 2003-03-14 17:10:06
Message-ID: 200303140910.06416.josh@agliodbs.com
Lists: pgsql-performance
Max,
> I'm not starting with fresh data every time, I'm usually checking for
> an existing record, then setting a timestamp and a boolean flag.
>
> I've run some profiling and it's about 8000-10,000 UPDATEs every 4
> hours. These are accompanied by about 800-1000 INSERTs.
If these are wide records (i.e., large text fields or lots of columns), you may
want to consider raising max_fsm_pages in postgresql.conf slightly, to about
15,000.
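As a minimal sketch, the free space map settings look like this in postgresql.conf (the values are illustrative, not a recommendation for your workload):

```
# Free space map (postgresql.conf)
# Roughly: total number of data pages with reclaimable space,
# tracked across all tables and indexes.
max_fsm_pages = 15000

# Number of distinct relations the FSM tracks.
max_fsm_relations = 100
```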
You can get a better idea of a good FSM setting by running VACUUM FULL VERBOSE
after your next batch (this takes an exclusive lock on each table while it
runs) and seeing how many data pages are reclaimed, in total, by the vacuum.
Then set max_fsm_pages to at least that level.
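The procedure above can be sketched as follows (the table name is hypothetical, and the exact output format varies by PostgreSQL version):

```sql
-- Run after the batch of UPDATEs/INSERTs; locks the table exclusively.
VACUUM FULL VERBOSE switch_ports;

-- The VERBOSE output reports, per table, figures along the lines of
-- removed tuples and the number of pages changed or truncated.
-- Sum the reclaimed-page counts across your heavily updated tables,
-- and make sure max_fsm_pages is at least that large.
```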
And has anyone mentioned REINDEX on this thread?
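In case it hasn't come up: heavy UPDATE traffic can also bloat the indexes, and that space is recovered with REINDEX rather than VACUUM. A sketch, with hypothetical table and index names (note that REINDEX locks the table while it runs):

```sql
-- Rebuild every index on the table:
REINDEX TABLE switch_ports;

-- Or rebuild just one suspect index:
REINDEX INDEX switch_ports_flag_idx;
```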
--
Josh Berkus
Aglio Database Solutions
San Francisco