I was having the same performance problems with Postgres earlier this
year. I used to have my information in a single table with about 30,000 rows.
The problem I was having was that I get an average of about 50,000 users per
day hitting the site, and between 375,000 and 425,000 pages served per
day. I was using MySQL until a friend suggested Postgres to me.
The table isn't all that big; it's the combination of all those people
making queries against a table of that size that is the problem. I would never
see free CPU, even in the middle of the night between 4-6AM EST, which was
usually a pretty dead time period. Switching to Postgres alone didn't solve
the problem I was having with MySQL: high CPU.
I totally restructured the way my scripts work and how they interact with
the database. I now have the site creating tables on the fly to hold the
other sections of the site, with an "index" table that holds the table
information pointing to the other spots in the site. It's semi-involved, so I
won't bore you with the details. Here are my findings with my new setup:
Postgres is MUCH happier doing many more queries that each pull from little
tables than doing one query against one bigger table. The biggest table I have
at this point will always be the main index table, which now holds about 400
rows. With my new setup I see about 40% CPU free on average.
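To make the restructuring concrete, here is a minimal sketch of what an "index table plus per-section tables" layout could look like. All table and column names below are my own invention for illustration, not the poster's actual schema:

```sql
-- "Index" table: one row per section, recording which on-the-fly
-- table holds that section's content.
CREATE TABLE section_index (
    section_id   serial PRIMARY KEY,
    section_name text NOT NULL,
    table_name   text NOT NULL   -- name of the per-section table
);

-- Each section then gets its own small table created on the fly, e.g.:
CREATE TABLE section_humor_42 (
    item_id serial PRIMARY KEY,
    title   text,
    body    text
);
```

The point is that a typical page query then touches only one small per-section table instead of hammering a single large shared table.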
You can see the final product: http://www.nutz.org/
This is generally an adult humor site and holds rather graphic images so you
guys are being warned in advance. The whole thing runs on PHP and Postgres.
From: pgsql-admin-owner(at)postgresql(dot)org On Behalf Of Thomas Heller
Sent: Thursday, November 23, 2000 4:52 AM
Subject: Re: [ADMIN] Lack of Performance
> > -B 256
> > -i
> > -N 48
> > -o '-F -S 512'
> I have a gig of ram and use:
> -B 32768
> -o "-F -S 65534"
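For anyone reading along, here is roughly what those 7.x-era postmaster flags mean (a sketch under the assumption of a 7.x server with 8 kB blocks; check the documentation for your version):

```shell
postmaster -B 32768 -o "-F -S 65534" -D /usr/local/pgsql/data
#   -B 32768      32768 shared disk buffers of 8 kB each (~256 MB shared memory)
#   -o "..."      options passed through to each backend process:
#     -F          disable fsync (faster, but risks data loss on a crash)
#     -S 65534    allow up to ~64 MB of memory per sort before spilling to disk
```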
Hmmm, during peak time these values have no influence on performance at all.
:( The values help to decrease the load during "not-so-busy" times, but
during peak times the load is still around 12-20. This is absolutely
unacceptable for me.
What I don't understand is that the DB is not THAT big. The tables
are around 10,000-30,000 rows and there are only about 6 tables. They all
use indexes where needed, and everything is vacuumed up to 8 times a day. But
the load is not affected by it. I can't seem to find "what" is pressing the
load up so high.
Does query optimization help a lot, or does it only affect performance
marginally? Most queries look for rows with a specific primary id and
return part of the row or the whole row.
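Since the queries are mostly primary-key lookups, one thing worth checking is whether the planner is actually using those indexes. A hedged sketch (the table and column names here are made up):

```sql
-- Check the plan for a typical primary-id lookup:
EXPLAIN SELECT * FROM some_table WHERE id = 12345;

-- If the output shows "Seq Scan" rather than "Index Scan", the planner's
-- statistics are probably stale: plain VACUUM reclaims space, but
-- VACUUM ANALYZE also updates the statistics the planner needs to
-- choose the index.
VACUUM ANALYZE some_table;
```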
Any optimizations hints?