
Re: Performance advice

From: "Michael Mattox" <michael(dot)mattox(at)verideon(dot)com>
To: <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance advice
Date: 2003-06-24 15:47:38
Message-ID: CJEBLDCHAADCLAGIGCOOGEJLCKAA.michael.mattox@verideon.com
Lists: pgsql-performance
I want to thank everyone for their help and post a status update.  I've made
quite a few improvements.  Here's what I did:

I refactored my algorithm: instead of updating the timestamp, monitoring the
website, and then updating the status (two transactions), I now wait and
update the timestamp and status at the same time (one transaction).  This
required using a hashtable of active monitors so that I don't add a monitor
to the queue while it's executing (I check that it's neither in the queue nor
executing before adding it).  This cut my transactions in half.
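The refactored flow can be sketched like this (Python for brevity; the real
app is Java, and the class and method names here are illustrative, not the
actual code):

```python
from collections import deque

class MonitorScheduler:
    """Illustrative sketch: one combined write per monitoring cycle."""

    def __init__(self):
        self.queue = deque()   # monitors waiting to be run
        self.queued = set()    # ids currently sitting in the queue
        self.active = set()    # ids currently executing

    def try_enqueue(self, monitor_id):
        # Only add a monitor if it is neither queued nor executing,
        # so the same monitor is never scheduled twice concurrently.
        if monitor_id in self.queued or monitor_id in self.active:
            return False
        self.queue.append(monitor_id)
        self.queued.add(monitor_id)
        return True

    def start_next(self):
        monitor_id = self.queue.popleft()
        self.queued.discard(monitor_id)
        self.active.add(monitor_id)
        return monitor_id

    def finish(self, monitor_id, nextdate, status, db):
        # One transaction instead of two: nextdate and status are
        # written together after the website check completes.
        db.update_monitor(monitor_id, nextdate, status)
        self.active.discard(monitor_id)
```

The key point is that the timestamp update is deferred until the status is
known, so each monitoring cycle touches the database exactly once.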

I changed the postgresql.conf settings as suggested by several people.  I've
attached the file to this email; please let me know if you see anything else
I can tweak.  top still says I have plenty of RAM, so should I increase the
buffers and/or effective_cache_size even more?
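For scale, a 7.3-era rule of thumb is to keep shared_buffers modest and let
effective_cache_size mirror what the kernel is already caching.  The numbers
below are purely illustrative for a 1.5 GB machine, not a tested
recommendation:

```
shared_buffers = 8192            # 8192 x 8KB pages = 64 MB
effective_cache_size = 131072    # 131072 x 8KB pages = 1 GB, near top's "cached"
```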

Mem:  1547572K av, 1537212K used,   10360K free,       0K shrd,  107028K buff
Swap: 1044216K av,   14552K used, 1029664K free,                1192280K cached
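Those numbers are less dire than they look: Linux counts buffers and page
cache as "used", so the memory actually available to applications is
free + buff + cached.  With the figures above:

```python
# Figures from the top output above, in KB.
free_kb = 10360
buff_kb = 107028
cached_kb = 1192280

# Linux reclaims buffers and page cache on demand, so this is the
# memory effectively available to applications (~1.25 GB here).
available_kb = free_kb + buff_kb + cached_kb
print(f"{available_kb} KB (~{available_kb // 1024} MB) effectively free")
```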

I moved the WAL (pg_xlog directory) to another drive.  There are two drives
in the system, so one has the OS, servers, all files, and the WAL and the
other has nothing but the data.  I think it'd be best to put the WAL on a
separate drive from the OS but I don't know if I can get another drive added
just for that due to our limited budget.
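The usual way to relocate pg_xlog is: stop the server, move the directory,
and leave a symlink in its place.  Sketched below with Python's stdlib
against throwaway temp directories (with a real cluster you would stop
postgres first and use the actual $PGDATA path):

```python
import os
import shutil
import tempfile

# Throwaway stand-ins for the real locations.
root = tempfile.mkdtemp()
data_dir = os.path.join(root, "data")    # stand-in for $PGDATA
drive2 = os.path.join(root, "drive2")    # stand-in for the second disk
os.makedirs(os.path.join(data_dir, "pg_xlog"))
os.makedirs(drive2)

# Move the WAL directory to the other drive, then symlink it back so
# postgres still finds it under $PGDATA/pg_xlog.
src = os.path.join(data_dir, "pg_xlog")
dst = os.path.join(drive2, "pg_xlog")
shutil.move(src, dst)
os.symlink(dst, src)
```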

I learned that I only need to vacuum tables that change frequently.  My app
doesn't do any deletes, and only one table changes: the monitor table, which
is updated several times a second.  So I only need to vacuum that table.
Vacuuming the entire database is slow and unnecessary; vacuuming just the
monitor table takes only a few seconds, much better than the 35 minutes the
entire database was taking this morning.
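On 7.3 that's a single per-table command (the table name below is a
placeholder, since JDO generates the real schema names):

```sql
-- Vacuum and re-analyze only the frequently-updated monitor table;
-- "monitorx" stands in for the JDO-generated name.
VACUUM ANALYZE monitorx;
```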

The result of all this?  Before, a monitor operation (update timestamp,
download webpage, update status) was taking 5-6 seconds, and up to a minute
during a vacuum.  Now it takes less than 1 second.  Part of this is because
I can run 8 threads instead of 32, thanks to the other optimizations.

I want to thank everyone for their input.  I'd heard Postgres is slow and
doesn't scale, but now I see it's really just a matter of learning to
configure it properly, plus some trial & error.  I do think the documentation
could be enhanced a bit here, and I'm sure there are some users who don't
make this effort and end up switching to another database, which is bad for
Postgres' image.  Anyway, I hope my summary can help others who find this
email in the archives.

Regards,
Michael


> -----Original Message-----
> From: pgsql-performance-owner(at)postgresql(dot)org
> [mailto:pgsql-performance-owner(at)postgresql(dot)org]On Behalf Of Michael
> Mattox
> Sent: Tuesday, June 24, 2003 9:40 AM
> To: pgsql-performance(at)postgresql(dot)org
> Subject: [PERFORM] Performance advice
>
>
> I'd like to get some feedback on my setup to see if I can optimize my
> database performance.  My system consists of two separate applications:
>
> The first application connects to websites and records the
> statistics in the
> database.  Websites are monitored every 5 or 10 minutes (depends
> on client),
> there are 900 monitors which comes out to 7,800 monitorings per hour.  The
> monitor table has columns "nextdate" and "status" which are updated with
> every monitoring, and also a row is inserted into the status table and the
> status item table.  For my performance testing (we're just about
> to go live)
> I've loaded the database with a month of data (we don't plan to keep data
> longer than 1 month).  So my status table has 6 million records and my
> status item table has 6 million records as well.  One key is that
> the system
> is multithreaded so up to 32 processes are accessing the database at the
> same time, updating the "nextdate" before the monitoring and inserting the
> status and status item records after.  There is a serious performance
> constraint here because unlike a webserver, this application cannot slow
> down.  If it slows down, we won't be able to monitor our sites at 5 minute
> intervals which will make our customers unhappy.
>
> The second application is a web app (tomcat) which lets customers check
> their status.  Both of these applications are deployed on the
> same server, a
> 4 CPU (Xeon) with 1.5 gigs of RAM.  The OS (RedHat Linux 7.3) and servers
> are running on 18gig 10,000 RPM SCSI disk that is mirrored to a 2nd disk.
> The database data directory is on a separate 36 gig 10,000 RPM SCSI disk
> (we're trying to buy a 2nd disk to mirror it).  I'm using Postgres 7.3.2.
>
> Issue #1 - Vacuum => Overall the system runs pretty well and seems stable.
> Last night I did a "vacuum full analyze" and then ran my app overnight and
> first thing in the morning I did a "vacuum analyze", which took
> 35 minutes.
> I'm not sure if this is normal for a database this size (there are 15,000
> updates per hour).  During the vacuum my application does slow
> down quite a
> bit, and afterwards it slowly speeds back up.  I've attached the
> vacuum output
> to this mail.  I'm using Java Data Objects (JDO) so if table/column names
> look weird it's because the schema is automatically generated.
>
> Issue #2 - postgres.conf => I'd love to get some feedback on
> these settings.
> I've read the archives and I know no one seems to agree, but with
> the above
> description of my app I hope someone can at least point me in the right
> direction:
>
> max_connections = 200
>
> #
> #       Shared Memory Size
> #
> shared_buffers = 3072           # min max_connections*2 or 16, 8KB each
> #max_fsm_relations = 1000       # min 10, fsm is free space map, ~40 bytes
> #max_fsm_pages = 10000          # min 1000, fsm is free space map, ~6 bytes
> #max_locks_per_transaction = 64 # min 10
> #wal_buffers = 8                # min 4, typically 8KB each
>
> #
> #       Non-shared Memory Sizes
> #
> sort_mem = 8192         # min 64, size in KB
> vacuum_mem = 24576              # min 1024, size in KB
>
> The rest are left uncommented (using the defaults).
>
> Issue #3 - server hardware =>
>
> - Is there anything I can do with the hardware to increase performance?
>
> - Should I increase the ram to 2 gigs?  top shows that it is
> using the swap
> a bit (about 100k only).
>
> - I have at my disposal one other server which has 2 Xeons,
> 10,000 RPM SCSI
> drive.  Would it make sense to put Postgres on it and leave my
> apps running
> on the more powerful 4 CPU server?
>
> - Would a RAID setup make the disk faster?  Because top rarely shows the
> CPUs above 50%, I suspect maybe the disk is the bottleneck.
>
> I'm thrilled to be able to use Postgres instead of a commercial
> database and
> I'm looking forward to putting this into production.  Any help with the
> above questions would be greatly appreciated.
>
> Michael Mattox
>
>

Attachment: postgresql.conf
Description: application/octet-stream (5.1 KB)
