
Re: [GENERAL] Drupal and PostgreSQL - performance issues?

From: Simon Waters <simonw(at)zynet(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: [GENERAL] Drupal and PostgreSQL - performance issues?
Date: 2008-10-13 14:49:33
Message-ID: 200810131549.33863.simonw@zynet.net
Lists: pgsql-general, pgsql-performance
On Monday 13 October 2008 15:19:07 Scott Marlowe wrote:
> 
> > shared_buffers = 24MB
> > max_fsm_pages = 153600
>
> Well, 24MB is pretty small.  See if you can increase your system's
> shared memory and postgresql's shared_buffers to somewhere around 256M
> to 512M.  It likely won't make a big difference in this scenario, but
> overall it will definitely help.

I noted, after reading earlier messages in the thread, that my distro's own 
documentation admits the value it defaults to for shared_buffers is rather 
small.

One of our servers is fairly pressed for memory (some of the time). Is there 
any way to measure the amount of churn in shared_buffers, as a way of 
demonstrating that more is needed (or that more would help at this moment)?
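For what it's worth, the crudest measure I can think of is the buffer hit 
ratio from pg_stat_database: blks_hit counts blocks found already in 
shared_buffers, blks_read counts blocks that had to be fetched from the 
OS/disk. A sketch (untested here; assumes you run it connected to the 
database in question):

```sql
SELECT datname,
       blks_read,   -- blocks fetched from the OS / disk
       blks_hit,    -- blocks already resident in shared_buffers
       round(blks_hit * 100.0 / nullif(blks_hit + blks_read, 0), 2)
         AS hit_ratio_pct
FROM pg_stat_database
WHERE datname = current_database();
```

A persistently low hit ratio under load would be one argument for a larger 
shared_buffers. The contrib module pg_buffercache can break this down 
further, showing which relations currently occupy the shared buffers.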

This server hosts a few very small databases, plus one that is 768MB (still 
pretty small, but a lot bigger than the rest; most of it is logging 
information). The only "hot" data is the session table: ~9000 rows, with one 
index on the session id. Can I ask Postgres to tell me, or to estimate, how 
much memory this table would occupy if fully cached in memory?
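Partially answering my own question: the on-disk size is a reasonable upper 
bound on what full caching would cost, and Postgres will report it directly 
(table name as in the example below; untested):

```sql
-- heap only
SELECT pg_size_pretty(pg_relation_size('sessions'));

-- heap plus indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('sessions'));
```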

Half the problem in modern computing is knowing what is "slow". In this case, 
counting the rows of the session table takes about 100ms. Deleting expired 
session rows takes about 120ms, more if it hasn't been done for a while, 
which I guess is evidence that the table isn't being cached in memory as 
efficiently as it could be.
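Rather than guessing from wall-clock times, EXPLAIN ANALYZE at least shows 
where the time goes in the plan (run twice in a row: a large drop on the 
second run suggests the first had to pull pages in from disk):

```sql
EXPLAIN ANALYZE SELECT count(*) FROM sessions;
```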

In this case the server reports zero system I/O to half the tools in use, 
because the RAID hardware hides it, so most of the Linux-based tools are 
useless in this context.

At the risk of thread hijacking, for the session table I wonder if we are 
handling it the most efficient way. It is just a regular table, indexed on 
session_id. Each request of note to the server requires retrieval of the 
session record, and often updating the expiry information. Every N requests 
the application also issues a:

DELETE FROM sessions WHERE expires<NOW() OR expires IS NULL;

Since there is no index on expires, this does a sequential scan and deletes 
the stale records. I'm thinking that since the table is already indexed for 
the regular queries, making N larger has almost no obvious penalty, except 
that we accumulate a small number of stale records for longer. I'm not sure 
an index on expires is worth it; the table is probably too small for it to 
make much difference either way.
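If the OR is what forces the sequential scan, one option (untested; the index 
name is just an illustration) would be an index on expires plus splitting the 
delete into two statements, so a plain btree can serve the range test, with a 
partial index covering the NULL case on versions whose btrees can't answer 
IS NULL directly:

```sql
CREATE INDEX sessions_expires_idx ON sessions (expires);
CREATE INDEX sessions_expires_null_idx ON sessions (session_id)
  WHERE expires IS NULL;

DELETE FROM sessions WHERE expires < NOW();
DELETE FROM sessions WHERE expires IS NULL;
```

Whether the planner would actually prefer these over a seqscan on a ~9000-row 
table is another question; at that size the seqscan may well win anyway.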

As for Drupal on Postgres, it might be worth the effort for big 
implementations. I did it for a while, but doing it again I'd go with MySQL. 
That has nothing to do with the database and everything to do with support 
for 3rd-party add-ins, at least until Drupal routinely runs automated tests 
of these things against different backends and configs. Perhaps that is all 
that is needed: a service for Drupal authors that tries their plugins against 
Postgres automatically and complains if one doesn't work?
