
Re: mysql to postgresql, performance questions

From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Corin <wakathane(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: mysql to postgresql, performance questions
Date: 2010-03-18 15:09:00
Message-ID:
Lists: pgsql-performance

* Corin (wakathane(at)gmail(dot)com) wrote:
> I'm running quite a large social community website (250k users, 16gb  
> database). We are currently preparing a complete relaunch and thinking  
> about switching from mysql 5.1.37 innodb to postgresql 8.4.2. The  
> database server is a dual dualcore operton 2216 with 12gb ram running on  
> debian amd64.

Can you provide at least your postgresql.conf?  That could be useful,
though this does seem like a really simple query.
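
As a point of reference, a first-pass configuration for a dedicated 12 GB box running 8.4 might look like the fragment below. These values are illustrative assumptions only, not tuned recommendations:

```
# postgresql.conf fragment -- illustrative starting values for a 12 GB box
# (assumed values; tune against your own workload)
shared_buffers = 2GB            # dedicated buffer cache
effective_cache_size = 8GB      # planner hint: OS cache + shared_buffers
work_mem = 16MB                 # per-sort / per-hash memory
checkpoint_segments = 16        # 8.4-era WAL setting
```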

> For a first impression I ran a simple query on our users table (snapshot  
> with only ~ 45.000 records). The table has an index on birthday_age  
> [integer]. The test executes 10 times the same query and simply discards  
> the results. I ran the tests using a php and a ruby script, the results  
> are almost the same.

I wouldn't expect it to matter a whole lot, but have you considered
using prepared queries?
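
In SQL terms, a server-side prepared statement for the query below would look roughly like this (a sketch; the statement name and parameter values are examples):

```
-- Parse and plan once per session
PREPARE age_lookup (int, int, int, int) AS
    SELECT * FROM users
    WHERE birthday_age BETWEEN $1 AND $2
       OR birthday_age BETWEEN $3 AND $4
    LIMIT 1000;

-- Execute repeatedly with only the parameters changing
EXECUTE age_lookup (12, 13, 20, 22);
```

Client libraries (PDO in PHP, the pg gem in Ruby) expose the same mechanism through placeholder APIs, which avoids re-parsing the statement on every execution.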

> Unluckily mysql seems to be around 3x as fast as postgresql for this  
> simple query. There's no swapping, disc reading involved...everything is  
> in ram.
> query
> select * from users where birthday_age between 12 and 13 or birthday_age  
> between 20 and 22 limit 1000

Do you use every column from users, and do you really want 1000 records
returned?

> {"QUERY PLAN"=>"Total runtime: 5.847 ms"}

This runtime is the amount of time it took for the backend to run the
query itself.

> 44.173002243042

These times are including all the time required to get the data back to
the client.  If you don't use cursors, all data from the query is
returned all at once.  Can you post the script you're using along with
the table schema and maybe some sample or example data?  Also, are you
doing this all inside a single transaction, or are you creating a new
transaction for every query?  I trust you're not reconnecting to the
database for every query..
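
To illustrate the cursor and single-transaction points, here is a sketch of fetching the same result set incrementally inside one transaction (cursor name is an example; schema taken from the quoted query):

```
BEGIN;
DECLARE user_cur CURSOR FOR
    SELECT * FROM users
    WHERE birthday_age BETWEEN 12 AND 13
       OR birthday_age BETWEEN 20 AND 22
    LIMIT 1000;
FETCH 100 FROM user_cur;   -- first 100 rows; repeat FETCH until empty
CLOSE user_cur;
COMMIT;
```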

> I also wonder why the reported runtime of 5.847 ms is so much different  
> to the runtime reported of my scripts (both php and ruby are almost the  
> same). What's the best tool to time queries in postgresql? Can this be  
> done from pgadmin?

As was mentioned elsewhere, certainly the best tool to test with is your
actual application, if that's possible..  Or at least the language your
application is in.
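
For server-side timing specifically, EXPLAIN ANALYZE is the standard tool; it reports the figure the "Total runtime" line above measures:

```
-- Server-side execution time only (no client transfer)
EXPLAIN ANALYZE
SELECT * FROM users
WHERE birthday_age BETWEEN 12 AND 13
   OR birthday_age BETWEEN 20 AND 22
LIMIT 1000;
```

In psql, `\timing on` reports wall-clock time including result transfer to the client, which is closer to what the PHP and Ruby scripts are measuring.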



pgsql-performance by date

Next: Greg Smith, 2010-03-18 15:16:35, "Re: shared_buffers advice"
Previous: Grzegorz Jaśkiewicz, 2010-03-18 15:08:44, "Re: mysql to postgresql, performance questions"
