Re: PostgreSQL and Ultrasparc T1

From: "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)Sun(dot)COM>
To: Juan Casero <caseroj(at)comcast(dot)net>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: PostgreSQL and Ultrasparc T1
Date: 2005-12-20 04:19:25
Message-ID: 43A7864D.1090400@sun.com
Lists: pgsql-performance

I guess it depends on what you use as your metric for measurement.
If it is just the execution time of a single query, the UltraSPARC T1 may
not be the best choice.
But if you have more than 8 complex queries running simultaneously, the
UltraSPARC T1 can do well comparatively, provided the application can
scale along with it.

The best approach is to figure out your peak workload, find an accurate
way to measure the "true" metric, and then design a benchmark around it
and run it on both servers.
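
For example, such a benchmark can be as simple as firing a fixed number of
representative report queries at the server concurrently and measuring
throughput and latency. A minimal sketch, assuming Python with the psycopg2
driver; the connection string, table, and query below are placeholders to be
replaced with one of the application's real parameterized reporting queries:

    # Minimal concurrency benchmark sketch (assumes Python + psycopg2).
    # The DSN and query are placeholders for illustration only.
    import time
    import threading
    import psycopg2

    DSN = "dbname=sales user=report host=dbserver"                # placeholder
    QUERY = "SELECT count(*) FROM weekly_sales WHERE week >= %s"  # placeholder
    PARAMS = ("2005-01-01",)
    CONCURRENCY = 8   # simultaneous clients; match the expected peak
    ITERATIONS = 5    # queries each client runs back to back

    latencies = []
    lock = threading.Lock()

    def worker():
        # Each thread gets its own connection, i.e. its own backend process.
        conn = psycopg2.connect(DSN)
        try:
            cur = conn.cursor()
            for _ in range(ITERATIONS):
                start = time.time()
                cur.execute(QUERY, PARAMS)
                cur.fetchall()
                elapsed = time.time() - start
                with lock:
                    latencies.append(elapsed)
        finally:
            conn.close()

    threads = [threading.Thread(target=worker) for _ in range(CONCURRENCY)]
    wall_start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    wall = time.time() - wall_start

    print("clients=%d queries=%d" % (CONCURRENCY, len(latencies)))
    print("total wall time: %.2fs  throughput: %.2f queries/s"
          % (wall, len(latencies) / wall))
    print("avg latency: %.2fs  max latency: %.2fs"
          % (sum(latencies) / len(latencies), max(latencies)))

Running the same harness at the expected peak concurrency on both candidate
machines gives a far more honest comparison than a single-query timing.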

Regards,
Jignesh

Juan Casero wrote:

>Ok. That is what I wanted to know. Right now this database is a PostgreSQL
>7.4.8 system. I am using it in a sort of DSS role. I have weekly summaries
>of the sales for our division going back three years. I have a PHP-based
>webapp that I wrote to give the managers access to this data. The webapp
>lets them make selections for reports and then submits a parameterized
>query to the database for execution. The returned data rows are displayed
>and formatted in their web browser. My largest sales table is about 13
>million rows; along with all the indexes it takes up about 20 gigabytes. I
>need to scale this application up to nearly 100 gigabytes to handle daily
>sales summaries. Once we start looking at daily sales figures our database
>size could grow ten to twenty times. I use PostgreSQL because it gives me
>the kind of enterprise database features I need to program the complex logic
>for the queries. I also need the transaction isolation facilities it
>provides so I can optimize the queries in plpgsql without worrying about
>multiple users' temp tables colliding with each other. Additionally, I hope
>to rewrite the front-end application in JSP so maybe I could use the
>multithreaded features of Java to exploit a multicore, multi-CPU system.
>There are almost no writes to the database tables. The bulk of the
>application is just executing parameterized queries and returning huge
>amounts of data. I know Bizgres is supposed to be better at this but I want
>to stay away from anything that is beta. I cannot afford for this thing to
>go wrong. My reasoning for looking at the T1000/2000 was simply the large
>number of cores. I know PostgreSQL uses a superserver that forks copies of
>itself to handle incoming requests on port 5432. But I figured the number of
>cores on the T1000/2000 processors would be utilized by the forked copies of
>the PostgreSQL server. From the comments I have seen so far it does not look
>like this is the case. We had originally sized up a dual-processor, dual-core
>AMD Opteron system from HP for this but I thought I could get more bang for
>the buck on a T1000/2000. It now seems I may have been wrong. I am stronger
>in Linux than Solaris, so I am not upset; I am just trying to find the best
>hardware for the anticipated needs of this application.
>
>Thanks,
>Juan
>
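
On the point above about multiple users' temp tables colliding: temporary
tables in PostgreSQL are private to the session (backend) that creates them,
so two connections can use the same temp table name without interfering. A
minimal sketch of that behaviour, assuming Python with psycopg2 and
placeholder connection and table names:

    # Sketch: PostgreSQL temp tables are session-private, so concurrent
    # report runs that create the same temp table name do not collide.
    # Assumes Python + psycopg2; all names are placeholders.
    import psycopg2

    DSN = "dbname=sales user=report host=dbserver"   # placeholder

    def run_report(conn, label):
        """Build a per-session scratch table and read it back."""
        cur = conn.cursor()
        # Same table name in every session; each backend sees only its own copy.
        cur.execute("CREATE TEMP TABLE report_scratch (label text)")
        cur.execute("INSERT INTO report_scratch VALUES (%s)", (label,))
        cur.execute("SELECT label FROM report_scratch")
        return cur.fetchall()

    conn_a = psycopg2.connect(DSN)
    conn_b = psycopg2.connect(DSN)

    print(run_report(conn_a, "user A"))   # [('user A',)]
    print(run_report(conn_b, "user B"))   # [('user B',)] -- no collision with A

    conn_a.close()
    conn_b.close()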
>On Monday 19 December 2005 01:25, Scott Marlowe wrote:
>
>
>>From: pgsql-performance-owner(at)postgresql(dot)org on behalf of Juan Casero
>>
>>QUOTE:
>>
>>Hi -
>>
>>
>>Can anyone tell me how well PostgreSQL 8.x performs on the new Sun
>>UltraSPARC T1 processor and architecture on Solaris 10? I have a custom
>>built retail sales reporting application that I developed using PostgreSQL
>>7.4.8 and PHP on a Fedora Core 3 Intel box. I want to scale this application
>>upwards to handle a database that might grow to 100 GB. Our company is
>>green-mission conscious now, so I was hoping I could use that to convince
>>management to consider a Sun UltraSPARC T1 or T2 system, provided I can get
>>the best performance out of it with PostgreSQL.
>>
>>ENDQUOTE:
>>
>>Well, generally, an AMD 64-bit box is going to be a better value for your
>>dollar, and will run faster than most SPARC-based machines.
>>
>>Also, PostgreSQL is generally faster under either BSD or Linux than under
>>Solaris on the same box. This might or might not hold as you crank up the
>>numbers of CPUs.
>>
>>PostgreSQL runs one process per connection. So, to use extra CPUs, you
>>really need to have >1 connection running against the database.
>>
>>Mostly, databases tend to be I/O bound until you give them a lot of I/O;
>>then they'll be CPU bound.
>>
>>After that, lots of memory, THEN more CPUs. Two CPUs is always useful, as
>>one can be servicing the OS and another the database. But unless you're
>>gonna have lots of users hooked up, more than 2 to 4 CPUs is usually a
>>waste.
>>
>>So, I'd recommend a dual core or dual dual core (i.e. 4 cores) AMD64 system
>>with 2 or more gigs of RAM, and at least a pair of fast drives in a mirror
>>with a hardware RAID controller with battery-backed cache. If you'll be
>>trundling through all 100 gigs of your data set regularly, then get all the
>>memory you can put in a machine at a reasonable cost before buying lots of
>>CPUs.
>>
>>But without knowing what you're gonna be doing we can't really make solid
>>recommendations...
>>
>>
>
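
Following up on the point above that PostgreSQL runs one backend process per
connection: a single session will only ever keep one core busy, but a large
report can be split across several connections so that the forked backends
spread over the cores. A minimal sketch of that idea, again assuming Python
with psycopg2 and hypothetical table and column names:

    # Sketch: split one large aggregation across several connections so the
    # per-connection backend processes can run on separate cores.
    # Assumes Python + psycopg2; table/column names are placeholders.
    from multiprocessing import Pool
    import psycopg2

    DSN = "dbname=sales user=report host=dbserver"   # placeholder

    # Hypothetical date partitions of a three-year weekly sales table.
    PARTITIONS = [
        ("2003-01-01", "2004-01-01"),
        ("2004-01-01", "2005-01-01"),
        ("2005-01-01", "2006-01-01"),
    ]

    def sum_partition(bounds):
        """Open a dedicated connection (one backend) and aggregate one slice."""
        lo, hi = bounds
        conn = psycopg2.connect(DSN)
        try:
            cur = conn.cursor()
            cur.execute(
                "SELECT coalesce(sum(amount), 0) FROM weekly_sales "
                "WHERE week >= %s AND week < %s",
                (lo, hi),
            )
            return cur.fetchone()[0]
        finally:
            conn.close()

    if __name__ == "__main__":
        # One client worker per partition; each holds its own connection.
        with Pool(len(PARTITIONS)) as pool:
            partial_sums = pool.map(sum_partition, PARTITIONS)
        print("grand total:", sum(partial_sums))

The parallelism here comes entirely from the server side: each connection gets
its own forked backend, which is what lets a T1000/2000-class machine (or any
multicore box) put more than one core to work on the same report.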
