
Re: Final decision

From: Josh Berkus <josh(at)agliodbs(dot)com>
To: "Joel Fradkin" <jfradkin(at)wazagua(dot)com>
Cc: "PostgreSQL Perform" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Final decision
Date: 2005-04-27 16:13:35
Message-ID: 200504270913.35952.josh@agliodbs.com
Lists: pgsql-performance
Joel,

> So I am planning on sticking with postgres for our production database
> (going live this weekend).

Glad to have you.

> I did not find any resolutions to my issues with Commandprompt.com (we only
> worked together 2.5 hours).

BTW, your performance troubleshooting will continue to be hampered if you 
can't share actual queries and data structure.  I strongly suggest that you 
sign a confidentiality agreement with a support provider so that you can give 
them detailed (rather than general) problem reports.

> Most of my application is working at about the same speed as MSSQL server
> (unfortunately it's twice the speed box, but as many have pointed out it
> could be an issue with the 4 proc dell). I spent considerable time with
> Dell and could see my drives are delivering 40 meg per sec.

FWIW, on a v40z I get 180MB/s.  So the disk array on your Dell is less than 
ideal ... basically, what you have is a more expensive box, not a faster 
one :-(

> Things I still have to make better are my settings in config, I have it set
> to no merge joins and no seq scans.

Yeah, I'm also finding that our estimator underestimates the real cost of 
merge joins on some systems.  Basically we need a sort-cost variable, 
because I've seen up to a 2x difference in sort cost depending on 
architecture.
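Incidentally, rather than disabling those planner options globally in 
postgresql.conf, you can turn them off per-session for just the queries that 
need it. A sketch (the query itself is hypothetical):

```sql
-- Disable merge joins and sequential scans for this session only,
-- leaving the global postgresql.conf defaults untouched.
SET enable_mergejoin = off;
SET enable_seqscan = off;

-- ... run the problem query here ...

-- Restore the server defaults for the rest of the session.
RESET enable_mergejoin;
RESET enable_seqscan;
```

That keeps the rest of your workload on the planner's normal cost model.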

> I am going to have to use flattened history files for reporting (I saw huge
> difference here the view for audit cube took 10 minutes in explain analyze
> and the flattened file took under one second).
> I understand both of these practices are not desirable, but I am at a place
> where I have to get it live and these are items I could not resolve.

Flattening data for reporting is completely reasonable; I do it all the time.
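The usual pattern is a scheduled rebuild of the flat table, e.g. nightly from 
cron. A minimal sketch, with hypothetical table and column names:

```sql
-- Rebuild the flattened reporting table inside one transaction so
-- readers never see it half-populated (TRUNCATE is transactional
-- in PostgreSQL).
BEGIN;

TRUNCATE audit_flat;

-- Pre-join and pre-aggregate the slow "audit cube" view into one
-- wide table that reporting queries can scan directly.
INSERT INTO audit_flat (client_id, day, events)
SELECT c.client_id,
       date_trunc('day', a.event_time) AS day,
       count(*) AS events
FROM audit a
JOIN clients c ON c.client_id = a.client_id
GROUP BY c.client_id, date_trunc('day', a.event_time);

-- Refresh planner statistics after the bulk load.
ANALYZE audit_flat;

COMMIT;
```

Reporting queries then hit audit_flat instead of the 10-minute view.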

> I believe that was totally IIS not postgres, but I am curious as to if
> using postgres odbc will put more stress on the IIS side than MSSQL did.

Actually, I think the problem may be ODBC.  Our ODBC driver is not the best, 
and it is currently being rebuilt from scratch.  Is using npgsql, a much 
higher-performance driver for .NET, out of the question?  According to one 
company, npgsql performs better than the drivers supplied by Microsoft.

> I did have a question if any folks are using two servers one for reporting
> and one for data entry what system should be the beefier?

Depends on the relative number of users.  This is often a good approach, 
because the requirements for DW reporting and OLTP are completely different.  
Basically:
OLTP: many slow processors, disk array set up for fast writes, moderate 
shared_buffers, low work_mem.
DW: few fast processors, disk array set up for fast reads, high shared_buffers 
and work_mem.
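As illustrative postgresql.conf starting points (the numbers here are my 
assumptions for a mid-size 8.0-era box, not recommendations from this thread; 
tune against your own workload):

```
# OLTP box: moderate shared memory, low per-sort memory.
# In 8.0, shared_buffers is in 8KB pages and work_mem is in KB.
shared_buffers = 20000      # ~160MB
work_mem = 4096             # 4MB per sort/hash; many concurrent users

# DW/reporting box: high shared memory, high per-sort memory.
#shared_buffers = 50000     # ~400MB
#work_mem = 262144          # 256MB; few concurrent queries
```

Note that work_mem is per sort or hash operation, not per connection, which is 
why it must stay low when many OLTP users are connected.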

If reporting is at least 1/4 of your workload, I'd suggest spinning it off to 
the 2nd machine before putting any clients on that machine.  That way you can 
also use the 2nd machine as a failover backup.

-- 
Josh Berkus
Aglio Database Solutions
San Francisco
