Re: Serious performance problem

From: "Tille, Andreas" <TilleA(at)rki(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Serious performance problem
Date: 2001-10-30 14:09:32
Message-ID: Pine.LNX.4.33.0110301417250.6117-100000@wr-linux02.rki.ivbb.bund.de
Lists: pgadmin-hackers pgsql-hackers

On Tue, 30 Oct 2001, Jean-Michel POURE wrote:

> Is your database read-only?
It receives a daily update from an MS SQL server; between updates it is read-only.

> Good point, but sorry to insist: your problem is one of software
> optimization. In your case, the database may climb to 200 million rows
> (1000 days x 200,000 rows). What are you going to do then? Buy a
> 16-Itanium computer with 10 GB RAM and an MS SQL Server licence? Take a
> close look at your problem. How long does it take MS SQL Server to
> query 200 million rows? The problem is not in choosing MS SQL or
> PostgreSQL ...
It certainly is. If one server is 10 to 30 times faster for the very same
tasks, and chances are high that it scales better over the next orders of
magnitude our data will grow into in the coming years because of real
index usage (see postings on the hackers list), then the decision is easy.
My colleague has verified that MS SQL Server will cope for the next few
years, and I can only convince him if another server shows comparable
speed *for the same task*.

> If you are adding 200,000 rows of data every day, consider using a combination
I do not add that much.

> of CREATE TABLE AS to create a result table, with PL/pgSQL triggers to
> maintain data consistency. You will then get instant results, even on 2
> billion rows, because you will always query the result table, not the
> original one. Large databases are always optimized this way because, even
> with smart indexes, there are things (like your problem) that need
> *smart* optimization.
>
> Do you need PL/pgSQL source code to perform a test on 2 billion rows? If
> so, please email me on pgsql-general and I will send you the code.
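[Editor's note: the summary-table approach described above can be sketched
roughly as follows. All table, column, and function names here are
hypothetical, not taken from the thread; the quoted poster's actual code
was only offered by private email.]

```sql
-- Hypothetical sketch: seed a per-day summary table from the large table,
-- then keep it current with an AFTER INSERT trigger so queries hit the
-- small result table instead of the multi-million-row original.
CREATE TABLE daily_counts AS
    SELECT day, count(*) AS n_rows
    FROM measurements
    GROUP BY day;

CREATE FUNCTION bump_daily_count() RETURNS trigger AS '
BEGIN
    UPDATE daily_counts SET n_rows = n_rows + 1 WHERE day = NEW.day;
    IF NOT FOUND THEN
        INSERT INTO daily_counts (day, n_rows) VALUES (NEW.day, 1);
    END IF;
    RETURN NEW;
END;
' LANGUAGE 'plpgsql';

CREATE TRIGGER measurements_count
    AFTER INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE bump_daily_count();
```

Reporting queries then read `daily_counts` directly, which stays small no
matter how large `measurements` grows.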
I really believe that many problems in the world fall into this category,
and you are completely right. My colleague is a database expert (I consider
myself a beginner), and he has verified that performance will not be an
issue for the next couple of years. So why spend hours optimising things
that already work perfectly? Why not ask the PostgreSQL authors to optimise
the server so that the very same task performs comparably? If we later need
further database optimisation because of additional constraints, I will be
the first to start on it. But there must be server code in the world that
can answer the example query that fast; this has been proven!

Kind regards

Andreas.
