
Re: Defining performance.

From: Chris <dmagick(at)gmail(dot)com>
To: Tobias Brox <tobias(at)nordicbet(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Defining performance.
Date: 2006-12-01 03:32:05
Message-ID:
Lists: pgsql-performance
Tobias Brox wrote:
> [nospam(at)hardgeus(dot)com - Thu at 06:37:12PM -0600]
>> As my dataset has gotten larger I have had to throw more metal at the
>> problem, but I have also had to rethink my table and query design.  Just
>> because your data set grows linearly does NOT mean that the performance of
>> your query is guaranteed to grow linearly!  A sloppy query that runs OK
>> with 3000 rows in your table may choke horribly when you hit 50000.
> Then some limit is hit ... either the memory cache, or that the planner
> is doing an unlucky change of strategy when hitting 50000.

Not really. A bad query is a bad query (e.g. one missing a join condition). It 
won't show up with 3000 rows, but it will very quickly if you increase that 
by a reasonable amount. Even something as simple as a missing index on a join 
column won't show up for a small dataset, but it will for a larger one.
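A minimal sketch of the "missing join element" case, using Python's sqlite3 with an invented two-table schema (not from the original thread): with the join condition present, the result has one row per order; with it missing, the query degenerates into a cross product, so the row count (and the work) grows with the product of the table sizes rather than linearly.

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")

n = 100
cur.executemany("INSERT INTO customers VALUES (?, ?)",
                [(i, f"c{i}") for i in range(n)])
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % n) for i in range(n)])

# Correct join: one matching customer per order -> n rows.
good = cur.execute(
    "SELECT COUNT(*) FROM orders o JOIN customers c ON o.customer_id = c.id"
).fetchone()[0]

# Missing join condition: a cross product -> n * n rows.
bad = cur.execute(
    "SELECT COUNT(*) FROM orders o, customers c"
).fetchone()[0]

print(good, bad)  # 100 10000
```

At 3000 rows the cross product is already 9 million rows; at 50000 it is 2.5 billion, which is exactly where a query that "ran OK" starts to choke.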

It's a pretty common mistake to assume that a small dataset will behave 
exactly the same as a larger one; that's not always the case.
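The missing-index case can be sketched the same way (again with sqlite3 and an invented table, for illustration only): EXPLAIN QUERY PLAN shows a full table scan without the index and an index search with it. The scan's cost grows linearly with the table, which is why the problem stays hidden while the dataset is small.

```python
import sqlite3

# Invented table, for illustration only.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, i % 50) for i in range(1000)])

query = "EXPLAIN QUERY PLAN SELECT * FROM orders WHERE customer_id = 7"

# Without an index: every row is examined, so the cost grows with the table.
plan_before = cur.execute(query).fetchall()

cur.execute("CREATE INDEX idx_orders_customer ON orders(customer_id)")

# With the index: a search that barely notices table growth.
plan_after = cur.execute(query).fetchall()

print(plan_before[0][3])  # e.g. 'SCAN orders' (wording varies by SQLite version)
print(plan_after[0][3])   # e.g. 'SEARCH orders USING INDEX idx_orders_customer ...'
```

The same check in PostgreSQL is plain EXPLAIN, which reports a Seq Scan before the index exists and an Index Scan (or Bitmap Index Scan) afterwards.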

Postgresql & php tutorials

