Re: Large databases, performance

From: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
To: shridhar_daithankar(at)persistent(dot)co(dot)in
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large databases, performance
Date: 2002-10-03 16:26:34
Message-ID: 1033662394.21324.59.camel@camel
Lists: pgsql-general pgsql-hackers pgsql-performance pgsql-sql

On Thu, 2002-10-03 at 12:17, Shridhar Daithankar wrote:
> On 3 Oct 2002 at 11:57, Robert Treat wrote:
> Maybe it's time to rewrite the famous myth that PostgreSQL is slow. When
> properly tuned or given enough head room, it's almost as fast as MySQL..

That myth was disproven long ago; it just takes a while for
everyone to catch on ;-)

> > I'm curious, did you happen to run the select tests while also running
> > the insert tests? IIRC the older MySQL versions have to lock the table
> > when doing an insert, so select performance goes in the dumper in that
> > scenario; perhaps that's not an issue with 3.23.52?
>
> IMO even if it locks tables, that shouldn't affect select performance. It
> would be fun to watch what happens when we insert multiple chunks of data and
> fire queries concurrently. I would be surprised if MySQL starts slowing down..
>

Hmm... it's been a while since I dug into MySQL internals, but IIRC once
the table was locked, you had to wait for the insert to complete so the
table would be unlocked and the select could go through. (Maybe this is
a myth that I need to get clued in on.)
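
For what it's worth, here is a minimal sketch of the behavior I mean, against a
hypothetical MyISAM table called bench (the table and its columns are made up
for illustration). In 3.23-era MySQL an INSERT takes a table-level write lock
for its duration, which is roughly what LOCK TABLES ... WRITE makes explicit:

    -- hypothetical table; MyISAM was the default table type in 3.23
    CREATE TABLE bench (id INT, val VARCHAR(32)) TYPE=MyISAM;

    -- Session 1: hold a table-level write lock, as a long-running INSERT would
    LOCK TABLES bench WRITE;
    INSERT INTO bench VALUES (1, 'row');

    -- Session 2 (concurrently): this read waits until the write lock is freed
    SELECT COUNT(*) FROM bench;

    -- Session 1: release the lock; only now does session 2's SELECT return
    UNLOCK TABLES;

If that still holds in 3.23.52, a concurrent insert + select run should show it
pretty quickly.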

> > It also seems like the vacuum after each insert is unnecessary, unless
> > you're also deleting/updating data behind it. Perhaps just running an
> > ANALYZE on the table would suffice while reducing overhead.
>
> I believe that was a vacuum analyze only. But it still takes a lot of time.
> The good thing is that it's not blocking..
>
> Anyway, I don't think such frequent vacuums are going to convince the planner
> to choose an index scan over a sequential scan. I am sure it's already convinced..
>

My thinking was that if you're just doing inserts, you only need to update the
statistics and don't need to check for dead tuples.
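
As a rough illustration (the table and column names here are hypothetical, not
taken from the benchmark in this thread):

    -- refresh planner statistics only; no scan for dead tuples
    ANALYZE bigtable;

    -- also reclaims dead tuples, which a pure insert load doesn't create
    VACUUM ANALYZE bigtable;

    -- quick check of what the planner actually picks after the stats update
    EXPLAIN SELECT * FROM bigtable WHERE id = 42;

For an insert-only table the plain ANALYZE should give the planner the same
statistics at a fraction of the cost.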

Robert Treat
