
Re: Simple query, 10 million records...MySQL ten times faster

From: "Merlin Moncure" <mmoncure(at)gmail(dot)com>
To: zardozrocks <zardozrocks(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Simple query, 10 million records...MySQL ten times faster
Date: 2007-04-26 21:09:28
Message-ID:
Lists: pgsql-performance
On 24 Apr 2007 14:26:46 -0700, zardozrocks <zardozrocks(at)gmail(dot)com> wrote:
> I have this table:
> CREATE TABLE test_zip_assoc (
>     id serial NOT NULL,
>     f_id integer DEFAULT 0 NOT NULL,
>     lat_radians numeric(6,5) DEFAULT 0.00000 NOT NULL,
>     long_radians numeric(6,5) DEFAULT 0.00000 NOT NULL
> );
> CREATE INDEX lat_radians ON test_zip_assoc USING btree (lat_radians);
> CREATE INDEX long_radians ON test_zip_assoc USING btree
> (long_radians);
> It's basically a table that associates some foreign_key (for an event,
> for instance) with a particular location using longitude and
> latitude.  I'm basically doing a simple proximity search.  I have
> populated the database with *10 million* records.  I then test
> performance by picking 50 zip codes at random and finding the records
> within 50 miles with a query like this:
>         FROM test_zip_assoc
>         WHERE
>                 lat_radians > 0.69014816041
>                 AND lat_radians < 0.71538026567
>                 AND long_radians > -1.35446228028
>                 AND long_radians < -1.32923017502
> On my development server (dual proc/dual core Opteron 2.8 GHz with 4GB
> ram) this query averages 1.5 seconds each time it runs after a brief
> warmup period.  In PostgreSQL it averages about 15 seconds.
> Both of those times are too slow.  I need the query to run in under a
> second with as many as a billion records.  I don't know if this is
> possible but I'm really hoping someone can help me restructure my
> indexes (multicolumn?, multiple indexes with a 'where' clause?) so
> that I can get this running as fast as possible.
> If I need to consider some non-database data structure in RAM I will
> do that too.  Any help or tips would be greatly appreciated.  I'm
> willing to go to great lengths to test this if someone can make a
> good suggestion that sounds like it has a reasonable chance of
> improving the speed of this search.  There's an extensive thread on my
> efforts already here:
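
[Sketched for reference: one of the index restructurings the poster asks about ("multicolumn?") would look like the following. This is an untested illustration against the quoted schema; a two-column btree lets a single index scan drive the lat_radians range, though the long_radians bounds are still checked as a filter within each matching lat_radians entry rather than navigated.]

```sql
-- A two-column btree over both coordinates (sketch of the "multicolumn?" idea).
-- The leading column's range condition is index-navigated; the second
-- column's condition only narrows rows within that range.
CREATE INDEX test_zip_assoc_lat_long_idx
    ON test_zip_assoc USING btree (lat_radians, long_radians);
```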

You can always go the earthdistance route.  The index takes longer to
build (about 5x longer than btree), but it will optimize that exact
kind of search.
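[Sketched out, the earthdistance route could look like this. An illustration only, not tested against the poster's data: ll_to_earth() expects degrees, so the stored radians are converted; earth_box() takes a radius in meters, 50 miles being roughly 80467 m; the sample center point (40.0, -75.0) is made up. On 8.x-era servers the cube and earthdistance contrib modules are installed via their SQL scripts rather than CREATE EXTENSION.]

```sql
-- GiST index over each row's position as an earth cube
CREATE INDEX test_zip_assoc_earth_idx ON test_zip_assoc
    USING gist (ll_to_earth(degrees(lat_radians), degrees(long_radians)));

-- Rows within ~50 miles (80467 m) of a sample point (hypothetical coordinates)
SELECT id, f_id
FROM test_zip_assoc
WHERE earth_box(ll_to_earth(40.0, -75.0), 80467.0)
      @> ll_to_earth(degrees(lat_radians), degrees(long_radians));
```

[Note that earth_box() is a bounding-box containment test, so for an exact radius you would additionally filter with earth_distance() on the rows it returns.]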



Copyright © 1996-2017 The PostgreSQL Global Development Group