Re: Dealing with big tables

From: "Mindaugas" <ml(at)kilimas(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Dealing with big tables
Date: 2007-12-02 12:35:45
Message-ID: E1Iyo2z-00055w-E1@fenris.runbox.com


> What exactly is your goal? Do you need this query to respond in under a
> specific limit? What limit? Do you need to be able to execute many instances
> of this query in less than 5s * the number of executions? Or do you have more
> complex queries that you're really worried about?

I'd like this query to respond within a specific time limit. 5s is acceptable now, but 50s later for 10000 rows would be too slow.

> Both Greenplum and EnterpriseDB have products in this space which let you
> break the query up over several servers but at least in EnterpriseDB's case
> it's targeted towards running complex queries which take longer than this to
> run. I doubt you would see much benefit for a 5s query after the overhead of
> sending parts of the query out to different machines and then reassembling the
> results. If your real concern is with more complex queries they may make sense
> though. It's also possible that paying someone to come look at your database
> will find other ways to speed it up.

I see. This query should also benefit a lot from being run in parallel, even on a single server, since most of its time is spent waiting for the storage to respond.
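For illustration, roughly what I have in mind is splitting the scan into key ranges and running each slice from its own client connection (a sketch only; big_table and id are placeholder names, and the range bounds would come from the real key distribution):

    -- Each range slice runs in a separate client connection, so the
    -- backends wait on storage concurrently instead of one after
    -- another; the client merges the result sets afterwards.
    -- connection 1:
    SELECT * FROM big_table WHERE id >= 0       AND id < 1000000;
    -- connection 2:
    SELECT * FROM big_table WHERE id >= 1000000 AND id < 2000000;
    -- ...and so on for the remaining ranges.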

Also, off list someone pointed me to covering indexes in MySQL. They aren't supported in PostgreSQL, are they?
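As I understand it (a sketch with made-up names; the orders table and its columns are hypothetical), MySQL can answer such a query entirely from a suitable multi-column index, while PostgreSQL cannot:

    -- MySQL: the index on (customer_id, amount) "covers" the query,
    -- so the table rows are never read at all.
    CREATE INDEX orders_cover ON orders (customer_id, amount);
    SELECT amount FROM orders WHERE customer_id = 42;

    -- PostgreSQL accepts the same index, but still has to visit the
    -- heap for every matching index entry to check tuple visibility,
    -- so the table I/O is not avoided.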

Mindaugas
