Re: ported application having performance issues

From: Tom Lane <tgl@sss.pgh.pa.us>
To: John Mendenhall <john@surfutopia.net>
Cc: pgsql-performance@postgresql.org
Subject: Re: ported application having performance issues
Date: 2005-06-30 22:43:34
Message-ID: 22503.1120171414@sss.pgh.pa.us
Lists: pgsql-performance

John Mendenhall <john@surfutopia.net> writes:
> Merge Join  (cost=4272.84..4520.82 rows=1230 width=21) (actual time=3998.771..4603.739 rows=699 loops=1)
>   Merge Cond: ("outer".contact_id = "inner".id)
>   ->  Index Scan using lead_requests_contact_id_idx on lead_requests lr  (cost=0.00..74.09 rows=1435 width=21) (actual time=0.070..22.431 rows=1430 loops=1)
>   ->  Sort  (cost=4272.84..4352.28 rows=31775 width=11) (actual time=3998.554..4130.067 rows=32640 loops=1)
>         Sort Key: c.id
>         ->  Seq Scan on contacts c  (cost=0.00..1896.77 rows=31775 width=11) (actual time=0.040..326.135 rows=32501 loops=1)
>               Filter: (partner_id IS NULL)
> Total runtime: 4611.323 ms

Hmm ... even on a SPARC, it doesn't seem like it should take 4 seconds
to sort 30000 rows. You can certainly see that the planner is not
expecting that (it's estimating a sort cost comparable to the scan cost,
which if true would put this in the sub-second ballpark).

Does increasing sort_mem help?
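A quick way to test that is to bump it for just your session and re-run
the query. This is a minimal sketch, assuming a pre-8.0 server where the
parameter is still called sort_mem (it was renamed work_mem in 8.0); the
value below is arbitrary, and the SELECT is reconstructed from the plan
shown above, not your actual query text:

    -- raise per-sort memory for this session only (value is in kilobytes)
    SET sort_mem = 16384;

    -- re-run the slow query and compare the Sort step's actual time
    EXPLAIN ANALYZE
    SELECT lr.*
    FROM lead_requests lr
    JOIN contacts c ON lr.contact_id = c.id
    WHERE c.partner_id IS NULL;

If the sort then fits in memory instead of spilling to disk, the Sort
node's actual time should drop sharply.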

Have you considered using some other datatype than "numeric" for your
keys? Numeric may be fast on Oracle but it's not amazingly fast on
Postgres. bigint would be better, if you don't really need 38 digits;
if you do, I'd be inclined to think about plain char or varchar keys.
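For instance, a conversion along these lines might work, using the table
and column names visible in your plan (assuming an 8.0 or later server,
where ALTER COLUMN ... TYPE exists; on 7.4 you would have to add a new
column and copy the data, or dump and reload):

    -- convert the numeric keys on both sides of the join to bigint;
    -- indexes on these columns are rebuilt as part of the ALTER
    ALTER TABLE contacts ALTER COLUMN id TYPE bigint;
    ALTER TABLE lead_requests ALTER COLUMN contact_id TYPE bigint;

Note this rewrites both tables and takes an exclusive lock, so plan for a
maintenance window, and check any foreign key constraints between the two
columns afterwards.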

regards, tom lane
