Re: Slow query in trigger function

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Guido Niewerth <gniewerth(at)ocsgmbh(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Slow query in trigger function
Date: 2015-11-02 18:10:19
Message-ID: 7702.1446487819@sss.pgh.pa.us
Lists: pgsql-performance

Guido Niewerth <gniewerth(at)ocsgmbh(dot)com> writes:
> And this is the execution plan. It looks like it does a slow sequential scan where it's able to do an index scan:

> 2015-11-02 17:42:10 CET LOG:  duration: 5195.673 ms  plan:
>   Query Text: SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key LIMIT 1 )
>   Result  (cost=0.09..0.10 rows=1 width=0) (actual time=5195.667..5195.667 rows=1 loops=1)
>     Output: (NOT $0)
>     Buffers: shared hit=34 read=351750
>     InitPlan 1 (returns $0)
>       ->  Limit  (cost=0.00..0.09 rows=1 width=0) (actual time=5195.662..5195.662 rows=0 loops=1)
>             Output: (1)
>             Buffers: shared hit=34 read=351750
>             ->  Seq Scan on public.custom_data  (cost=0.00..821325.76 rows=9390835 width=0) (actual time=5195.658..5195.658 rows=0 loops=1)
>                   Output: 1
>                   Filter: (custom_data.key = $15)
>                   Buffers: shared hit=34 read=351750

It looks like you're getting bit by an inaccurate estimate of what will be
the quickest way to satisfy a LIMIT query. In this particular situation,
I'd advise just dropping the LIMIT, as it contributes nothing useful.
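
For example (just a sketch, keeping the same table and trigger variable shown in the plan above), the check can simply read

    SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = old.key );

which has the same semantics but leaves the planner free to choose an index scan on "key".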

(If memory serves, 9.5 will actually ignore constant-LIMIT clauses inside
EXISTS(), because people keep writing them even though they're useless.
Earlier releases do not have that code though.)
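
If you want to confirm the new plan, something along these lines should now show an index scan rather than the seq scan (the literal here is only a placeholder for whatever old.key holds in your trigger):

    EXPLAIN (ANALYZE, BUFFERS)
    SELECT NOT EXISTS( SELECT 1 FROM custom_data WHERE key = 'some key value' );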

regards, tom lane
