Re: Increasing work_mem slows down query, why?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
Cc: Silvio Moioli <moio(at)suse(dot)de>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Increasing work_mem slows down query, why?
Date: 2020-03-30 16:36:17
Message-ID: 19893.1585586177@sss.pgh.pa.us
Lists: pgsql-performance

Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com> writes:
> CTE scan has only 1100 rows, public.rhnpackagecapability has 490964 rows.
> But planner does hash from public.rhnpackagecapability table. It cannot be
> very effective.

[ shrug... ] Without stats on the CTE output, the planner is very
leery of putting it on the inside of a hash join. The CTE might
produce output that ends up in just a few hash buckets, degrading
the join to something not much better than a nested loop. As long
as there's enough memory to hash the known-well-distributed table,
putting it on the inside is safer and no costlier.
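[Editor's note: to illustrate the hazard described above, here is a hypothetical Python sketch of a hash join, not PostgreSQL's actual executor. A hash join builds a hash table over its inner input; if that input's join keys are badly skewed, most rows pile into one bucket and every probe degenerates into a linear scan of that chain, which is what the planner fears from a CTE with unknown statistics.]

```python
from collections import defaultdict

def hash_join(inner_rows, outer_rows, key):
    # Build phase: hash table over the inner input.
    buckets = defaultdict(list)
    for row in inner_rows:
        buckets[key(row)].append(row)

    # Probe phase: each outer row scans the chain in its bucket.
    comparisons = 0
    matches = []
    for row in outer_rows:
        chain = buckets.get(key(row), [])
        comparisons += len(chain)
        matches.extend((row, m) for m in chain)
    return matches, comparisons

# Well-distributed inner input: chains stay short, probes are cheap.
distinct = [(i,) for i in range(1000)]
# Skewed inner input (the stats-less CTE the planner is leery of):
# every row has the same key, so one bucket holds all 1000 rows.
skewed = [(42,)] * 1000

outer = [(42,)] * 100
_, cheap = hash_join(distinct, outer, key=lambda r: r[0])
_, costly = hash_join(skewed, outer, key=lambda r: r[0])
print(cheap, costly)  # 100 vs 100000 probe comparisons
```

With the known-well-distributed table on the inside, each probe touches a one-row chain; with the skewed input inside, the same 100 probes each walk a 1000-row chain, which is the "not much better than a nested loop" behavior described above.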

regards, tom lane
