Re: Parallel Seq Scan

From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>
Cc: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>, Kouhei Kaigai <kaigai(at)ak(dot)jp(dot)nec(dot)com>, Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>, Jeff Davis <pgsql(at)j-davis(dot)com>, Andres Freund <andres(at)2ndquadrant(dot)com>, Amit Langote <amitlangote09(at)gmail(dot)com>, Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>, Fabrízio Mello <fabriziomello(at)gmail(dot)com>, Thom Brown <thom(at)linux(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Parallel Seq Scan
Date: 2015-09-23 16:37:15
Message-ID: CA+TgmobdyE1UmS-oTXG1axc06NKELnYiVKPahvAcOhfMaRyZQA@mail.gmail.com
Views: Raw Message | Whole Thread | Download mbox | Resend email
Thread:
Lists: pgsql-hackers

On Tue, Sep 22, 2015 at 3:14 AM, Haribabu Kommi
<kommi(dot)haribabu(at)gmail(dot)com> wrote:
> copy_user_generic_string system call is because of file read operations.
> In my test, I gave the shared_buffers as 12GB with the table size of 18GB.

OK, cool. So that's actually good: all that work would have to be
done either way, and parallelism lets several CPUs work on it at once.

> The _spin_lock calls are from the signals that are generated by the workers.
> With the increase of tuple queue size, there is a change in kernel system
> calls usage.

And this part is not so good: that's additional work created by
parallelism that wouldn't have to be done if we weren't in parallel
mode. Of course, it's impossible to eliminate that, but we should try
to reduce it.

> - From the above performance readings, increase of tuple queue size
> gets benefited with lesser
> number of workers compared to higher number of workers.

That makes sense to me, because there's a separate queue for each
worker. If we have more workers, then the total amount of queue space
available rises in proportion to the number of workers available.

> Workers are getting started irrespective of the system load. If user
> configures 16 workers, but
> because of a sudden increase in the system load, there are only 2 or 3
> cpu's are only IDLE.
> In this case, if any parallel seq scan eligible query is executed, the
> backend may start 16 workers
> thus it can lead to overall increase of system usage and may decrease
> the performance of the
> other backend sessions?

Yep, that could happen. It's something we should work on, but the
first version isn't going to try to be that smart. It's similar to
the problem we already have with work_mem, and I want to work on it,
but we need to get this working first.

> If the query have two parallel seq scan plan nodes and how the workers
> will be distributed across
> the two nodes? Currently parallel_seqscan_degree is used per plan
> node, even if we change that
> to per query, I think we need a worker distribution logic, instead of
> using all workers by a single
> plan node.

Yes, we need that, too. Again, at some point.

> Select with a limit clause is having a performance drawback with
> parallel seq scan in some scenarios,
> because of very less selectivity compared to seq scan, it should be
> better if we document it. Users
> can take necessary actions based on that for the queries with limit clause.

This is something I want to think further about in the near future.
We don't have a great plan for shutting down workers when no further
tuples are needed because, for example, an upper node has filled a
limit. That makes using parallel query in contexts like Limit and
InitPlan significantly more costly than you might expect. Perhaps we
should avoid parallel plans altogether in those contexts, or maybe
there is some other approach that can work. I haven't figured it out
yet.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

In response to

Browse pgsql-hackers by date

  From Date Subject
Next Message Peter Eisentraut 2015-09-23 16:37:37 Re: unclear about row-level security USING vs. CHECK
Previous Message Peter Eisentraut 2015-09-23 16:26:43 Re: hot_standby_feedback default and docs