From: Rafia Sabih <rafia(dot)sabih(at)enterprisedb(dot)com>
To: Ashutosh Bapat <ashutosh(dot)bapat(at)enterprisedb(dot)com>
Cc: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, Haribabu Kommi <kommi(dot)haribabu(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: WIP: [[Parallel] Shared] Hash
On Wed, Feb 1, 2017 at 10:13 AM, Ashutosh Bapat wrote:
> >> However, ExecHashIncreaseNumBatches() may change the
> >> number of buckets; the patch does not seem to account for spaceUsed changes
> >> because of that.
> > That's what this hunk is intended to do:
> > @@ -795,6 +808,12 @@ ExecHashIncreaseNumBuckets(HashJoinTable hashtable)
> > TRACE_POSTGRESQL_HASH_INCREASE_BUCKETS(hashtable->nbuckets,
> > hashtable->nbuckets_optimal);
> > + /* account for the increase in space that will be used by buckets */
> > + hashtable->spaceUsed += sizeof(HashJoinTuple) *
> > + (hashtable->nbuckets_optimal - hashtable->nbuckets);
> > + if (hashtable->spaceUsed > hashtable->spacePeak)
> > + hashtable->spacePeak = hashtable->spaceUsed;
> > +
> Sorry, I missed that hunk. You are right, it's getting accounted for.
> >> In ExecHashTableReset(), do we want to update spacePeak while setting
> >> spaceUsed.
> > I figured there was no way that the new spaceUsed value could be
> > bigger than spacePeak, because we threw out all chunks and have just
> > the bucket array, and we had that number of buckets before, so
> > spacePeak must at least have been set to a number >= this number
> > either when we expanded to this many buckets, or when we created the
> > hashtable in the first place. Perhaps I should
> > Assert(hashtable->spaceUsed <= hashtable->spacePeak).
> That would help, better if you explain this with a comment before Assert.
> >> While this patch tracks space usage more accurately, I am afraid we might be
> >> overdoing it; a reason why we don't track space usage accurately now. But I
> >> think I will leave it to be judged by someone who is more familiar with the
> >> code and possibly has historical perspective.
> > Well it's not doing more work; it doesn't make any practical
> > difference whatsoever but it's technically doing less work than
> > master, by doing memory accounting only when acquiring a new 32KB
> > chunk.
> This patch does more work while counting the space used by buckets, I
> guess. AFAIU, right now, that happens only after the hash table is
> built completely. But that's fine. I am not worried about whether
> it's less work or more.
> > But if by overdoing it you mean that no one really cares about
> > the tiny increase in accuracy so the patch on its own is a bit of a
> > waste of time, you're probably right.
> This is what I meant by overdoing; you have spelled it better.
> > Depending on tuple size, you
> > could imagine something like 64 bytes of header and unused space per
> > 32KB chunk that we're not accounting for, and that's only 0.2%. So I
> > probably wouldn't propose this refactoring just on accuracy grounds
> > alone.
> > This refactoring is intended to pave the way for shared memory
> > accounting in the later patches, which would otherwise generate ugly
> > IPC if done for every time a tuple is allocated. I considered using
> > atomic add to count space per tuple, or maintaining per-backend local
> > subtotals and periodically summing. Then I realised that switching to
> > per-chunk accounting would fix the IPC problem AND be justifiable on
> > theoretical grounds. When we allocate a new 32KB chunk, we really are
> > using 32KB more of your memory.
> Thanks for considering the comments.
I was doing performance analysis of this patch set on TPC-H at a
higher scale factor and came across the following cases:
- Only 6 queries are using parallel hash.
- Q8 is showing a regression from 8 seconds on head to 15 seconds with
this patch set.
- Most of the remaining queries are not showing significant
improvement in performance; the numbers are:
Query | Head (ms)  | With patch (ms)
------+------------+----------------
    3 |  72829.921 |       59915.961
    5 |  54815.123 |       55751.214
    7 |  41346.71  |       46149.742
    8 |   8801.814 |       15049.155
    9 |  62928.88  |       59077.909
   10 |  62446.136 |       61933.278
Could you please look into this regression case? Also, let me know
whether the setup I am using is one that can be expected to give such
performance with your patch, or whether there is anything else you
would like to point out. Let me know if you need any more information
about these experiments.
The experimental setup is as follows:
Scale factor: 20
work_mem = 64MB
max_parallel_workers_per_gather = 4
shared_buffers = 8GB
effective_cache_size = 10GB
Additional indexes are on the columns (each indexed individually)
l_shipdate, l_shipmode, o_comment, o_orderdate, and c_mktsegment.
For the output plans on head and with this patch set, please check
the attached tar archive.