|From:||Etsuro Fujita <fujita(dot)etsuro(at)lab(dot)ntt(dot)co(dot)jp>|
|To:||Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>|
|Subject:||Re: non-bulk inserts and tuple routing|
(2018/02/13 10:12), Amit Langote wrote:
> On 2018/02/09 21:20, Etsuro Fujita wrote:
>>>> * Please add a brief description about partition_oids to the comments for
>>>> this struct.
>>>> @@ -91,6 +91,7 @@ typedef struct PartitionTupleRouting
>>>> PartitionDispatch *partition_dispatch_info;
>>>> int num_dispatch;
>>>> + Oid *partition_oids;
>> Thanks, but one thing I'm wondering is: do we really need this array? I
>> think we could store into PartitionTupleRouting the list of oids returned
>> by RelationGetPartitionDispatchInfo in ExecSetupPartitionTupleRouting,
>> instead. Sorry, I should have commented this in a previous email, but
>> what do you think about that?
> ExecInitPartitionInfo() will have to iterate the list to get the OID of
> the partition to be initialized. Isn't it much cheaper with the array?
Good point! So I agree with adding that array.
>> Here are other comments:
>> o On changes to ExecSetupPartitionTupleRouting:
>> * This is nitpicking, but it would be better to define partrel and
>> part_tupdesc within the if (update_rri_index < num_update_rri &&
>> RelationGetRelid(update_rri[update_rri_index].ri_RelationDesc) ==
>> leaf_oid) block.
>> - ResultRelInfo *leaf_part_rri;
>> + ResultRelInfo *leaf_part_rri = NULL;
>> Relation partrel = NULL;
>> TupleDesc part_tupdesc;
>> Oid leaf_oid = lfirst_oid(cell);
> Sure, done.
>> * Do we need this? For a leaf partition that is already present in the
>> subplan resultrels, the partition's indices (if any) would have already
>> been opened.
>> + /*
>> + * Open partition indices. We wouldn't need speculative
>> + * insertions though.
>> + */
>> + if (leaf_part_rri->ri_IndexRelationDescs == NULL)
>> + ExecOpenIndices(leaf_part_rri,
> You're right. Removed the call.
Thanks for the above changes!
> Updated patch is attached.
Thanks, here are some minor comments:
o On changes to ExecCleanupTupleRouting:
- heap_close(resultRelInfo->ri_RelationDesc, NoLock);
+ if (resultRelInfo)
+ heap_close(resultRelInfo->ri_RelationDesc, NoLock);
You might instead check at the top of the loop whether resultRelInfo is
NULL and, if so, continue. I think that would save a few cycles.
o On ExecInitPartitionInfo:
+ int firstVarno;
+ Relation firstResultRel;
My old compiler got "variable may be used uninitialized" warnings.
+ * Build the RETURNING projection if any for the partition. Note that
+ * we didn't build the returningList for each partition within the
+ * planner, but simple translation of the varattnos will suffice.
+ * This only occurs for the INSERT case; in the UPDATE/DELETE cases,
+ * ExecInitModifyTable() would've initialized this.
I think the last comment should be the same as for WCO lists: "This only
occurs for the INSERT case or in the case of UPDATE for which we didn't
find a result rel above to reuse."
+ * Initialize result tuple slot and assign its rowtype using the
+ * RETURNING list. We assume the rest will look the same.
+ tupDesc = ExecTypeFromTL(returningList, false);
+ /* Set up a slot for the output of the RETURNING projection(s) */
+ ExecInitResultTupleSlot(estate, &mtstate->ps);
+ ExecAssignResultType(&mtstate->ps, tupDesc);
+ slot = mtstate->ps.ps_ResultTupleSlot;
+ /* Need an econtext too */
+ if (mtstate->ps.ps_ExprContext == NULL)
+ ExecAssignExprContext(estate, &mtstate->ps);
+ econtext = mtstate->ps.ps_ExprContext;
Do we need this initialization? I think we would already have the slot
and econtext initialized when we get here.
Other than that, the patch looks good to me.
Sorry for the delay.