Re: Making tab-complete.c easier to maintain

From: David Fetter <david(at)fetter(dot)org>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Making tab-complete.c easier to maintain
Date: 2015-12-09 18:31:06
Message-ID: 20151209183106.GC10778@fetter.org
Lists: pgsql-hackers

On Wed, Dec 09, 2015 at 03:49:20PM +0000, Greg Stark wrote:
> On Wed, Dec 9, 2015 at 2:27 PM, David Fetter <david(at)fetter(dot)org> wrote:
> > Agreed that the "whole new language" aspect seems like way too big a
> > hammer, given what it actually does.
>
> Which would be easier to update when things change?

This question bears more directly on the patch sets proposed here.

> Which would be possible to automatically generate from gram.y?

That points toward a wholesale, context-aware reworking of tab
completion, rather than the myopic approach ("what has happened within
the past N tokens?", for slowly increasing N) taken both by the current
code and by the two proposals here.
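To make the contrast concrete, here's a minimal sketch of the
token-lookback style (purely illustrative; not psql's actual code, and
the function and variable names are made up): completion is decided by
string-comparing the handful of words before the cursor, so every new
clause shape needs another hard-coded case.

#include <strings.h>

/*
 * Hypothetical "last N tokens" completer: decide what to offer by
 * comparing the previous one or two words, and nothing more.
 */
static const char *
complete_from_prev_words(const char *prev2_wd, const char *prev_wd)
{
    /* After "ALTER TABLE", suggest a table name. */
    if (strcasecmp(prev2_wd, "ALTER") == 0 &&
        strcasecmp(prev_wd, "TABLE") == 0)
        return "<table name>";

    /* After "GROUP", the only sensible follow-up is "BY". */
    if (strcasecmp(prev_wd, "GROUP") == 0)
        return "BY";

    /* No rule matched; offer nothing. */
    return NULL;
}

Once you're three or four commas into a target list, no fixed window of
preceding words can tell you where you are, which is exactly the
limitation above.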

A context-aware tab completion wouldn't care how many items deep you
were in a target list, or a FROM list, or whatever: it would complete
based on the (possibly nested) context (e.g. "in a target list") rather
than on inferences drawn from some slightly variable number of
preceding tokens.
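As a rough sketch of what that could look like (again illustrative
only; the types and names here are invented, not taken from any of the
posted patches), the completer would maintain a stack of grammar
contexts and complete against the innermost one, however many items
have already been typed:

/* Hypothetical nested completion contexts. */
typedef enum CompletionContext
{
    CTX_STATEMENT,      /* at statement level */
    CTX_TARGET_LIST,    /* inside a SELECT target list */
    CTX_FROM_LIST       /* inside a FROM list */
} CompletionContext;

typedef struct ContextStack
{
    CompletionContext items[16];
    int               depth;
} ContextStack;

static CompletionContext
current_context(const ContextStack *stack)
{
    /* The innermost context wins; default to statement level. */
    return (stack->depth > 0) ? stack->items[stack->depth - 1] : CTX_STATEMENT;
}

static const char *
complete_in_context(const ContextStack *stack)
{
    switch (current_context(stack))
    {
        case CTX_TARGET_LIST:
            return "<column name>";   /* 1st or 40th column, it doesn't matter */
        case CTX_FROM_LIST:
            return "<table name>";
        default:
            return "<SQL keyword>";
    }
}

The hard part, of course, is keeping that context stack in sync with
the grammar, which is where generating it from gram.y would come in.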

Cheers,
David.
--
David Fetter <david(at)fetter(dot)org> http://fetter.org/
Phone: +1 415 235 3778 AIM: dfetter666 Yahoo!: dfetter
Skype: davidfetter XMPP: david(dot)fetter(at)gmail(dot)com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate
