From: Leon <leon(at)udmnet(dot)ru>
To: "Ansley, Michael" <Michael(dot)Ansley(at)intec(dot)co(dot)za>
Cc: "'pgsql-hackers(at)postgresql(dot)org'" <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] Lex and things...
Date: 1999-08-24 12:27:00
Message-ID: 37C28F94.B8B90BE0@udmnet.ru
Lists: pgsql-hackers
Ansley, Michael wrote:
> >> Hmm. There is something going on to remove fixed length limits
> >> entirely, maybe someone is already doing something to lexer in
> >> that respect? If no, I could look at what can be done there.
> Yes, me. I've removed the query string limit from psql, libpq, and as much
> of the backend as I can see. I have done some (very) preliminary testing,
> and managed to get a 95kB query to execute. However, the two remaining
> problems that I have run into so far are token size (which you have just
> removed, many thanks ;-),
I'm afraid not. There is an arbitrary limit (named NAMEDATALEN) in the
lexer. If an identifier exceeds it, a '\0' is written at that limit, so
it is effectively truncated. Strings are also limited by
MAX_PARSE_BUFFER, which ultimately comes out to something like
QUERY_BUF_SIZE = 8k*2.
It seems that string literals are the primary target, because that is
the real-life constraint right now. This is not the case with
hypothetical huge identifiers. Should I work on it, or will you do it
yourself?
> and string literals, which are limited, it seems
> to YY_BUF_SIZE (I think).
--
Leon.