Re: [PATCH] Fix out-of-bounds access (src/common/wchar.c)

From: Ranier Vilela <ranier(dot)vf(at)gmail(dot)com>
To: Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>
Cc: Julien Rouhaud <rjuju123(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Fix out-of-bounds access (src/common/wchar.c)
Date: 2022-02-17 12:19:17
Message-ID: CAEudQAoyyADH9VCQ7RyvWFSxQDQpVK1fzS1AdS2nZndZVo3mzA@mail.gmail.com
Lists: pgsql-hackers

Sorry for the broken post...

On Thu, Feb 17, 2022 at 05:25, Kyotaro Horiguchi <
horikyota(dot)ntt(at)gmail(dot)com> wrote:

> At Thu, 17 Feb 2022 15:50:09 +0800, Julien Rouhaud <rjuju123(at)gmail(dot)com>
> wrote in
> > On Thu, Feb 17, 2022 at 03:51:26PM +0900, Kyotaro Horiguchi wrote:
> > > So, the function doesn't return 63 for all registered names and wrong
> > > names.
> > >
> > > So other possibilities I can think of are..
> > > - Someone had broken pg_encname_tbl[]
> > > - Cosmic ray hit, or ill memory cell.
> > > - Coverity worked wrong way.
> > >
> > > Could you show the workload for the Coverity warning here?
> >
> > The 63 upthread was hypothetical right? pg_encoding_max_length()
> shouldn't be
>
> I understand that Coverity complained pg_verify_mbstr_len is fed with
> encoding = 63 by length_in_encoding. I don't know what made Coverity
> think so.
>
I think I found the reason.

>
> > called with user-dependent data (unlike pg_encoding_max_length_sql()),
> so I
> > also don't see any value spending cycles in release builds. The error
> should
> > only happen with bogus code, and assert builds are there to avoid that,
> or
> > corrupted memory and in that case we can't make any promise.
>
> Well, it's more or less what I wanted to say. Thanks.
>
One thing about this thread that may go unnoticed is
that the analysis was done on a Windows build.

If we're talking about consistency, then the current implementation of
pg_encoding_max_length is completely inconsistent with the rest of the
file's functions; even if it saves a few cycles, this is bad practice.

int
pg_encoding_max_length(int encoding)
{
	return (PG_VALID_ENCODING(encoding) ?
			pg_wchar_table[encoding].maxmblen :
			pg_wchar_table[PG_SQL_ASCII].maxmblen);
}
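
For comparison, the neighbouring lookup functions in src/common/wchar.c
guard the table access exactly that way. Quoting pg_encoding_mblen roughly
from memory (the wording in the tree may differ slightly):

int
pg_encoding_mblen(int encoding, const char *mbstr)
{
	/* fall back to SQL_ASCII's mblen if the encoding id is out of range */
	return (PG_VALID_ENCODING(encoding) ?
			pg_wchar_table[encoding].mblen((const unsigned char *) mbstr) :
			pg_wchar_table[PG_SQL_ASCII].mblen((const unsigned char *) mbstr));
}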

I think something is wrong with that implementation,
because Coverity reports many warnings about it.
Perhaps this is a mistake, but I'm not convinced yet.

1. One #ifdef contains a mistake: the correct macro is _WIN32, not WIN32.

(src/common/encnames.c)
#ifndef WIN32
#define DEF_ENC2NAME(name, codepage) { #name, PG_##name }
#else
#define DEF_ENC2NAME(name, codepage) { #name, PG_##name, codepage }
#endif

Why does this ifdef exist?

Or is this the correct definition?
#define DEF_ENC2NAME(name, codepage) { #name, PG_##name, codepage }
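
For context, the struct those initializers fill (pg_enc2name, in
src/include/mb/pg_wchar.h) only has the codepage member under the same
guard, so the macro and the struct have to stay in sync; quoting it from
memory:

typedef struct pg_enc2name
{
	const char *name;
	pg_enc		encoding;
#ifdef WIN32
	unsigned	codepage;	/* codepage for WIN32 */
#endif
} pg_enc2name;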

2. This code path:
#define DEF_ENC2NAME(name, codepage) { #name, PG_##name }

DEF_ENC2NAME(EUC_JP, 20932),

What happens if pg_encoding_max_length is called
with Database->encoding equal to 20932?
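
A minimal sketch of what I expect to happen there, assuming
PG_VALID_ENCODING is the usual range check against _PG_LAST_ENCODING_
(this is only an illustration, not part of the patch):

#include "postgres.h"
#include "mb/pg_wchar.h"

static void
bogus_encoding_sketch(void)
{
	/*
	 * 20932 is a Windows codepage number, not a pg_enc value, so
	 * PG_VALID_ENCODING(20932) is false.  With the guarded version
	 * quoted above, the call quietly falls back to
	 * pg_wchar_table[PG_SQL_ASCII].maxmblen; without the range check
	 * it would index pg_wchar_table far past its end.
	 */
	int			maxlen = pg_encoding_max_length(20932);

	(void) maxlen;		/* silence unused-variable warnings */
}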

Can you test v2 of the patch?

regards,

Ranier Vilela

Attachment: v2_fix_out_of_bounds_wchar.patch (application/octet-stream, 999 bytes)
