kbuild: treat char as always signed

Message ID 20221019162648.3557490-1-Jason@zx2c4.com
State New
Series kbuild: treat char as always signed

Commit Message

Jason A. Donenfeld Oct. 19, 2022, 4:26 p.m. UTC
  Recently, some compile-time checking I added to the clamp_t family of
functions triggered a build error when a poorly written driver was
compiled on ARM, because the driver assumed that the naked `char` type
is signed, but ARM treats it as unsigned, and the C standard says it's
architecture-dependent.

I doubt this particular driver is the only instance in which
unsuspecting authors assume that `char` with no `signed` or `unsigned`
designation is signed, because that's how the other types work. We were
lucky enough this time that that driver used `clamp_t(char,
negative_value, positive_value)`, so the new checking code found it, and
I've sent a patch to fix it, but there are likely other places lurking
that won't be so easily unearthed.

So let's just eliminate this particular variety of heisensigned bugs
entirely. Set `-fsigned-char` globally, so that gcc makes the type
signed on all architectures.

Cc: Masahiro Yamada <masahiroy@kernel.org>
Cc: Kees Cook <keescook@chromium.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Link: https://lore.kernel.org/lkml/202210190108.ESC3pc3D-lkp@intel.com/
Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com>
---
 Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
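
For concreteness, a minimal standalone illustration (not part of the patch
itself) of the architecture-dependent behaviour in question; building it
with -fsigned-char or -funsigned-char overrides the platform default:

        #include <stdio.h>
        #include <limits.h>

        int main(void)
        {
                /* CHAR_MIN is 0 exactly when plain char is unsigned */
                if (CHAR_MIN < 0)
                        printf("plain char is signed here (x86, ...)\n");
                else
                        printf("plain char is unsigned here (arm, s390, ...)\n");
                return 0;
        }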
  

Comments

Segher Boessenkool Oct. 19, 2022, 4:54 p.m. UTC | #1
On Wed, Oct 19, 2022 at 10:26:48AM -0600, Jason A. Donenfeld wrote:
> Recently, some compile-time checking I added to the clamp_t family of
> functions triggered a build error when a poorly written driver was
> compiled on ARM, because the driver assumed that the naked `char` type
> is signed, but ARM treats it as unsigned, and the C standard says it's
> architecture-dependent.

> So let's just eliminate this particular variety of heisensigned bugs
> entirely. Set `-fsigned-char` globally, so that gcc makes the type
> signed on all architectures.

This is an ABI change.  It is also hugely detrimental to generated
code quality on architectures that make the saner choice (that is, have
most instructions zero-extend byte quantities).

Instead, don't actively disable the compiler warnings that catch such
cases?  So start with removing footguns like

  # disable pointer signed / unsigned warnings in gcc 4.0
  KBUILD_CFLAGS += -Wno-pointer-sign


Segher
  
Linus Torvalds Oct. 19, 2022, 5:14 p.m. UTC | #2
On Wed, Oct 19, 2022 at 9:57 AM Segher Boessenkool
<segher@kernel.crashing.org> wrote:
>
> This is an ABI change.  It is also hugely detrimental to generated
> code quality on architectures that make the saner choice (that is, have
> most instructions zero-extend byte quantities).

Yeah, I agree. We should just accept the standard wording, and be
aware that 'char' has indeterminate signedness.

But:

> Instead, don't actively disable the compiler warnings that catch such
> cases?  So start with removing footguns like
>
>   # disable pointer signed / unsigned warnings in gcc 4.0
>   KBUILD_CFLAGS += -Wno-pointer-sign

Nope, that won't fly.

The pointer-sign thing doesn't actually help (ie it won't find places
where you actually compare a char), and it causes untold damage in
doing completely insane things.

For example, it suddenly starts warning if  you *are* being careful,
and explicitly use 'unsigned char array[]' things to avoid any sign
issues, and then want to do simple and straightforward things with
said array (like doing a 'strcmp()' on it).
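
As a concrete sketch of that situation (plain C plus the standard
strlen(); the helper name is made up), code that deliberately uses
'unsigned char' for its bytes trips the warning the moment it calls a
<string.h> function:

        #include <string.h>

        /* careful code: bytes are explicitly unsigned */
        size_t utf8_byte_len(const unsigned char *buf)
        {
                /* gcc -Wpointer-sign: pointer targets differ in signedness,
                 * even though strlen() only looks for the terminating NUL */
                return strlen(buf);
        }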

Seriously, -Wpointer-sign is not just useless, it's actively _evil_.
The fact that you suggest that clearly means that you've never used
it.

                      Linus
  
Linus Torvalds Oct. 19, 2022, 5:26 p.m. UTC | #3
On Wed, Oct 19, 2022 at 10:14 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> The pointer-sign thing doesn't actually help (ie it won't find places
> where you actually compare a char), and it causes untold damage in
> doing completely insane things.

Side note: several years ago I tried to make up some sane rules to
have 'sparse' actually be able to warn when a 'char' was used in a
context where the sign mattered.

I failed miserably.

You actually can see some signs (heh) of that in the sparse sources,
in that the type system actually has a bit for explicitly signed types
("MOD_EXPLICITLY_SIGNED"), but it ends up being almost entirely
unused.

That bit does still have one particular use: the "bitfield is
dubiously signed" thing where sparse will complain about bitfields
that are implicitly (but not explicitly) signed. Because people really
expect 'int a:1' to have values 0/1, not 0/-1.
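
A minimal illustration of that expectation (hypothetical struct, not
taken from the kernel); it typically prints "-1 1", and clang's
-Wbitfield-constant-conversion also complains about the '.a = 1':

        #include <stdio.h>

        struct flags {
                int a:1;                /* implicitly signed: holds 0 and -1 */
                unsigned int b:1;       /* holds 0 and 1 */
        };

        int main(void)
        {
                struct flags f = { .a = 1, .b = 1 };

                printf("%d %u\n", f.a, f.b);
                return 0;
        }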

But the original intent was to find code where people used a 'char'
that wasn't explicitly signed, and that then had architecture-defined
behavior.

I just could not come up with any even remotely sane warning
heuristics that didn't have a metric buttload of false positives.

I still have this feeling that it *should* be possible to warn about
the situation where you end up doing an implicit type widening (ie the
normal C "arithmetic is always done in at least 'int'") that then does
not get narrowed down again without the upper bits ever mattering.

But it needs somebody smarter than me, I'm afraid.

And the fact that I don't think any other compiler has that warning
either makes me just wonder if my feeling that it should be possible
is just wrong.

                   Linus
  
Segher Boessenkool Oct. 19, 2022, 5:43 p.m. UTC | #4
Hi!

On Wed, Oct 19, 2022 at 10:14:20AM -0700, Linus Torvalds wrote:
> On Wed, Oct 19, 2022 at 9:57 AM Segher Boessenkool
> <segher@kernel.crashing.org> wrote:
> >
> > This is an ABI change.  It is also hugely detrimental to generated
> > code quality on architectures that make the saner choice (that is, have
> > most instructions zero-extend byte quantities).
> 
> Yeah, I agree. We should just accept the standard wording, and be
> aware that 'char' has indeterminate signedness.

And plain "char" is a separate type from "signed char" and "unsigned
char" both.

> But:
> 
> > Instead, don't actively disable the compiler warnings that catch such
> > cases?  So start with removing footguns like
> >
> >   # disable pointer signed / unsigned warnings in gcc 4.0
> >   KBUILD_CFLAGS += -Wno-pointer-sign
> 
> Nope, that won't fly.
> 
> The pointer-sign thing doesn't actually help (ie it won't find places
> where you actually compare a char), and it causes untold damage in
> doing completely insane things.

When I did this more than a decade ago there indeed was a LOT of noise,
mostly caused by dubious code.  I do agree many cases detected are not
very important, but it also revealed cases where a filesystem's disk
format changed (atarifs or amigafs or such iirc) -- in many cases it is
annoying to be reminded of sloppy code, but in some cases it detects
crucial problems.

> Seriously, -Wpointer-sign is not just useless, it's actively _evil_.

Then suggest something better?  Or suggest improvements to the existing
warning?

This warning is part of -Wall, so most people must not have problems
with it (or people are so apathetic about this that they have not
complained about it).

It is easy to improve your code when the compiler detects problems like
this.  Of course after such a long time of lax code sanity enforcement
you get all warnings at once :-/

> The fact that you suggest that clearly means that you've never used
> it.

Ah, ad hominems.  Great.


Segher
  
Nick Desaulniers Oct. 19, 2022, 6:10 p.m. UTC | #5
On Wed, Oct 19, 2022 at 10:26 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Wed, Oct 19, 2022 at 10:14 AM Linus Torvalds
> <torvalds@linux-foundation.org> wrote:
> >
> > The pointer-sign thing doesn't actually help (ie it won't find places
> > where you actually compare a char), and it causes untold damage in
> > doing completely insane things.
>
> Side note: several years ago I tried to make up some sane rules to
> have 'sparse' actually be able to warn when a 'char' was used in a
> context where the sign mattered.

Do you have examples? Maybe we could turn this into a compiler feature
request.  Having prior art on the problem would be a boon.

>
> I failed miserably.
>
> You actually can see some signs (heh) of that in the sparse sources,
> in that the type system actually has a bit for explicitly signed types
> ("MOD_EXPLICITLY_SIGNED"), but it ends up being almost entirely
> unused.
>
> That bit does still have one particular use: the "bitfield is
> dubiously signed" thing where sparse will complain about bitfields
> that are implicitly (but not explicitly) signed. Because people really
> expect 'int a:1' to have values 0/1, not 0/-1.

Clang's -Wbitfield-constant-conversion can catch that.
commit 5c5c2baad2b5 ("ASoC: mchp-spdiftx: Fix clang
-Wbitfield-constant-conversion")
commit eab9100d9898 ("ASoC: mchp-spdiftx: Fix clang
-Wbitfield-constant-conversion")
commit 37209783c73a ("thunderbolt: Make priority unsigned in struct tb_path")

>
> But the original intent was to find code where people used a 'char'
> that wasn't explicitly signed, and that then had architecture-defined
> behavior.
>
> I just could not come up with any even remotely sane warning
> heuristics that didn't have a metric buttload of false positives.
>
> I still have this feeling that it *should* be possible to warn about
> the situation where you end up doing an implicit type widening (ie the
> normal C "arithmetic is always done in at least 'int'") that then does
> not get narrowed down again without the upper bits ever mattering.
>
> But it needs somebody smarter than me, I'm afraid.
>
> And the fact that I don't think any other compiler has that warning
> either makes me just wonder if my feeling that it should be possible
> is just wrong.
>
>                    Linus
  
Linus Torvalds Oct. 19, 2022, 6:11 p.m. UTC | #6
On Wed, Oct 19, 2022 at 10:45 AM Segher Boessenkool
<segher@kernel.crashing.org> wrote:
>
> When I did this more than a decade ago there indeed was a LOT of noise,
> mostly caused by dubious code.

It really happens with explicitly *not* dubious code.

Using 'unsigned char[]' is very common in code that actually does
anything where you care about the actual byte values. Things like
utf-8 handling, things like compression, lots and lots of cases.

But a number of those cases are still dealing with *strings*. UTF-8 is
still a perfectly valid C string format, and using 'strlen()' on a
buffer that contains UTF-8 is neither unusual nor wrong. It is still
the proper way to get the byte length of the thing. It's how UTF-8 is
literally designed.

And -Wpointer-sign will complain about that, unless you start doing
explicit casting, which is just a worse fix than the disease.

Explicit casts are bad (unless, of course, you are explicitly trying
to violate the type system, when they are both required, and a great
way to say "look, I'm doing something dangerous").

So people who say "just cast it", don't understand that casts *should*
be seen as "this code is doing something special, tread carefully". If
you just randomly add casts to shut up a warning, the casts become
normalized and don't raise the kind of warning signs that they
*should* raise.

And it's really annoying, because the code ends up using 'unsigned
char' exactly _because_ it's trying to be careful and explicit about
signs, and then the warning makes that carefully written code worse.

> Then suggest something better?  Or suggest improvements to the existing
> warning?

As I mentioned in the next email, I tried to come up with something
better in sparse, which wasn't based on the pointer type comparison,
but on the actual 'char' itself.

My (admittedly only ever half-implemented) thing actually worked fine
for the simple cases (where simplification would end up just undoing
all the "expand char to int" because the end use was just assigned to
another char, or it was masked for other reasons).

But while sparse does a lot of basic optimizations, it still left
enough "look, you're doing sign-extensions on a 'char'" on the table
that it warned about perfectly valid stuff.

And maybe that's fundamentally hard.

The "-Wpointer-sign" thing could probably be fairly easily improved,
by just recognizing that things like 'strlen()' and friends do not
care about the sign of 'char', and neither does a 'strcmp()' that only
checks for equality (but if you check the *sign* of strcmp, it does
matter).

It's been some time since I last tried it, but at least from memory,
it really was mostly the standard C string functions that caused
almost all problems.  Your *own* functions you can just make sure the
signedness is right, but it's really really annoying when you try to
be careful about the byte signs, and the compiler starts complaining
just because you want to use the bog-standard 'strlen()' function.

And no, something like 'ustrlen()' with a hidden cast is just noise
for a warning that really shouldn't exist.

So some way to say 'this function really doesn't care about the sign
of this pointer' (and having the compiler know that for the string
functions it already knows about anyway) would probably make almost
all problems with -Wpointer-sign go away.

Put another way: 'char *' is so fundamental and inherent in C, that
you can't just warn when people use it in contexts where sign really
doesn't matter.

                 Linus
  
Nick Desaulniers Oct. 19, 2022, 6:20 p.m. UTC | #7
On Wed, Oct 19, 2022 at 11:11 AM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> But while sparse does a lot of basic optimizations, it still left
> enough "look, you're doing sign-extensions on a 'char'" on the table
> that it warned about perfectly valid stuff.
>
> And maybe that's fundamentally hard.
>
> The "-Wpointer-sign" thing could probably be fairly easily improved,
> by just recognizing that things like 'strlen()' and friends do not
> care about the sign of 'char', and neither does a 'strcmp()' that only
> checks for equality (but if you check the *sign* of strcmp, it does
> matter).
>
> It's been some time since I last tried it, but at least from memory,
> it really was mostly the standard C string functions that caused
> almost all problems.  Your *own* functions you can just make sure the
> signedness is right, but it's really really annoying when you try to
> be careful about the byte signs, and the compiler starts complaining
> just because you want to use the bog-standard 'strlen()' function.
>
> And no, something like 'ustrlen()' with a hidden cast is just noise
> for a warning that really shouldn't exist.
>
> So some way to say 'this function really doesn't care about the sign
> of this pointer' (and having the compiler know that for the string
> functions it already knows about anyway) would probably make almost
> all problems with -Wsign-warning go away.
>
> Put another way: 'char *' is so fundamental and inherent in C, that
> you can't just warn when people use it in contexts where sign really
> doesn't matter.

A few times in the past, we've split a warning flag into a group so
that we could be more specific about distinct cases. Perhaps if
-Wpointer-sign was a group that implied -Wpointer-char-sign, then the
kernel could use -Wpointer-sign -Wno-pointer-char-sign.

I don't know if that's the right granularity though.
  
Linus Torvalds Oct. 19, 2022, 6:35 p.m. UTC | #8
On Wed, Oct 19, 2022 at 11:10 AM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> On Wed, Oct 19, 2022 at 10:26 AM Linus Torvalds
> >
> > Side note: several years ago I tried to make up some sane rules to
> > have 'sparse' actually be able to warn when a 'char' was used in a
> > context where the sign mattered.
>
> Do you have examples? Maybe we could turn this into a compiler feature
> request.  Having prior art on the problem would be a boon.

It's been over a decade since I seriously worked on sparse (Hmm.
Probably two, actually).  And I never got the 'char' logic to work
well enough for it to have ever made it into the kernel.

I'm also fairly sure I did it wrong - if I recall correctly, I did it
on the type system level, and the logic was in the tree
simplification, which was always much too weak.

Sparse does *some* expression simplification as it builds up the parse
tree and does all the type evaluations ("evaluate.c" in sparse), but
most of the real optimization work is done on the SSA format.

So what I probably *should* have done was to have a special "BEXT"
opcode (for "byte extend", the same way sparse has ZEXT and SEXT for
well-defined integer zero extend and sign extend), and linearized it
with all the simplifications that we do on the SSA level, and then if
the BEXT opcode still exists after all our optimization work, we'd
warn about it, because that means that the signedness ends up
mattering.

But sparse originally did almost everything just based on the type
system, which was the original intent of sparse (ie the whole "extend
the pointer types to have different address spaces" was really what
sparse was all about).

> Clang's -Wbitfield-constant-conversion can catch that.

Yeah, so bitfield signedness is really trivial, and works all on the
type system.

It's very easy to say: "you defined this member as an implicitly
signed bitfield, did you *really* mean to do that?" because signed
bitfields simply do not exist in the kernel.

So that warning is trivial, and the fix is basically universally
change 'int a:1' to 'unsigned a:1', because even *if* you do want
signed bitfields, it's just better to make that very very explicit,
and write it as 'signed int x:10'.

We do have a couple of signed bitfields in the kernel, but they are
unusual enough that it's actually a good thing that sparse just made
people be explicit about it.

Do

        git grep '\<signed\>.*:[1-9]'

to see the (few) examples and a few false positives that trigger in
the trace docs.

So sparse doesn't actually have to be clever about bitfield signs. It
can literally just say "did you really mean to do that", and that's
it. Very simple. Not at all the complexity that 'char' has, where
every single use technically tends to cause a sign-extension (due to
the integer conversion), but that doesn't mean that it *matters* in
the end.

            Linus
  
Linus Torvalds Oct. 19, 2022, 6:56 p.m. UTC | #9
On Wed, Oct 19, 2022 at 11:21 AM Nick Desaulniers
<ndesaulniers@google.com> wrote:
>
> A few times in the past, we've split a warning flag into a group so
> that we could be more specific about distinct cases. Perhaps if
> -Wpointer-sign was a group that implied -Wpointer-char-sign, then the
> kernel could use -Wpointer-sign -Wno-pointer-char-sign.

That might be interesting, just to see how much of the kernel is about
'char *' and how much is other noise.

Just for fun (for some definition of "fun") I tried to remove the
-Wno-pointer-sign thing, and started building a kernel.

After fixing fortify-string.h to not complain (which was indeed about
strlen() signedness), it turns out a lot were still about 'char', but
not necessarily the <string.h> functions.

We use 'unsigned char *' for our dentry data, for example, and then you get

     warning: pointer targets in initialization of ‘const unsigned
char *’ from ‘char *’ differ in signedness

when you do something like

    QSTR_INIT(NULL_FILE_NAME,

which is simply doing a regular initializer assignment, and wants to
assign a constant string (in this case the constant string "null") to
that "const unsigned char *name".

That's certainly another example of "why the heck did the compiler
warn about that thing".

You can literally try to compile this one-liner with gcc:

     const unsigned char *c = "p";

and it will complain. What a hugely pointless warning.

BUT.

It turns out we have a lot of non-char warnings too.

The kernel does all these "generic functions" that are based on size, like

        atomic_try_cmpxchg_acquire()

which are basically defined to be about "int sized object", but with
unspecified sign.

And the sign is basically pointless. Some people want "unsigned int",
others might want a signed int.
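
A sketch of the kind of caller that produces those warnings (the helper
is hypothetical; atomic_try_cmpxchg_acquire() itself is declared with an
'int *old' parameter):

        #include <linux/atomic.h>
        #include <linux/types.h>

        static atomic_t refs = ATOMIC_INIT(1);

        /* hypothetical "take a reference unless it already hit zero" helper */
        static bool try_get_ref(void)
        {
                unsigned int old = atomic_read(&refs);

                do {
                        if (!old)
                                return false;
                        /* '&old' is 'unsigned int *', the API wants 'int *':
                         * exactly the mismatch -Wpointer-sign complains about */
                } while (!atomic_try_cmpxchg_acquire(&refs, &old, old + 1));

                return true;
        }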

So from a quick grep, we do have a lot of strlen/strcpy cases, but we
also do have a lot of other cases.

Hundreds and hundreds of that atomic_try_cmpxchg_acquire(), for
example. And they might be trivial to fix (it might be similar to the
fortify-string.h one where it's just a header file that generates most
of them in one single place), but with all the ones that are just
clearly the compiler being silly, they aren't really even worth
looking at.

                    Linus
  
Kees Cook Oct. 19, 2022, 7:11 p.m. UTC | #10
On Wed, Oct 19, 2022 at 11:56:00AM -0700, Linus Torvalds wrote:
> Hundreds and hundreds of that atomic_try_cmpxchg_acquire(), for
> example. And they might be trivial to fix (it might be similar to the
> fortify-string.h one where it's just a header file that generates most
> of them in one single place), but with all the ones that are just
> clearly the compiler being silly, they aren't really even worth
> looking at.

Yeah, I've had to fight these casts in fortify-string.h from time to
time. I'd love to see the patch you used -- I bet it would keep future
problems at bay.
  
Andy Shevchenko Oct. 19, 2022, 7:23 p.m. UTC | #11
On Wed, Oct 19, 2022 at 11:35:50AM -0700, Linus Torvalds wrote:
> On Wed, Oct 19, 2022 at 11:10 AM Nick Desaulniers
> <ndesaulniers@google.com> wrote:

...

> We do have a couple of signed bitfields in the kernel, but they are
> unusual enough that it's actually a good thing that sparse just made
> people be explicit about it.

At least drivers/media/usb/msi2500/msi2500.c:289 can be converted
to use sign_extend32() I believe.
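
For reference, the two spellings look roughly like this (a 14-bit field
width is assumed here purely for illustration; sign_extend32() takes the
bit index of the sign bit as its second argument):

        #include <linux/bitops.h>
        #include <linux/types.h>

        /* the bitfield trick: reading the member back sign-extends it */
        static int decode_bitfield(u32 raw)
        {
                struct { signed int x:14; } s = { .x = raw };

                return s.x;
        }

        /* the same result with the helper: bit 13 is the sign bit */
        static int decode_helper(u32 raw)
        {
                return sign_extend32(raw, 13);
        }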
  
Linus Torvalds Oct. 19, 2022, 7:30 p.m. UTC | #12
On Wed, Oct 19, 2022 at 12:11 PM Kees Cook <keescook@chromium.org> wrote:
> Yeah, I've had to fight these casts in fortify-string.h from time to
> time. I'd love to see the patch you used -- I bet it would keep future
> problems at bay.

Heh. The fortify-source patch was just literally

  -       unsigned char *__p = (unsigned char *)(p);              \
  +       char *__p = (char *)(p);                                \

in __compiletime_strlen(), just to make the later

        __ret = __builtin_strlen(__p);

happy.

I didn't see any reason that code was using 'unsigned char *', but I
didn't look very closely.

But honestly, while fixing that was just a local thing, a lot of other
cases most definitely weren't.

The crypto code uses 'unsigned char *' a lot - which makes a lot of
sense, since the crypto code really does work basically with a "byte
array", and 'unsigned char *' tends to really be a good way to do
that.

But then a lot of the *users* of the crypto code may have other ideas,
ie they may have strings as the source, where 'char *' is a lot more
natural.

And as mentioned, some of it really is just fairly fundamental
compiler confusion. The fact that you can't use a regular string
literals with 'unsigned char' is just crazy. There's no *advantage* to
that, it's literally just an annoyance.

(And yes, there's u8"hello world", and yes, that's actually
"unsigned char" compatible as of C23, but not because the 'u' is
'unsigned', but because the 'u8' stands for 'utf8', and it seems that
the C standard people finally decided that 'unsigned char[]' was the
right type for UTF8. But in C11, it's actually still just 'char *',
and I think that you get that completely broken sign warning unless
you do an explicit cast).

No sane person should think that any of this is reasonable, and C23
actually makes things *WORSE* - not because C23 made the right choice,
but because it just makes the whole signedness even messier.

IOW, signedness in C is such a mess that -Wpointer-sign is actively
detrimental as things are right now. And look above - it's not even
getting better, it's just getting even more confusing and odd.

              Linus
  
Linus Torvalds Oct. 19, 2022, 7:36 p.m. UTC | #13
On Wed, Oct 19, 2022 at 12:23 PM Andy Shevchenko
<andriy.shevchenko@linux.intel.com> wrote:
>
> > We do have a couple of signed bitfields in the kernel, but they are
> > unusual enough that it's actually a good thing that sparse just made
> > people be explicit about it.
>
> At least drivers/media/usb/msi2500/msi2500.c:289 can be converted
> to use sign_extend32() I believe.

Heh. I didn't even look at that one - I did check that yeah, the MIPS
ones made sense (I say "ones", because while my grep pattern only
finds one, there are several others that have spacing that just made
my grep miss them).

You're right, that msi2500 use is a very odd use of bitfields for just
sign extension.

That's hilariously odd code, but not exactly wrong. And using "signed
int x:14" does make it very explicit that the bitfield wants that
sign.

And that code does actually have a fair number of comments to explain
each step, so I think it's all ok. Strange, but ok.

                  Linus
  
Linus Torvalds Oct. 19, 2022, 7:54 p.m. UTC | #14
On Wed, Oct 19, 2022 at 9:27 AM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>
> So let's just eliminate this particular variety of heisensigned bugs
> entirely. Set `-fsigned-char` globally, so that gcc makes the type
> signed on all architectures.

Btw, I do wonder if we might actually be better off doing this - but
doing it the other way around.

IOW, make 'char' always UNsigned. Unlike the signed char thing, it
shouldn't generate any worse code on any common architecture.

And I do think that having odd architecture differences is generally a
bad idea, and making the language rules stricter to avoid differences
is a good thing.

Now, you did '-fsigned-char', because that's the "common default" in
an x86-centric world.

You are also right that people might think that "char" works like
"int", and that if you don't specify the sign, it's signed.

But those people are obviously wrong anyway, so it's not a very strong argument.

And from a kernel perspective, I do think that "treat char as a byte"
and making it be unsigned is in many ways the saner model. There's a
reason we use 'unsigned char' in a fair number of places.

So using '-funsigned-char' might not be a bad idea.

Hilariously (and by "hilariously", I obviously mean "NOT
hilariously"), it doesn't actually fix the warning for

   const unsigned char *c = "p";

which still complains about

   warning: pointer targets in initialization of ‘const unsigned char
*’ from ‘char *’ differ in signedness

even when you've specified that 'char' should be unsigned with -funsigned-char.

Because gcc actually tries to be helpful, and has (reasonably, from a
"type sanity" standpoint) decided that

   "The type char is always a distinct type from each of signed char
or unsigned char, even though its behavior is always just like one of
those two"

so using "-funsigned-char" gives us well-defined *behavior*, but
doesn't really help us with cleaning up our code.

I understand why gcc would want to make it clear that despite any
behavioral issues, "char" is *not* the same as "[un]signed char" in
general. But in this kind of use case, that warning is just pointless
and annoying.

Oh well. You *really* can't win this thing. The game is rigged like
some geeky carnival game.

              Linus
  
Segher Boessenkool Oct. 19, 2022, 8:15 p.m. UTC | #15
On Wed, Oct 19, 2022 at 11:56:00AM -0700, Linus Torvalds wrote:
> After fixing fortify-string.h to not complain (which was indeed about
> strlen() signedness), it turns out a lot were still about 'char', but
> not necessarily the <string.h> functions.
> 
> We use 'unsigned char *' for our dentry data, for example, and then you get
> 
>      warning: pointer targets in initialization of ‘const unsigned
> char *’ from ‘char *’ differ in signedness
> 
> when you do something like
> 
>     QSTR_INIT(NULL_FILE_NAME,
> 
> which is simply doing a regular initializer assignment, and wants to
> assign a constant string (in this case the constant string "null") to
> that "const unsigned char *name".

It cannot see that all users of this are okay with ignoring the
difference.

> That's certainly another example of "why the heck did the compiler
> warn about that thing".

Because this is a simple warning.  It did exactly what it is supposed
to -- you are mixing "char" and "unsigned char" here, and in some cases
that matters hugely.

> You can literally try to compile this one-liner with gcc:
> 
>      const unsigned char *c = "p";
> 
> and it will complain. What a hugely pointless warning.

Yes, there are corner cases like this.  Please open a PR if you want
this fixed.

It is UB to (try to) modify string literals (since they can be shared
for example), but still they have type "array of (plain) char".  This is
historical :-/


Segher
  
Jason A. Donenfeld Oct. 19, 2022, 8:23 p.m. UTC | #16
Hi Linus,

On Wed, Oct 19, 2022 at 12:54:06PM -0700, Linus Torvalds wrote:
> On Wed, Oct 19, 2022 at 9:27 AM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> >
> > So let's just eliminate this particular variety of heisensigned bugs
> > entirely. Set `-fsigned-char` globally, so that gcc makes the type
> > signed on all architectures.
> 
> Btw, I do wonder if we might actually be better off doing this - but
> doing it the other way around.

That could work. The argument here would be that most people indeed
treat char as a byte. I'll send a v2 doing this.

This will probably break some things, though, on drivers that are
already broken on e.g. ARM. For example, the wifi driver I fixed that
started this whole thing would now be broken on x86 too. But also, we're
barely past rc1, so maybe this is something to do now, and then we'll
spend the rest of the 6.1 cycle fixing issues as they pop up? Sounds fine
to me if you think that's doable.

Either way, I'll send a patch and you can do with it what you think is
best.

Jason
  
Jason A. Donenfeld Oct. 19, 2022, 8:35 p.m. UTC | #17
On Wed, Oct 19, 2022 at 12:30:59PM -0700, Linus Torvalds wrote:
> The crypto code uses 'unsigned char *' a lot - which makes a lot of
> sense, since the crypto code really does work basically with a "byte
> array", and 'unsigned char *' tends to really be a good way to do
> that.

I wish folks would use `u8 *` when they mean "byte array".

Maybe the attitude should just be -- use u8 for bytes, s8 for signed
bytes, and char for characters/strings. Declare any use of char for
something non-stringy forbidden, and call it a day. Yes, obviously u8
and s8 are just typedefs, but they're a lot more explicit of intent.

Jason
  
David Laight Oct. 19, 2022, 8:58 p.m. UTC | #18
From: Linus Torvalds
> Sent: 19 October 2022 20:54
> 
> On Wed, Oct 19, 2022 at 9:27 AM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> >
> > So let's just eliminate this particular variety of heisensigned bugs
> > entirely. Set `-fsigned-char` globally, so that gcc makes the type
> > signed on all architectures.
> 
> Btw, I do wonder if we might actually be better off doing this - but
> doing it the other way around.
> 
> IOW, make 'char' always UNsigned. Unlike the signed char thing, it
> shouldn't generate any worse code on any common architecture.
> 
> And I do think that having odd architecture differences is generally a
> bad idea, and making the language rules stricter to avoid differences
> is a good thing.
> 
> Now, you did '-fsigned-char', because that's the "common default" in
> an x86-centric world.

I'm pretty sure char is signed because the pdp11 only had
sign-extending byte loads.

> You are also right that people might think that "char" works like
> "int", and that if you don't specify the sign, it's signed.

But even 'unsigned char' works like int.
The values are promoted to int (thanks to the brain-dead ANSI-C
committee) rather than unsigned int (which I think was in K&R C).
(There is an exception: int, short and char can all be the same size,
in which case unsigned char promotes to unsigned int.)
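
A small userspace illustration of that promotion rule:

        #include <stdio.h>

        int main(void)
        {
                unsigned char a = 200, b = 100;

                /* both operands are promoted to int, so no 8-bit wraparound */
                printf("%d\n", a + b);  /* prints 300 */

                /* and the arithmetic is signed: this prints -100 */
                printf("%d\n", b - a);
                return 0;
        }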

	David

  
David Laight Oct. 19, 2022, 9:07 p.m. UTC | #19
From: Linus Torvalds
> Sent: 19 October 2022 19:11
...
> Explicit casts are bad (unless, of course, you are explicitly trying
> to violate the type system, when they are both required, and a great
> way to say "look, I'm doing something dangerous").

The worst ones in the kernel are the __force ones for sparse.
They really ought to be a function (#define) so that they
are not seen by the compiler at all.
Otherwise they can hide a multitude of sins.

There are also the casts to convert integer values to/from unsigned,
and to different sized integers.
They all happen far too often and can hide things.
A '+ 0u' will convert an int to unsigned int without a cast.
Casts really ought to be rare.
Even the casts to/from (void *) (for 'buffers') can usually be
made implicit in a function call argument.
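
A sketch of that '+ 0u' idiom (the helper is made up, it only exists to
show the conversion):

        /* the usual arithmetic conversions make 'delta + 0u' unsigned int,
         * with no cast anywhere in sight */
        static unsigned int to_unsigned(int delta)
        {
                return delta + 0u;
        }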

	David

  
Segher Boessenkool Oct. 19, 2022, 9:26 p.m. UTC | #20
On Wed, Oct 19, 2022 at 09:07:01PM +0000, David Laight wrote:
> From: Linus Torvalds
> > Sent: 19 October 2022 19:11
> > Explicit casts are bad (unless, of course, you are explicitly trying
> > to violate the type system, when they are both required, and a great
> > way to say "look, I'm doing something dangerous").

> Casts really ought to be rare.

Sometimes you need casts for *data*, like where you write  (u32)smth
because you really want the low 32 bits of that something.  That only
happens in some kinds of code -- multi-precision integer, some crypto,
serialisation primitives.
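
A sketch of that kind of data cast (hypothetical helper, kernel-style
types):

        #include <linux/types.h>

        /* e.g. filling two 32-bit descriptor words from one 64-bit value */
        static void split_u64(u64 v, u32 *lo, u32 *hi)
        {
                *lo = (u32)v;           /* deliberately keep only bits 0..31 */
                *hi = (u32)(v >> 32);   /* bits 32..63 */
        }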

You often want casts for varargs, too.  The alternative is to make very
certain some other way that the actual arguments will have the correct
type, but that is often awkward to do, and not as clear to read.

Pointer casts are almost always a mistake.  If you think you want one
you are almost always wrong.


Segher
  
Linus Torvalds Oct. 20, 2022, 12:10 a.m. UTC | #21
On Wed, Oct 19, 2022 at 1:35 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>
> I wish folks would use `u8 *` when they mean "byte array".

Together with '-funsigned-char', we could typedef 'u8' to just 'char'
(just for __KERNEL__ code, though!), and then we really could just use
'strlen()' and friends on said kind of arrays without any warnings.

But we do have a *lot* of 'unsigned char' users, so it would be a huge
amount of churn to do this kind of thing.

And as mentioned, right now we definitely have a lot of other "ignore
sign" code.

Much of it is probably simply because we haven't been able to ever use
that warning flag, so it's just accumulated and might be trivial to
fix. But I wouldn't be surprised at all if some of it ends up somewhat
fundamental.

              Linus
  
Jason A. Donenfeld Oct. 20, 2022, 3:11 a.m. UTC | #22
On Wed, Oct 19, 2022 at 6:11 PM Linus Torvalds
<torvalds@linux-foundation.org> wrote:
>
> On Wed, Oct 19, 2022 at 1:35 PM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
> >
> > I wish folks would use `u8 *` when they mean "byte array".
>
> Together with '-funsigned-char', we could typedef 'u8' to just 'char'
> (just for __KERNEL__ code, though!), and then we really could just use
> 'strlen()' and friends on said kind of arrays without any warnings.
>
> But we do have a *lot* of 'unsigned char' users, so it would be a huge
> amount of churn to do this kind of thing.

I think, though, there's an argument to be made that every use of
`unsigned char` is much better off as a `u8`. We don't have any C23
fancy unicode strings. As far as I can tell, the only usage of
`unsigned char` ought to be "treat this as a byte array", and that's
what u8 is for. Yea, that'd be churn. But technically, it wouldn't
really be difficult churn: If naive-sed mangles that, I'm sure
Coccinelle would be up to the task. If you think that's a wise
direction, I can play with it and see how miserable it is to do.

(As a sidebar, Sultan and I were discussing today... I find the
radical extension of this idea to its logical end somewhat attractive:
exclusively using u64, s64, u32, s32, u16, s16, u8, s8, uword (native
size), sword (native size), char (string/character). It'd hardly look
like C any more, though, and the very mention of the idea is probably
triggering for some. So I'm not actually suggesting we do that in
earnest. But there is some appeal.)

Jason
  
kernel test robot Oct. 20, 2022, 8:39 a.m. UTC | #23
Hi Jason,

I love your patch! Perhaps something to improve:

[auto build test WARNING on masahiroy-kbuild/for-next]
[also build test WARNING on linus/master v6.1-rc1]
[cannot apply to next-20221020]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Jason-A-Donenfeld/kbuild-treat-char-as-always-signed/20221020-113408
base:   https://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild.git for-next
patch link:    https://lore.kernel.org/r/20221019162648.3557490-1-Jason%40zx2c4.com
patch subject: [PATCH] kbuild: treat char as always signed
config: s390-allmodconfig
compiler: s390-linux-gcc (GCC) 12.1.0
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # https://github.com/intel-lab-lkp/linux/commit/574c38568c2c4a092070a42616218871cf6b7ff9
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Jason-A-Donenfeld/kbuild-treat-char-as-always-signed/20221020-113408
        git checkout 574c38568c2c4a092070a42616218871cf6b7ff9
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross W=1 O=build_dir ARCH=s390 SHELL=/bin/bash drivers/s390/block/ drivers/s390/char/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All warnings (new ones prefixed by >>):

   drivers/s390/block/dasd.c: In function '__dasd_process_cqr':
>> drivers/s390/block/dasd.c:1912:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    1912 |         case DASD_CQR_ERROR:
         |         ^~~~
   drivers/s390/block/dasd.c:1915:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    1915 |         case DASD_CQR_CLEARED:
         |         ^~~~
   drivers/s390/block/dasd.c:1909:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    1909 |         case DASD_CQR_SUCCESS:
         |         ^~~~
   drivers/s390/block/dasd.c: In function 'dasd_flush_device_queue':
   drivers/s390/block/dasd.c:2113:17: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    2113 |                 case DASD_CQR_QUEUED:
         |                 ^~~~
   drivers/s390/block/dasd.c:2102:17: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    2102 |                 case DASD_CQR_IN_IO:
         |                 ^~~~
   drivers/s390/block/dasd.c: In function '__dasd_cancel_req':
   drivers/s390/block/dasd.c:2615:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    2615 |         case DASD_CQR_QUEUED:
         |         ^~~~
   drivers/s390/block/dasd.c:2619:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
    2619 |         case DASD_CQR_IN_IO:
         |         ^~~~
--
   drivers/s390/block/dasd_3990_erp.c: In function 'dasd_3990_handle_env_data':
>> drivers/s390/block/dasd_3990_erp.c:861:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
     861 |         case 0x80:  /* Format 8 - Additional Device Equipment Checks */
         |         ^~~~
   drivers/s390/block/dasd_3990_erp.c:914:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
     914 |         case 0x90:  /* Format 9 - Device Read, Write, and Seek Checks */
         |         ^~~~
   drivers/s390/block/dasd_3990_erp.c:943:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
     943 |         case 0xF0:              /* Format F - Cache Storage Checks */
         |         ^~~~
--
   drivers/s390/char/sclp_vt220.c: In function 'sclp_vt220_receiver_fn':
>> drivers/s390/char/sclp_vt220.c:536:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
     536 |         case SCLP_VT220_SESSION_STARTED:
         |         ^~~~


vim +1912 drivers/s390/block/dasd.c

^1da177e4c3f41 Linus Torvalds     2005-04-16  1902  
5c618c0cf451f1 Sebastian Ott      2018-05-24  1903  static void __dasd_process_cqr(struct dasd_device *device,
5c618c0cf451f1 Sebastian Ott      2018-05-24  1904  			       struct dasd_ccw_req *cqr)
^1da177e4c3f41 Linus Torvalds     2005-04-16  1905  {
fc19f381b3828a Stefan Haberland   2009-03-26  1906  	char errorstring[ERRORLENGTH];
f24acd4503270e Horst Hummel       2005-05-01  1907  
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1908  	switch (cqr->status) {
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1909  	case DASD_CQR_SUCCESS:
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1910  		cqr->status = DASD_CQR_DONE;
d54853ef8cb172 Martin Schwidefsky 2007-02-05  1911  		break;
8e09f21574ea30 Stefan Weinhuber   2008-01-26 @1912  	case DASD_CQR_ERROR:
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1913  		cqr->status = DASD_CQR_NEED_ERP;
d54853ef8cb172 Martin Schwidefsky 2007-02-05  1914  		break;
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1915  	case DASD_CQR_CLEARED:
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1916  		cqr->status = DASD_CQR_TERMINATED;
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1917  		break;
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1918  	default:
fc19f381b3828a Stefan Haberland   2009-03-26  1919  		/* internal error 12 - wrong cqr status*/
fc19f381b3828a Stefan Haberland   2009-03-26  1920  		snprintf(errorstring, ERRORLENGTH, "12 %p %x02", cqr, cqr->status);
fc19f381b3828a Stefan Haberland   2009-03-26  1921  		dev_err(&device->cdev->dev,
fc19f381b3828a Stefan Haberland   2009-03-26  1922  			"An error occurred in the DASD device driver, "
fc19f381b3828a Stefan Haberland   2009-03-26  1923  			"reason=%s\n", errorstring);
8e09f21574ea30 Stefan Weinhuber   2008-01-26  1924  		BUG();
d54853ef8cb172 Martin Schwidefsky 2007-02-05  1925  	}
5c618c0cf451f1 Sebastian Ott      2018-05-24  1926  	if (cqr->callback)
5c618c0cf451f1 Sebastian Ott      2018-05-24  1927  		cqr->callback(cqr, cqr->callback_data);
5c618c0cf451f1 Sebastian Ott      2018-05-24  1928  }
5c618c0cf451f1 Sebastian Ott      2018-05-24  1929
  
Gabriel Paubert Oct. 20, 2022, 10:41 a.m. UTC | #24
On Wed, Oct 19, 2022 at 11:11:16AM -0700, Linus Torvalds wrote:
> On Wed, Oct 19, 2022 at 10:45 AM Segher Boessenkool
> <segher@kernel.crashing.org> wrote:
> >
> > When I did this more than a decade ago there indeed was a LOT of noise,
> > mostly caused by dubious code.
> 
> It really happens with explicitly *not* dubious code.

Indeed.

[snip]
> The "-Wpointer-sign" thing could probably be fairly easily improved,
> by just recognizing that things like 'strlen()' and friends do not
> care about the sign of 'char', and neither does a 'strcmp()' that only
> checks for equality (but if you check the *sign* of strcmp, it does
> matter).

I must be missing something; the strcmp man page says:

"The comparison is done using unsigned characters."

But it's not for this that I wrote this message. Has anybody considered
using transparent unions?

They've been heavily used by userland networking code to pass pointers to
sockets, and they work reasonably well in that context IMHO.

So a very wild idea might to make string handling functions accept
transparent union of "char *" and "unsigned char *".

I've not even tried to write any code in this direction, so it's very
likely that this idea won't fly, and it clearly does not solve all
problems. It also probably needs a lot of surgery to avoid clashing with
GCC builtins, and it would unfortunately lose some optimizations.

	Gabriel

> 
> It's been some time since I last tried it, but at least from memory,
> it really was mostly the standard C string functions that caused
> almost all problems.  Your *own* functions you can just make sure the
> signedness is right, but it's really really annoying when you try to
> be careful about the byte signs, and the compiler starts complaining
> just because you want to use the bog-standard 'strlen()' function.
> 
> And no, something like 'ustrlen()' with a hidden cast is just noise
> for a warning that really shouldn't exist.
> 
> So some way to say 'this function really doesn't care about the sign
> of this pointer' (and having the compiler know that for the string
> functions it already knows about anyway) would probably make almost
> all problems with -Wpointer-sign go away.
> 
> Put another way: 'char *' is so fundamental and inherent in C, that
> you can't just warn when people use it in contexts where sign really
> doesn't matter.
> 
>                  Linus
  
Jason A. Donenfeld Oct. 20, 2022, 4:33 p.m. UTC | #25
On Thu, Oct 20, 2022 at 2:40 AM kernel test robot <lkp@intel.com> wrote:
> >> drivers/s390/block/dasd.c:1912:9: warning: case label value exceeds maximum value for type [-Wswitch-outside-range]
>     1912 |         case DASD_CQR_ERROR:

Just to save other readers the momentary "huh?" that I experienced,
this warning/error is from the -fsigned-char patch. We ultimately went
with (or are trying to go with) the -funsigned-char approach instead.
So safely ignore this kernel test bot error, as it applies to v1
rather than the v2 here:
https://lore.kernel.org/lkml/20221019203034.3795710-1-Jason@zx2c4.com/

Jason
  
Linus Torvalds Oct. 21, 2022, 10:46 p.m. UTC | #26
On Thu, Oct 20, 2022 at 3:41 AM Gabriel Paubert <paubert@iram.es> wrote:
>
> I must be missing something; the strcmp man page says:
>
> "The comparison is done using unsigned characters."

You're not missing anything, I just hadn't looked at strcmp() in forever.

Yeah, strcmp clearly doesn't care about the signedness of 'char', and
arguably an unsigned char argument makes more sense considering the
semantics of the function.

> But it's not for this that I wrote this message. Has anybody considered
> using transparent unions?

I don't love the transparent union-as-argument syntax, but you're
right, that would fix the warning.

Except it then doesn't actually *work* very well.

Try this:

        #include <sys/types.h>

        #if USE_UNION
        typedef union {
                const char *a;
                const signed char *b;
                const unsigned char *c;
        } conststring_arg __attribute__ ((__transparent_union__));
        size_t strlen(conststring_arg);
        #else
        size_t strlen(const char *);
        #endif

        int test(char *a, unsigned char *b)
        {
                return strlen(a)+strlen(b);
        }

        int test2(void)
        {
                return strlen("hello");
        }

and now compile it both ways with

        gcc -DUSE_UNION -Wall -O2 -S t.c
        gcc -Wall -O2 -S t.c

and notice how yes, the "-DUSE_UNION" one silences the warning about
using 'unsigned char *' for strlen. So it seems to work fine.

But then look at the code it generates for 'test2()' in the two cases.

The transparent union version actually generates a function call to an
external 'strlen()' function.

The regular version uses the compiler builtin, and just compiles
test2() to return the constant value 5.

So playing games with anonymous union arguments ends up also disabling
all the compiler optimizations we do want, because apparently gcc then
decides "ok, I'm not going to warn about you declaring this
differently, but I'm also not going to use the regular one because you
declared it differently".

This, btw, is also the reason why we don't use -ffreestanding in the
kernel. We do want the basic <string.h> things to just DTRT.

For the sockaddr_in games, the above isn't an issue. For strlen() and
friends, it very much is.

                       Linus
  
Gabriel Paubert Oct. 22, 2022, 6:06 a.m. UTC | #27
On Fri, Oct 21, 2022 at 03:46:01PM -0700, Linus Torvalds wrote:
> On Thu, Oct 20, 2022 at 3:41 AM Gabriel Paubert <paubert@iram.es> wrote:
> >
> > I must be missing something; the strcmp man page says:
> >
> > "The comparison is done using unsigned characters."
> 
> You're not missing anything, I just hadn't looked at strcmp() in forever.
> 
> Yeah, strcmp clearly doesn't care about the signedness of 'char', and
> arguably an unsigned char argument makes more sense considering the
> semantics of the funmction.
> 
> > But it's not for this that I wrote this message. Has anybody considered
> > using transparent unions?
> 
> I don't love the transparent union-as-argument syntax, but you're
> right, that would fix the warning.

I'm not in love with the syntax either.

> 
> Except it then doesn't actually *work* very well.
> 
> Try this:
> 
>         #include <sys/types.h>
> 
>         #if USE_UNION
>         typedef union {
>                 const char *a;
>                 const signed char *b;
>                 const unsigned char *c;
>         } conststring_arg __attribute__ ((__transparent_union__));
>         size_t strlen(conststring_arg);
>         #else
>         size_t strlen(const char *);
>         #endif
> 
>         int test(char *a, unsigned char *b)
>         {
>                 return strlen(a)+strlen(b);
>         }
> 
>         int test2(void)
>         {
>                 return strlen("hello");
>         }
> 
> and now compile it both ways with
> 
>         gcc -DUSE_UNION -Wall -O2 -S t.c
>         gcc -Wall -O2 -S t.c
> 

Ok, I've just tried it, except that I had something slightly different in
mind, but perhaps should have been clearer in my first post.

I have change your code to the following:


#include <sys/types.h>

#if USE_UNION
typedef union {
	const char *a;
	const signed char *b;
	const unsigned char *c;
} conststring_arg __attribute__ ((__transparent_union__));
static inline size_t strlen(conststring_arg p)
{
	return __builtin_strlen(p.a);
}
#else
size_t strlen(const char *);
#endif

int test(char *a, unsigned char *b)
{
	return strlen(a)+strlen(b);
}

int test2(void)
{
	return strlen("hello");
}

> and notice how yes, the "-DUSE_UNION" one silences the warning about
> using 'unsigned char *' for strlen. So it seems to work fine.
> 
> But then look at the code it generates for 'test2()' in the two cases.

Now test2 looks properly optimized.

This exploits a bit of a compiler loophole: it calls an external
function which has been defined with the same name!

Depending on how you look at it, it's either disgusting or clever.

I don't have clang installed, so I don't know whether it would swallow
this code or react with a strong allergy.

	Gabriel
> 
> The transparent union version actually generates a function call to an
> external 'strlen()' function.
> 
> The regular version uses the compiler builtin, and just compiles
> test2() to return the constant value 5.
> 
> So playing games with anonymous union arguments ends up also disabling
> all the compiler optimizations we do want, because apparently gcc then
> decides "ok, I'm not going to warn about you declaring this
> differently, but I'm also not going to use the regular one because you
> declared it differently".
> 
> This, btw, is also the reason why we don't use -ffreestanding in the
> kernel. We do want the basic <string.h> things to just DTRT.
> 
> For the sockaddr_in games, the above isn't an issue. For strlen() and
> friends, it very much is.
> 
>                        Linus
  
Linus Torvalds Oct. 22, 2022, 6:16 p.m. UTC | #28
On Fri, Oct 21, 2022 at 11:06 PM Gabriel Paubert <paubert@iram.es> wrote:
>
> Ok, I've just tried it, except that I had something slightly different in
> mind, but perhaps should have been clearer in my first post.
>
> I have change your code to the following:

I actually tested that, but using a slightly different version, and my
non-union test case ended up like

   size_t strlen(const char *p)
  {
        return __builtin_strlen(p);
  }

and then gcc actually complains about

    warning: infinite recursion detected

and I (incorrectly) thought this was unworkable. But your version
seems to work fine.

So yeah, for the kernel I think we could do something like this. It's
ugly, but it gets rid of the crazy warning.

Practically speaking this might be a bit painful, because we've got
several different variations of this all due to all the things like
our debugging versions (see <linux/fortify-string.h> for example), so
some of our code is this crazy jungle of "with this config, use this
wrapper".

But if somebody wants to deal with the '-Wpointer-sign' warnings,
there does seem to be a way out. Maybe with another set of helper
macros, creating those odd __transparent_union__ wrappers might even
end up reasonable.

It's not like we don't have crazy macros for function wrappers
elsewhere (the SYSCALL macros come to mind - shudder). The macros
themselves may be a nasty horror, but when done right the _use_ point
of said macros can be nice and clean.

                  Linus
  
Gabriel Paubert Oct. 23, 2022, 8:23 p.m. UTC | #29
On Sat, Oct 22, 2022 at 11:16:33AM -0700, Linus Torvalds wrote:
> On Fri, Oct 21, 2022 at 11:06 PM Gabriel Paubert <paubert@iram.es> wrote:
> >
> > Ok, I've just tried it, except that I had something slightly different in
> > mind, but perhaps should have been clearer in my first post.
> >
> > I have change your code to the following:
> 
> I actually tested that, but using a slightly different version, and my
> non-union test case ended up like
> 
>    size_t strlen(const char *p)
>   {
>         return __builtin_strlen(p);
>   }
> 
> and then gcc actually complains about
> 
>     warning: infinite recursion detected
> 
> and I (incorrectly) thought this was unworkable. But your version
> seems to work fine.

Incidentally, it also gives exactly the same code with -ffreestanding.

> 
> So yeah, for the kernel I think we could do something like this. It's
> ugly, but it gets rid of the crazy warning.

Not as ugly as casts IMO, and it's localized in a few header files.

However, it does not solve the problem of assigning a constant string to
a u8 *; I've no idea how to fix that.

> 
> Practically speaking this might be a bit painful, because we've got
> several different variations of this all due to all the things like
> our debugging versions (see <linux/fortify-string.h> for example), so
> some of our code is this crazy jungle of "with this config, use this
> wrapper".

I've just had a look at that code, and I don't want to touch it with a
10 foot pole. If someone else wants to get his hands dirty...

	Gabriel

> 
> But if somebody wants to deal with the '-Wpointer-sign' warnings,
> there does seem to be a way out. Maybe with another set of helper
> macros, creating those odd __transparent_union__ wrappers might even
> end up reasonable.
> 
> It's not like we don't have crazy macros for function wrappers
> elsewhere (the SYSCALL macros come to mind - shudder). The macros
> themselves may be a nasty horror, but when done right the _use_ point
> of said macros can be nice and clean.
> 
>                   Linus
  
Kees Cook Oct. 25, 2022, 11 p.m. UTC | #30
On Sun, Oct 23, 2022 at 10:23:56PM +0200, Gabriel Paubert wrote:
> On Sat, Oct 22, 2022 at 11:16:33AM -0700, Linus Torvalds wrote:
> > Practically speaking this might be a bit painful, because we've got
> > several different variations of this all due to all the things like
> > our debugging versions (see <linux/fortify-string.h> for example), so
> > some of our code is this crazy jungle of "with this config, use this
> > wrapper".
> 
> I've just had a look at that code, and I don't want to touch it with a
> 10 foot pole. If someone else wants to get his hands dirty...

Heh. Yes, fortify-string.h is a twisty maze. I've tried to keep it as
regular as possible, but I admit it is weird. On my list is to split
compile-time from run-time logic (as suggested by Linus a while back),
but I've worried it would end up spilling some of the ugly back into
string.h, which should probably not happen. As such, I've tried to keep
it all contained in fortify-string.h.

Regardless, I think I'd rather avoid yet more special cases in the
fortify code, so I'd like to avoid using transparent union if we can. It
seems like -funsigned-char and associated fixes will be sufficient,
though, yes?
  
Jason A. Donenfeld Oct. 26, 2022, 12:04 a.m. UTC | #31
On Tue, Oct 25, 2022 at 04:00:30PM -0700, Kees Cook wrote:
> On Sun, Oct 23, 2022 at 10:23:56PM +0200, Gabriel Paubert wrote:
> > On Sat, Oct 22, 2022 at 11:16:33AM -0700, Linus Torvalds wrote:
> > > Practically speaking this might be a bit painful, because we've got
> > > several different variations of this all due to all the things like
> > > our debugging versions (see <linux/fortify-string.h> for example), so
> > > some of our code is this crazy jungle of "with this config, use this
> > > wrapper".
> > 
> > I've just had a look at that code, and I don't want to touch it with a
> > 10-foot pole. If someone else wants to get his hands dirty...
> 
> Heh. Yes, fortify-string.h is a twisty maze. I've tried to keep it as
> regular as possible, but I admit it is weird. On my list is to split
> compile-time from run-time logic (as suggested by Linus a while back),
> but I've worried it would end up spilling some of the ugly back into
> string.h, which should probably not happen. As such, I've tried to keep
> it all contained in fortify-string.h.
> 
> Regardless, I think I'd rather avoid yet more special cases in the
> fortify code, so I'd like to avoid using transparent union if we can. It
> seems like -funsigned-char and associated fixes will be sufficient,
> though, yes?

I thought some of the motivation behind the transparent union was that
gcc still treats `char` as a distinct type from `unsigned char`, so
gcc's checker can still get upset and warn when passing a u8[] to a
string handling function that expects a char[]. (Once the
-funsigned-char changes go in, though, we should probably decide that
s8[] is never a valid string.)

Jason
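
A small example of the warning being described, independent of
-funsigned-char (the function and parameter names are made up):

#include <string.h>

/* 'label' stands in for a u8[] buffer */
size_t label_length(const unsigned char *label)
{
	/* -Wpointer-sign: strlen() expects const char *, and char and
	 * unsigned char remain distinct types even with -funsigned-char */
	return strlen(label);
}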
  
Rasmus Villemoes Oct. 26, 2022, 12:10 a.m. UTC | #32
On 19/10/2022 21.54, Linus Torvalds wrote:
> On Wed, Oct 19, 2022 at 9:27 AM Jason A. Donenfeld <Jason@zx2c4.com> wrote:
>>
>> So let's just eliminate this particular variety of heisensigned bugs
>> entirely. Set `-fsigned-char` globally, so that gcc makes the type
>> signed on all architectures.
> 
> Btw, I do wonder if we might actually be better off doing this - but
> doing it the other way around.

Only very tangentially related (because it has to do with chars...): Can
we switch our ctype to be ASCII only, just as it was back in the good'ol
mid 90s [i.e. before
https://git.kernel.org/pub/scm/linux/kernel/git/mpe/linux-fullhistory.git/commit/lib/ctype.c?id=036b97b05489161be06e63be77c5fad9247d23ff].

It bugs me that it's almost-but-not-quite-latin1, that toupper() isn't
idempotent, and that one can hit an isalpha() with toupper() and get
something that isn't isalpha().

Rasmus
  
Kees Cook Oct. 26, 2022, 3:41 p.m. UTC | #33
On Wed, Oct 26, 2022 at 02:04:30AM +0200, Jason A. Donenfeld wrote:
> ... (Once the
> -funsigned-char changes go in, though, we should probably decide that
> s8[] is never a valid string.)

Yeah, that's my goal too.
  
Linus Torvalds Oct. 26, 2022, 6:10 p.m. UTC | #34
On Tue, Oct 25, 2022 at 5:10 PM Rasmus Villemoes
<linux@rasmusvillemoes.dk> wrote:
>
> Only very tangentially related (because it has to do with chars...): Can
> we switch our ctype to be ASCII only, just as it was back in the good'ol
> mid 90s

Those US-ASCII days weren't really very "good" old days, but I forget
why we did this (it's attributed to me, but that's from the
pre-BK/pre-git days before we actually tracked things all that well,
so..)

Anyway, I think anybody using ctype.h on 8-bit chars gets what they
deserve, and I think Latin1 (or something close to it) is better than
US-ASCII, in that it's at least the same as Unicode in the low 8
bits.

So no, I'm disinclined to go back in time to what I think is an even
worse situation. Latin1 isn't great, but it sure beats US-ASCII. And
if you really want just US-ASCII, then don't use the high bit, and make
your disgusting 7-bit code be *explicitly* 7-bit.

Now, if there are errors in that table wrt Latin1 / "first 256
codepoints of Unicode" too, then we can fix those.

Not that anybody has apparently cared since 2.0.1 was released back in
July of 1996 (btw, it's sad how none of the old linux git archive
creations seem to have tried to import the dates, so you have to look
those up separately)

And if nobody has cared since 1996, I don't really think it matters.

But fundamentally, I think anybody calling US-ASCII "good" is either
very very very confused, or is comparing it to EBCDIC.

                 Linus
  
Rasmus Villemoes Oct. 27, 2022, 7:59 a.m. UTC | #35
On 26/10/2022 20.10, Linus Torvalds wrote:
> On Tue, Oct 25, 2022 at 5:10 PM Rasmus Villemoes
> <linux@rasmusvillemoes.dk> wrote:
>>
>> Only very tangentially related (because it has to do with chars...): Can
>> we switch our ctype to be ASCII only, just as it was back in the good'ol
>> mid 90s
> 
> Those US-ASCII days weren't really very "good" old days, but I forget
> why we did this (it's attributed to me, but that's from the
> pre-BK/pre-git days before we actually tracked things all that well,
> so..)
> 
> Anyway, I think anybody using ctype.h on 8-bit chars gets what they
> deserve, and I think Latin1 (or something close to it) is better than
> US-ASCII, in that it's at least the same as Unicode in the low 8
> bits.

My concern is that it's currently somewhat ill-specified what our ctype
actually represents, and that would be a lot easier to specify if we
just said ASCII: everything above 0x7f is neither punct nor ctrl nor
alpha nor anything else.

For example, people may do stuff like isprint(c) ? c : '.' in a printk()
call, but most likely the consumer (somebody doing dmesg) would, at
least these days, use utf-8, so that just results in a broken utf-8
sequence. Now I see that a lot of callers actually do "isascii(c) &&
isprint(c)", so they already know about this, but there are also many
instances where isprint() is used by itself.
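
The pattern in question looks roughly like this (a hypothetical helper,
not a quote of any particular caller):

#include <linux/ctype.h>
#include <linux/printk.h>
#include <linux/types.h>

/* A latin1 byte such as 0xe9 passes isprint() here and is emitted raw,
 * which a UTF-8 console renders as an invalid sequence unless the
 * caller also filters with isascii(). */
static void print_printable(const unsigned char *buf, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		pr_cont("%c", isprint(buf[i]) ? buf[i] : '.');
	pr_cont("\n");
}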

There's also stuff like fs/afs/cell.c and other places that use
isprint/isalnum/... to make decisions on what is allowed on the wire
and/or in a disk format, where it's then hard to reason about just
exactly what is accepted. And places that use toupper() on their strings
to normalize them; that's broken when toupper() isn't idempotent.

> So no, I'm disinclined to go back in time to what I think is an even
> worse situation. Latin1 isn't great, but it sure beats US-ASCII. And
> if you really want just US-ASCII, then don't use the high bit, and make
> your disgusting 7-bit code be *explicitly* 7-bit.
> 
> Now, if there are errors in that table wrt Latin1 / "first 256
> codepoints of Unicode" too, then we can fix those.

AFAICT, the differences are:

- 0xaa (FEMININE ORDINAL INDICATOR), 0xb5 (MICRO SIGN), 0xba (MASCULINE
ORDINAL INDICATOR) should be lower (hence alpha and alnum), not punct.

- depending a little on just exactly what one wants latin1 to mean, but
if it does mean "first 256 codepoints of Unicode", 0x80-0x9f should be cntrl

- for some reason at least glibc seems to classify 0xa0 as punctuation
and not space (hence also as isgraph)

- 0xdf and 0xff are correctly classified as lower, but since they don't
have upper-case versions (at least not any that are representable in
latin1), correct toupper() behaviour is to return them unchanged, but we
just subtract 0x20, so 0xff becomes 0xdf which isn't isupper() and 0xdf
becomes something that isn't even isalpha().
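
For reference, the subtract-0x20 behaviour in that last point is roughly
this (paraphrasing the helper in include/linux/ctype.h):

static inline unsigned char __toupper(unsigned char c)
{
	/* anything the _ctype table flags as lower case gets 0x20
	 * subtracted, including 0xdf and 0xff, which have no latin1
	 * upper case: 0xff ('ÿ') -> 0xdf ('ß'), 0xdf ('ß') -> 0xbf ('¿') */
	if (islower(c))
		c -= 'a' - 'A';
	return c;
}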

Fixing the first would create more instances of the last, and I think
the only sane way to fix that would be a 256-byte lookup table for
toupper() to use.

> Not that anybody has apparently cared since 2.0.1 was released back in
> July of 1996
> (btw, it's sad how none of the old linux git archive
> creations seem to have tried to import the dates, so you have to look
> those up separately)

Huh? That commit has 1996 as the author date, while its commit date is
indeed 2007. The very first line says:

author	linus1 <torvalds@linuxfoundation.org>	1996-07-02 11:00:00 -0600

> And if nobody has cared since 1996, I don't really think it matters.

Indeed, I don't think it's a huge problem in practice. But it still
bothers me that such a simple (and usually overlooked) corner of the
kernel's C library is ill-defined and arguably a little buggy.

Rasmus
  
Linus Torvalds Oct. 27, 2022, 6:28 p.m. UTC | #36
On Thu, Oct 27, 2022 at 12:59 AM Rasmus Villemoes
<linux@rasmusvillemoes.dk> wrote:
>
> AFAICT, the differences are:
>
> - 0xaa (FEMININE ORDINAL INDICATOR), 0xb5 (MICRO SIGN), 0xba (MASCULINE
> ORDINAL INDICATOR) should be lower (hence alpha and alnum), not punct.
>
> - depending a little on just exactly what one wants latin1 to mean, but
> if it does mean "first 256 codepoints of Unicode", 0x80-0x9f should be cntrl
>
> - for some reason at least glibc seems to classify 0xa0 as punctuation
> and not space (hence also as isgraph)
>
> - 0xdf and 0xff are correctly classified as lower, but since they don't
> have upper-case versions (at least not any that are representable in
> latin1), correct toupper() behaviour is to return them unchanged, but we
> just subtract 0x20, so 0xff becomes 0xdf which isn't isupper() and 0xdf
> becomes something that isn't even isalpha().

Heh.

Honestly, I don't think we should care at all.

For the byte range 128-255, anybody who uses ctype on them gets what
they get. In the kernel, the most likely use of it is for 'isprint()',
and if those care, they can (and some do) use 'isascii()' in addition.

I don't know if you realize, but the kernel already says "screw libc",
and makes all the isxyz() things just cast the argument to 'unsigned
char', and doesn't care about EOF.
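
Roughly what that looks like, paraphrasing include/linux/ctype.h (note
the unconditional cast to unsigned char and the absence of any EOF
special case):

#define _U	0x01	/* upper */
#define _L	0x02	/* lower */

extern const unsigned char _ctype[];	/* 256-entry table in lib/ctype.c */

#define __ismask(x)	(_ctype[(int)(unsigned char)(x)])
#define isalpha(c)	((__ismask(c) & (_U | _L)) != 0)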

And for the rest, let's just call it the "kernel locale", and just
admit that the kernel locale is entirely historical.

Boom - problem solved, and it's entirely standards conformant (apart
possibly from the EOF case, I think that is marked as a "lower case
character" right now ;)

Looking through

    https://pubs.opengroup.org/onlinepubs/9699919799/

I'm not actually seeing anything that says that we don't do *exactly*
what the standard requires.

You thinking that the kernel locale is US-ASCII is just wrong.

              Linus
  

Patch

diff --git a/Makefile b/Makefile
index f41ec8c8426b..f1abcaf7110e 100644
--- a/Makefile
+++ b/Makefile
@@ -562,7 +562,7 @@  KBUILD_AFLAGS   := -D__ASSEMBLY__ -fno-PIE
 KBUILD_CFLAGS   := -Wall -Wundef -Werror=strict-prototypes -Wno-trigraphs \
 		   -fno-strict-aliasing -fno-common -fshort-wchar -fno-PIE \
 		   -Werror=implicit-function-declaration -Werror=implicit-int \
-		   -Werror=return-type -Wno-format-security \
+		   -Werror=return-type -Wno-format-security -fsigned-char \
 		   -std=gnu11
 KBUILD_CPPFLAGS := -D__KERNEL__
 KBUILD_RUSTFLAGS := $(rust_common_flags) \