x86, processor-flags: Fix the datatypes and add bit number defines
author	H. Peter Anvin <hpa@linux.intel.com>
	Sat, 27 Apr 2013 23:11:17 +0000 (16:11 -0700)
committer	Willy Tarreau <w@1wt.eu>
	Tue, 7 Jun 2016 08:42:44 +0000 (10:42 +0200)
commit	a47831b0d8428904ef290ad06e3acbd3bb5a8312
tree	7c9b1b1abcc9bc76f03323e3810496965b7efd29
parent	f85cb76155fb908b966a422a1a4f6b5f7cce5de2
x86, processor-flags: Fix the datatypes and add bit number defines

commit d1fbefcb3aa608599a3c9e4582cbeeb6ba6c8939 upstream.

The control registers are unsigned long (32 bits on i386, 64 bits on
x86-64), and so make that manifest in the data type for the various
constants.  Add defines with a _BIT suffix which give the bit
number, as opposed to the bit mask.
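As a rough illustration of the naming scheme described above (a sketch only,
using the existing X86_CR0_PE flag as the example and assuming a shift-based
helper along the lines of _BITUL() from <linux/const.h>):

/* Hypothetical excerpt in the spirit of processor-flags.h: each flag gets a
 * *_BIT define carrying the bit number, and the mask is derived from it as
 * an unsigned long so it matches the width of the control registers. */
#define _BITUL(x)		(1UL << (x))	/* stand-in for the <linux/const.h> helper */

#define X86_CR0_PE_BIT		0			/* Protection Enable: bit number */
#define X86_CR0_PE		_BITUL(X86_CR0_PE_BIT)	/* Protection Enable: bit mask */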

This should resolve some issues with ~bitmask that Linus discovered.
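For context, a minimal user-space sketch of the ~bitmask pitfall on a 64-bit
build (the OLD_* name is made up for the comparison; the output assumes
unsigned long is 64 bits wide):

#include <stdio.h>

/* Old style: a bare hex literal ends up as a 32-bit unsigned int. */
#define OLD_X86_CR0_PG		0x80000000
/* New style: bit number plus an unsigned long mask derived from it. */
#define X86_CR0_PG_BIT		31
#define X86_CR0_PG		(1UL << X86_CR0_PG_BIT)

int main(void)
{
	unsigned long cr0 = 0xffffffff80000001UL;	/* made-up register value */

	/* ~OLD_X86_CR0_PG is evaluated in 32 bits (0x7fffffff) and then
	 * zero-extended, so the AND also clears the upper 32 bits. */
	printf("int mask:  %016lx\n", cr0 & ~OLD_X86_CR0_PG);	/* 0000000000000001 */

	/* With an unsigned long mask, only the intended bit is cleared. */
	printf("long mask: %016lx\n", cr0 & ~X86_CR0_PG);	/* ffffffff00000001 */

	return 0;
}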

Reported-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Link: http://lkml.kernel.org/n/tip-cwckhbrib2aux1qbteaebij0@git.kernel.org
[wt: backported to 3.10 only to keep next patch clean]

Signed-off-by: Willy Tarreau <w@1wt.eu>
arch/x86/include/uapi/asm/processor-flags.h