Visual Studio's stdint.h seems to have the following typedefs:
typedef signed char int8_t;
typedef short int16_t;
typedef int int32_t;
typedef long long int64_t;
typedef unsigned char uint8_t;
typedef unsigned short uint16_t;
typedef unsigned int uint32_t;
typedef unsigned long long uint64_t;
However, sized integer types use the __intN syntax, as described here: https://learn.microsoft.com/en-us/cpp/cpp/int8-int16-int32-int64?view=msvc-170
Is there any difference (for example) between using int32_t versus using __int32?
I guess I am a little confused: if the purpose of the int32_t typedef is to follow the ANSI C99 standard and abstract away compiler-specific syntax (so you can use int32_t in both Visual Studio and gcc C code, for example), then I'm not sure why the typedef in Visual Studio wouldn't be: typedef __int32 int32_t;
Say that the codebase has the following:
#ifdef _MSC_VER
typedef __int64 PROD_INT64;
#else
typedef int64_t PROD_INT64;
#endif
And it uses PROD_INT64 everywhere for a 64-bit signed integer, and it is compiled in both Visual Studio and gcc. Can it simply use int64_t in both Visual Studio and gcc? It would seem this is changing __int64 for long long in Visual Studio.
CodePudding user response:
Q: Is there any difference (for example) between using int32_t versus using __int32?
A: Yes:
- int32_t and friends are standard fixed width integer types (since C99)
- "__" is "reserved":
https://stackoverflow.com/a/25090719/421195
The C standard says (section 7.1.3):
All identifiers that begin with an underscore and either an uppercase letter or another underscore are always reserved for any use.
All identifiers that begin with an underscore are always reserved for use as identifiers with file scope in both the ordinary and tag name spaces.
What this means is that, for example, the implementation (either the compiler or a standard header) can use the name __FOO for anything it likes. If you define that identifier in your own code, your program's behavior is undefined. If you're "lucky", you'll be using an implementation that doesn't happen to define it, and your program will work as expected.
In other words, for any NEW code, you should use "int32_t".
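For new code, then, a minimal sketch of the portable style (standard C99 headers only, nothing compiler-specific; the values are arbitrary examples):
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

int main(void) {
    int32_t a = INT32_C(2000000000);            /* INT32_C pins the constant's width */
    uint64_t b = UINT64_C(0xFFFFFFFFFFFFFFFF);  /* i.e. UINT64_MAX */
    /* PRId32/PRIu64 from inttypes.h give the matching printf conversions */
    printf("a = %" PRId32 ", b = %" PRIu64 "\n", a, b);
    return 0;
}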
CodePudding user response:
sized integer types use the __intN syntax, as described here: https://learn.microsoft.com/en-us/cpp/cpp/int8-int16-int32-int64?view=msvc-170
Your wording suggests that you think the __intN syntax is somehow more correct or fundamental than all other alternatives. That's not what the doc you link says. It simply defines what those particular forms mean. In Microsoft C, those are preferred over Microsoft's older, single-underscore forms (_intN), but there's no particular reason to think that they are to be preferred over other alternatives, such as the intN_t forms available when you include stdint.h. The key distinguishing characteristic of the __intN types is that they are built in, available without including any particular header.
Is there any difference (for example) between using int32_t versus using __int32?
On Windows, int32_t is the same type as __int32, but the former is standard C, whereas the latter is not. You need to include stdint.h to use int32_t, whereas __int32 is built in to MSVC.
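One way to check this for yourself is a C11 _Generic probe. The sketch below is illustrative and assumes a compiler in C11 mode (e.g. MSVC with /std:c11); probe_t is a made-up name:
#include <stdint.h>
#include <stdio.h>

#ifdef _MSC_VER
typedef __int32 probe_t;
#else
typedef int probe_t;   /* stand-in so the sketch also builds with gcc/clang */
#endif

int main(void) {
    /* _Generic matches exact types: yields 1 only if probe_t is the type int32_t */
    int same = _Generic((probe_t)0, int32_t: 1, default: 0);
    printf("probe type is int32_t: %d\n", same);
    return 0;
}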
I'm not sure why the typedef in Visual Studio wouldn't be: typedef __int32 int32_t;
It's an implementation decision that may or may not have a well-considered reason. As long as the implementation provides correct definitions -- and there's no reason to think MSVC is doing otherwise -- you shouldn't care about the details.
Say that the codebase has the following:
#ifdef _MSC_VER
typedef __int64 PROD_INT64;
#else
typedef int64_t PROD_INT64;
#endif
And it uses PROD_INT64 everywhere for a 64-bit signed integer, and it is compiled in both Visual Studio and gcc. Can it simply use int64_t in both Visual Studio and gcc?
Yes, and that's certainly what I would do.
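In other words, the whole conditional block collapses to a single portable line (keeping the PROD_INT64 name so the rest of the codebase is untouched):
#include <stdint.h>

typedef int64_t PROD_INT64;   /* same 64-bit signed type under both MSVC and gcc */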
It would seem this is changing __int64 for long long in Visual Studio.
Which is a distinction without a difference. Both of those spellings give you the same type in MSVC.
CodePudding user response:
From my top comments ...
stdint.h is provided by the compiler rather than libc or the OS. It provides portable guarantees (e.g. int32_t will be 32 bits). The compiler designers could do:
typedef __int32 int32_t;
Or, they can do:
typedef int int32_t;
The latter is what most stdint.h files do (since they don't have the __int* types).
Probably, the VS compiler designers just grabbed a copy of the standard stdint.h and didn't bother to change it.
Your point is valid; it's just a design choice (or lack of one) that the compiler writers made. Just use the standard/portable int32_t and don't worry ;-)
Historical note: stdint.h is relatively recent. In the 1980s, MS had [16 bit] MS/DOS. Many mc68000-based micros at the time defined int to be 32 bits. But, on the MS C compiler, int was 16 bits because that fit the 8086 arch best.
stdint.h didn't exist back then. But, if it did, it would need:
typedef long int32_t;
because long was the only way to define a 32 bit integer for the MS 8086 compiler.
When 64 bit machines became available, POSIX compliant machines allowed long to "float" with the arch/mode. It was 32 bits on 32 bit arches, and 64 bits on 64 bit arches. This is the LP64 memory model.
Here's the original rationale: https://unix.org/version2/whatsnew/lp64_wp.htm
But, because of MS's longstanding use of long as a 32 bit integer, it couldn't do this. Too many programs written in the 8086 days would break if recompiled.
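To make the difference concrete, a minimal sketch that just prints the widths in question; on 64 bit Linux (the LP64 model) long reports 8, while on 64 bit Windows (commonly labeled LLP64) it stays 4:
#include <stdio.h>

int main(void) {
    /* long floats with the arch under LP64, but stays 32 bits on 64 bit Windows */
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}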
IIRC [and I could be wrong]:
- MS came up with __int64 and LONGLONG as types.
- They had to define [yet another] abstract type for pointers [remember near and far pointers, anyone ;-)?]
So, IMO, it was, in part, all the MS craziness that prompted the creation of stdint.h in the first place.