Imagine the following situation: we are working with a variable "var" of some unsigned integer type defined by the programmer.
mytype_t var;
The width of this type is unknown, or it may change depending on the compiler implementation. Our task is to print the variable's value correctly with the printf function. Which output modifier should we use? How about "llu", just to be safe?
printf("%llu", (unsigned long long)var);
But what if this variable's type is wider than unsigned long long and has no output modifier of its own? This is where uintmax_t comes to the rescue.
According to the language standard, the intmax_t and uintmax_t data types are, respectively, the signed and unsigned integer types of the greatest width the implementation supports; they may be implemented as extended integer types. Section 7.18.1.5 of the standard requires only that intmax_t and uintmax_t be capable of representing the value of any other signed or unsigned integer type, respectively. Like the other fixed-width types, they are declared in the stdint.h header, together with the macros for their smallest and largest values: INTMAX_MIN, INTMAX_MAX, and UINTMAX_MAX.

For intmax_t and uintmax_t, the letter "j" serves as the input/output length modifier in printf/scanf format strings. Note, however, that Visual Studio 2012 and earlier versions don't support it. Since any unsigned integer value fits into uintmax_t, casting a variable of any unsigned integer type to it is guaranteed to preserve the value. So the correct way to print the "var" variable looks like this:
printf("%ju", (uintmax_t) var);
The situation is the same with the "scanf" function.
mytype_t var;
scanf("%llu", &var);
This code may read the value incorrectly if mytype_t is wider than unsigned long long, or write past the "var" variable if mytype_t is narrower than unsigned long long. Correct reading can be ensured as follows:
mytype_t var;
uintmax_t temp;
scanf("%ju", &temp);
if(temp <= MYTYPE_MAX)
var = temp;
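Put together, a minimal compilable sketch of this pattern might look as follows; the mytype_t typedef and the MYTYPE_MAX limit are, again, assumptions made up for the illustration:

#include <stdio.h>
#include <stdint.h>

typedef unsigned short mytype_t;        /* hypothetical programmer-defined type */
#define MYTYPE_MAX ((mytype_t)-1)       /* hypothetical largest value of that unsigned type */

int main(void)
{
  mytype_t var = 0;
  uintmax_t temp;
  /* Read into the widest unsigned type, then check that the value fits. */
  if (scanf("%ju", &temp) == 1 && temp <= MYTYPE_MAX)
    var = (mytype_t)temp;
  printf("%ju\n", (uintmax_t)var);
  return 0;
}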
There is one intricate detail here, though. Readers who use __int128 or its unsigned counterpart may wonder why their clang or gcc compiler defines intmax_t as long long, even though long long is actually narrower than __int128. The reason is that neither clang nor gcc treats __int128 as an extended integer type, because doing so would imply changing intmax_t, which in turn would break ABI compatibility with other applications.
Imagine a program that calls a function taking an intmax_t parameter from a dynamic library. If the compiler changed the underlying type of intmax_t and the program were recompiled while the library was not, the two would refer to different types, breaking binary compatibility.
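If you want to see this on your own machine, a small check along these lines should do; it assumes a gcc or clang target where __int128 is available (on typical 64-bit targets it prints 8 and 16):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
  /* intmax_t remains long long even though the compiler-specific
     __int128 type is twice as wide. */
  printf("sizeof(intmax_t) = %zu\n", sizeof(intmax_t));
  printf("sizeof(__int128) = %zu\n", sizeof(__int128));
  return 0;
}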
In the final analysis, then, the intmax_t/uintmax_t types don't quite live up to the purpose stated in the standard: on such compilers they are not actually the widest integer types available.