JTC1/SC22/WG14
N769 J11/97-133
Mandatory intmax_t
Randy Meyers
23 Sept 1997
(It turns out that on the day I wrote this paper, Clive Feather
independently
proposed the same thing: intmax_t, uintmax_t, and their associated
macros should be required to be defined in inttypes.h. His paper
N765, part C, is relevant here.)
The types intmax_t and uintmax_t and their associated macros are
currently optional (as are all other types) in inttypes.h. The
committee had previously decided to make the "max" types optional to
allow the possibility that an implementation had no bound on the size
of its integer types.
This strikes me as a very bad trade-off: there do not exist any C
or C++ implementations that do not limit the size of their integer
types, and none are planned as far as I know. On the other hand,
knowing the largest integer type allows for some useful idioms:
    some_int_typedef x;
    printf("The value is %" PRIdMAX "\n", (intmax_t) x);
I can see two possible scenarios for implementations with no maximum
integer type.
The first is an implementation with a LISP-like bignum type that
dynamically grows as needed to hold values of increasing size. Such a
type would be very different from the integer types. In fact, such a
type would have more in common with VLAs than integers. Since bignum
types have no set size, they would need to be represented as a
reference to heap memory. Bignum types would either have to be
forbidden from appearing in unions, or the standard would have to
admit that only the pointer to the storage for the bignum value was
a member of the union. Applying sizeof to such a type could not
yield a constant expression. I believe that the standard promises
enough about the properties of integer types that bignum types
would not qualify as integers.
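(To make the union and sizeof problems concrete, here is one
plausible, entirely hypothetical representation of a bignum type:

    #include <stddef.h>

    typedef struct {
        size_t    ndigits;   /* grows as the value grows */
        unsigned *digits;    /* heap storage holding the digits */
    } bignum;

    union u {
        long   l;
        bignum b;   /* only this small reference object, not the
                       digit storage it points to, is in the union */
    };

The object itself is a fixed-size reference, so sizeof(bignum) says
nothing about the magnitude of the values it can hold.)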
The second scenario is a compiler with generalized integer type
handling so that it could support integers of an arbitrary size picked
at compile-time. Consider a syntax for type specifiers like
    int<n>
where n is an integer constant expression that gives the size in
bytes of the integer type. Objects of such a type would be
represented as a block of contiguous bytes storing a value in
binary representation. The operations on such integers would be
compiled into calls to library routines that would be passed the
size of the integers, which is known at compile time. A program
like:
    int<1000> i, j, k, m;
    i = j + k * m;
might generate code like:
    __mul(&k, &m, &tmp1, 1000);
    __add(&tmp1, &j, &i, 1000);
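(A minimal sketch of one such library routine, assuming the bytes
of an int<n> are stored least significant byte first; the name
__add and its argument order are taken from the generated code
above, and everything else here is invented:

    #include <stddef.h>

    /* Add two n-byte little-endian integers: result = a + b. */
    void __add(const unsigned char *a, const unsigned char *b,
               unsigned char *result, size_t n)
    {
        unsigned int carry = 0;
        size_t i;
        for (i = 0; i < n; i++) {
            unsigned int sum = (unsigned int)a[i] + b[i] + carry;
            result[i] = (unsigned char)(sum & 0xFF);
            carry = sum >> 8;   /* propagate overflow to next byte */
        }
    }

The same loop handles any n, so one library serves integers of
every compile-time size.)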
Such an implementation could handle, on demand, integers of
arbitrary size. Such integers would meet all the requirements for
integer types in the Standard: size known at compile time, and a
pure binary representation without funny indirection or counts
stored as part of the integer. And it would be unreasonable to
require such an implementation to define intmax_t as
int<LONG_MAX>.
However, I believe there is a simple solution. An implementation
that supports arbitrarily large integers picked at compile-time
should also support a command line option that sets what intmax_t
will be for the program being compiled. For example, the switch
could set the maximum value of n allowed in int<n> types, and the
compiler could both verify this assertion and define intmax_t
accordingly.
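(As an illustration, with an invented switch spelling, an
invocation like

    cc -max_int_size=1000 prog.c

could direct the compiler to reject any int<n> with n greater than
1000 and, in effect, to define

    typedef int<1000>          intmax_t;
    typedef unsigned int<1000> uintmax_t;

in inttypes.h, along with the matching PRIdMAX and related macros.)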
Since:
1. there are no existing implementations with arbitrarily large
integers, and since
2. if there are such implementations in the future, adding a
command line switch is cheap, and since
3. having a known maximum-size integer type is useful,
we should require intmax_t, uintmax_t, and their associated macros to
be defined.