In C programming, you allocate memory for an object by asking the allocator for a number of bytes:

x = malloc(16);
if (x == NULL) {
        /* allocation failed - hit memory limit? */
}

However, often what you actually want is an array of objects of a certain size, so you do:

x = malloc(n * sizeof(*x));
if (x == NULL) {
        /* allocation failed - hit memory limit? */
}

However, if the value of n is not known in advance (for example, it comes from untrusted input), the multiplication may overflow size_t and silently wrap around, causing malloc to return a smaller buffer than requested, which would be bad. calloc checks for this overflow, so you do:

x = calloc(n, sizeof(*x));
if (x == NULL) {
        /* allocation failed - hit memory limit or overflow? */
}
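The overflow check that calloc performs can also be written by hand. A minimal sketch of the idiom, assuming the element count and element size are both size_t values (the name malloc_array is made up for illustration):

```c
#include <stdint.h>
#include <stdlib.h>

/* Return a buffer for n elements of 'size' bytes each, or NULL if the
 * multiplication would overflow or the allocation fails. */
void *malloc_array(size_t n, size_t size)
{
        /* If n > SIZE_MAX / size, then n * size would wrap around. */
        if (size != 0 && n > SIZE_MAX / size)
                return NULL;
        return malloc(n * size);
}
```

With this guard, a request like malloc_array(SIZE_MAX, 2) fails cleanly instead of wrapping to a tiny allocation.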

However, calloc also zero-fills the array, which may be an unnecessary performance cost, and it cannot resize an existing allocation the way realloc can. So, if you’re on a system that has reallocarray(3) (an extension available on BSD and GNU systems), you can:

newptr = reallocarray(oldptr, n, sizeof(*oldptr));
if (newptr == NULL) {
        /* error occurred, clean up oldptr... */
}
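On systems without reallocarray, the same behaviour can be approximated with realloc plus an explicit overflow check. A hedged sketch (the name my_reallocarray is hypothetical; the real reallocarray also sets errno to ENOMEM on overflow):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* A portable approximation of reallocarray(3): resize 'ptr' to hold
 * n elements of 'size' bytes, failing with ENOMEM on overflow. */
void *my_reallocarray(void *ptr, size_t n, size_t size)
{
        if (size != 0 && n > SIZE_MAX / size) {
                errno = ENOMEM;
                return NULL;
        }
        return realloc(ptr, n * size);
}
```

As with the real function, passing NULL as the old pointer makes it behave like an overflow-checked malloc.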

reallocarray still leaves one ambiguity unsolved. The C standard does not specify the behaviour of zero-sized allocations: some implementations return NULL to indicate an error, while others return a pointer to inaccessible memory. So does NULL always indicate an error?
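One common way to sidestep the ambiguity is to never ask the allocator for zero bytes, so that NULL can only mean failure. A minimal sketch (the name malloc_nonzero is made up here):

```c
#include <stdlib.h>

/* Allocate at least one byte, so that a NULL return unambiguously
 * means failure even when the caller asks for zero bytes. */
void *malloc_nonzero(size_t size)
{
        return malloc(size != 0 ? size : 1);
}
```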

For this reason, NetBSD has reallocarr(3), which can replace malloc, realloc, and reallocarray. It reports failure through its return value and updates the pointer only on success, so the old pointer remains valid on failure and no temporary variable is needed:

if (reallocarr(&ptr, n, sizeof(*ptr)) != 0) {
        /* allocation failed, clean up ptr... */
}
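The semantics can be illustrated with a portable sketch built on realloc (the real reallocarr lives in NetBSD's libc; the zero-size handling and the function name reallocarr_sketch below are choices of this sketch, not a specification):

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* reallocarr(3)-like semantics: *ptrp is updated only on success, so
 * the old buffer stays valid and owned by the caller when an error is
 * returned. Returns 0 on success or an errno value on failure. */
int reallocarr_sketch(void *ptrp, size_t n, size_t size)
{
        void **slot = ptrp;
        void *tmp;

        if (n == 0 || size == 0) {
                /* Zero-sized request: free and clear, unambiguously. */
                free(*slot);
                *slot = NULL;
                return 0;
        }
        if (n > SIZE_MAX / size)
                return ENOMEM;  /* multiplication would overflow */
        tmp = realloc(*slot, n * size);
        if (tmp == NULL)
                return ENOMEM;
        *slot = tmp;
        return 0;
}
```

Because the pointer is only written on success, the error path can still free (or keep using) the original buffer.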

This allows us to improve and simplify allocation in our large and very long-lived C codebase: