JTC1/SC22/WG14
N927
WG14/N927
Another Formalism for Sequence Points
=====================================
D. Hugh Redelmeier
hugh@mimosa.com
2000 November 17
The current concept of sequence points in the C Standard is intended
to allow the description of certain race conditions in the evaluation
of expressions. It seems to have failed to yield a precise
definition. So here is a sketch of a different approach.
The original version of this was much simpler and cleaner.
Unfortunately, it did not capture several important details of C. It
is not evident to me whether this version is simple and clean enough
to be useful.
The Problem
===========
There are two forms of race condition that yield undefined behaviour
in expression evaluation:
If two settings of the same object cannot be proven to be disjoint
in time, the evaluation is considered undefined. Example:
(x = 1) * (x = 2)
If a fetch of an object cannot be proven to be before or after a
setting of that object, the evaluation is considered undefined.
Example:
(x = 1) * x
There is a special rule for function calls that causes otherwise
undefined behaviour to be only unspecified:
The execution of two functions is never interleaved. Even if the
order of two function invocations is not specified, one will
certainly complete before the other starts.
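For example (the names in this sketch are my own):

    int x;
    int f(void) { x = 1; return 0; }
    int g(void) { x = 2; return 0; }

    int main(void)
    {
        int r = f() + g();   /* which call runs first is unspecified */
        (void)r;
        return x;            /* x ends up 1 or 2; nothing worse */
    }

Which of the two stores to x happens last is unspecified, so the
program returns either 1 or 2, but the behaviour is not undefined
because the bodies of f and g cannot interleave.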
Whether the evaluation of an expression has a race condition is a
property that cannot always be determined without performing at least
parts of that evaluation -- it is not always a static property of the
expression.
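For example (an illustration, with names of my own):

    (a[i] = 1) * (a[j] = 2)

This contains two unsequenced stores to the same object exactly when
i == j, which no inspection of the expression alone can decide.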
The Approach
============
The process of determining whether an expression has a race condition
is fairly tricky to describe, so we fall back on a formal description.
We tie our analysis to an abstract syntax tree (AST) of the
expression. In effect, our determination is a kind of evaluation of
the expression, but it yields something other than conventional
values.
There are three kinds of evaluation of C expressions (including
subexpressions).
- for a value (which may then be discarded!)
- for an lvalue
- for a sizeof (for now, we'll ignore this case; the only interesting
issues involve VLAs)
Our determination procedure will calculate, for each subtree in the AST,
properties of the usage of each object. We choose to represent these
properties by membership in sets (this isn't the only possible way).
The goal is to determine what objects have bad properties (i.e. are
involved in race conditions).
Properties of Object Usage
==========================
There are only two properties we are interested in (Undefined and
Unspecified), but our calculation will require the use of a number of
other properties.
The sets that represent properties will have objects as members.
Since C allows objects to overlap in interesting ways, we actually
need to have each byte of each object be separately included. This
may not be enough for bitfields -- it may be necessary to go down to
the bit level. For our purposes, a register can be considered a
(possibly large) single byte object.
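For example (the union here is only an illustration):

    union { int i; unsigned char c[sizeof(int)]; } u;

    (u.i = 1) * (u.c[0] = 2)

The two stores conflict only in the single byte that u.i and u.c[0]
share, which is exactly the kind of partial overlap that byte-level
sets can express.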
The sets we need to know for each subtree include:
Race: (the set of bytes of) objects for which the expression has a race
Set: objects set by the evaluation
Fetched: objects fetched by the evaluation
For subtrees that are evaluated for an lvalue, we also need:
Obj: (the set of bytes of) the object designated by the lvalue
These properties are not mutually exclusive, so an object can be in
any or all of the sets at the same time.
Unfortunately, the properties listed above turn out to be too
simplistic. We need to keep track of more detailed information.
Instead of Race, we need Unspecified and Undefined to distinguish the
two kinds of race condition. Furthermore, we need to distinguish Set
and Fetched done within a function and done outside a function. So we
essentially double the number of sets needed.
Based on an example of Martin Riordan, I think that further refinement
is needed. The example (simplified):
*(x = 2, &x) = 7
Compare this with the expression:
*(&x + (x = 0)) = 7
In the original, the inner assignment to x must occur before the
outermost assignment. Since the two assignments are separated by a
sequence point, the result is well defined.
In the second example, there is no sequence point, so the result will
be undefined.
To perform this analysis, we need to distinguish between setting done
before a sequence point, and setting done since the last sequence
point.
Perhaps surprisingly, without rather contrived examples like this one,
sequence point semantics could be handled without additional
properties -- only careful specification of the rules for computing
the existing properties. It is tempting to think that C should be
changed to eliminate this complexity.
Finally, setting a floating point status flag is "sticky". Although a
flag can be modelled as a variable, it has an additional property:
certain ways of setting the flag to 1 can be done several times and
still be well defined, even without a separating sequence point. Unlike other
variables, the floating point flags can be implicitly set to 1 by many
floating point operators. This requires that we keep a separate set
Set1flag for each Set set.
I don't know how to handle the sticky bit correctly and concisely. A
naive approach leads to combinatorial explosion. Here is an example.
Let F be a sticky bit.
x + y + F
If F is already set (not a well defined term in itself), there is no
problem. Otherwise, if the leftmost addition would set F, the
expression is undefined (I think). This does not match the ordinary
semantics of assignment -- in the analogous case, the computation
would always be undefined. So the sticky bit appears to require a
parallel set of somewhat distinct machinery.
Simply pretending that each setting of the sticky bit involves a
conditional that sets it only if it is unset leaves the problem that
the test in this conditional can itself yield race conditions.
I will leave description of the sticky bit out of this discussion. It
may be that the standard constrains sticky bits in a way that reduces
the problems I perceive. For example, if the only way to fetch a
sticky bit involves a subroutine, and the only way to set it other
than as the side effect of a floating point operation involves a
subroutine, perhaps the interactions are tamed by the sequence points.
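For what it is worth, C99's <fenv.h> does have roughly that shape: the
flags are explicitly cleared, raised, and fetched only through library
calls. A sketch (the wrapper function is my own):

    #include <fenv.h>
    #pragma STDC FENV_ACCESS ON

    int addition_was_inexact(double a, double b)
    {
        feclearexcept(FE_INEXACT);            /* clear the sticky flag: a call */
        volatile double r = a + b;            /* may set FE_INEXACT implicitly */
        (void)r;
        return fetestexcept(FE_INEXACT) != 0; /* fetch the flag: a call */
    }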
So the sets we need to compute, for each node, are now:
Undefined
Unspecified
FetchedDirect
FetchedSub (Fetched within a subroutine evaluation)
Set x {now, before seq. pt.} x {direct, sub}
This last line represents 4 sets all by itself:
SetNowDirect
SetNowSub
SetBeforeDirect
SetBeforeSub
Anything done inside a subroutine is before a sequence point. So
SetNowSub should be empty. We can therefore eliminate it.
It turns out that certain composite properties are useful. The
corresponding sets are purely a combination of the sets described
above. Think of this as a kind of common subexpression elimination in
our presentation:
SetDirect = SetNowDirect | SetBeforeDirect
SetSub = SetBeforeSub (recall that SetNowSub is always empty)
Set = SetDirect | SetBeforeSub
Direct = FetchedDirect | SetDirect
Sub = FetchedSub | SetBeforeSub
Fetched = FetchedDirect | FetchedSub
Calculating Properties
======================
We calculate properties of subtrees. The properties of a subtree are
calculated based on properties of its subtrees and the nature of the
operator at the root of the subtree.
For each set described previously, we introduce a function of the same
name. The function maps a subtree into the set value for that
subtree. Thus SetNowDirect("x = 1") would yield the set of byte objects
that make up the variable x.
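To make this concrete, here is one possible representation, sketched
in C (none of it is part of the proposal; the names and the fixed
bound are my own). Each byte of each object gets an index into a
bitset, and each AST node carries one bitset per property:

    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_BYTES 1024                  /* bytes of all objects (a toy bound) */

    typedef struct { unsigned char bit[MAX_BYTES / 8]; } byteset;

    static byteset bs_union(byteset a, byteset b)
    {
        for (size_t i = 0; i < sizeof a.bit; i++)
            a.bit[i] |= b.bit[i];
        return a;                           /* the "|" of the rules below */
    }

    static byteset bs_intersect(byteset a, byteset b)
    {
        for (size_t i = 0; i < sizeof a.bit; i++)
            a.bit[i] &= b.bit[i];
        return a;                           /* the "&" of the rules below */
    }

    static bool bs_empty(byteset a)
    {
        for (size_t i = 0; i < sizeof a.bit; i++)
            if (a.bit[i])
                return false;
        return true;
    }

    struct props {                          /* computed for each AST node */
        byteset obj;                        /* only for lvalue evaluations */
        byteset undefined, unspecified;
        byteset fetched_direct, fetched_sub;
        byteset set_now_direct, set_before_direct, set_before_sub;
    };

A rule such as SetNowDirect(left op right) = SetNowDirect(left) |
SetNowDirect(right) is then just bs_union applied to the children's
set_now_direct fields.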
To make the distinction between evaluation for lvalue and evaluation
for value clear in the AST, we add an explicit operator to convert
from lvalue to value. We will denote this by @. Any subtree that
can be evaluated for an lvalue is evaluated that way.
Now that we have those details out of the way, we can start specifying
the rules.
==> For an expression x of the form "left op right", where op is +, -,
*, <<, ... (ordinary binary operators):
FetchedDirect(x) = FetchedDirect(left) | FetchedDirect(right)
FetchedSub(x) = FetchedSub(left) | FetchedSub(right)
SetNowDirect(x) = SetNowDirect(left) | SetNowDirect(right)
SetBeforeDirect(x) = SetBeforeDirect(left) | SetBeforeDirect(right)
SetBeforeSub(x) = SetBeforeSub(left) | SetBeforeSub(right)
Undefined(x) = Undefined(left)
| Undefined(right)
| Direct(left) & Set(right)
| Set(left) & Direct(right)
| SetDirect(left) & Fetched(right)
| Fetched(left) & SetDirect(right)
Unspecified(x) = Unspecified(left) | Unspecified(right)
| Sub(left) & SetSub(right)
| SetSub(left) & Sub(right)
Let's study these rules.
The first five are very simple: the properties simply flow from the
operands. Too bad the notation is a bit cluttered.
The definition of Undefined is quite intricate. Half the lines can be
understood as mirror images of the other lines due to the symmetry of
these operators. That still leaves three cases to understand. The
first case is that anything undefined in an operand is undefined in
the whole subexpression (for these operators). The second case is
that if anything directly references or sets an object, this clashes
with a subroutine's setting it. The third case is that a direct
setting conflicts with a subroutine's fetch. Notice that these cases
are not always distinct.
Care has been taken to exclude the Unspecified case from Undefined.
This cannot be handled by set subtraction because cases sometimes
overlap. Thus the last four lines are needed where two would
otherwise have been sufficient.
The definition of Unspecified has one interesting rule, and a copy of
it induced by symmetry.
[Before we added all the distinctions to handle Unspecified vs
Undefined and to handle Martin Riordan's example, the rules were much
simpler:
Race(x) = Race(left) | Race(right)
| Set(left) & Fetched(right)
| Fetched(left) & Set(right)
| Set(left) & Set(right)
Set(x) = Set(left) | Set(right)
Fetched(x) = Fetched(left) | Fetched(right)
Personally, I think much of the appeal of this approach is lost in the
resulting complexity.]
==> For an expression x of the form "op opand" where op is +, -, !,
... (ordinary unary operator), all the definitions simply pass
through the results from the operand:
FetchedDirect(x) = FetchedDirect(opand)
FetchedSub(x) = FetchedSub(opand)
SetNowDirect(x) = SetNowDirect(opand)
SetBeforeDirect(x) = SetBeforeDirect(opand)
SetBeforeSub(x) = SetBeforeSub(opand)
Undefined(x) = Undefined(opand)
Unspecified(x) = Unspecified(opand)
[Just a pass-through -- pretty boring.]
==> For an expression x of the form "obj = val":
FetchedDirect(x) = FetchedDirect(obj) | FetchedDirect(val)
FetchedSub(x) = FetchedSub(obj) | FetchedSub(val)
SetNowDirect(x) = SetNowDirect(obj) | SetNowDirect(val) | Obj(obj)
SetBeforeDirect(x) = SetBeforeDirect(obj) | SetBeforeDirect(val)
SetBeforeSub(x) = SetBeforeSub(obj) | SetBeforeSub(val)
Undefined(x) = Undefined(obj)
| Undefined(val)
| Direct(obj) & Set(val)
| Set(obj) & Direct(val)
| SetDirect(obj) & Fetched(val)
| Fetched(obj) & SetDirect(val)
| Obj(obj) & (SetNowDirect(obj) | SetNowDirect(val))
Unspecified(x) = Unspecified(obj) | Unspecified(val)
| Sub(obj) & SetSub(val)
| SetSub(obj) & Sub(val)
Although assignment is clearly a key operator, the definitions are
very little different from those for an ordinary binary operator. The
first difference is another term in the definition of SetNowDirect --
a pretty obvious one. The second is the last line of Undefined.
This last line of Undefined says that there is a problem if we are
assigning to an object that was assigned while evaluating an operand
and not separated by a sequence point.
It is a bit disturbing that there are so many lines of definition
where only two are "interesting".
==> For an expression x of the form "obj op= val":
FetchedDirect(x) = FetchedDirect(obj) | FetchedDirect(val) | Obj(obj)
FetchedSub(x) = FetchedSub(obj) | FetchedSub(val)
SetNowDirect(x) = SetNowDirect(obj) | SetNowDirect(val) | Obj(obj)
SetBeforeDirect(x) = SetBeforeDirect(obj) | SetBeforeDirect(val)
SetBeforeSub(x) = SetBeforeSub(obj) | SetBeforeSub(val)
Undefined(x) = Undefined(obj)
| Undefined(val)
| Direct(obj) & Set(val)
| Set(obj) & Direct(val)
| SetDirect(obj) & Fetched(val)
| Fetched(obj) & SetDirect(val)
| Obj(obj) & (SetNowDirect(obj) | SetNowDirect(val))
| Obj(obj) & Set(val)
Unspecified(x) = Unspecified(obj) | Unspecified(val)
| Sub(obj) & SetSub(val)
| SetSub(obj) & Sub(val)
This is very close to the definitions for "obj = val". FetchedDirect
is different in the obvious way. A new line has been added to the end
of Undefined: any setting by val is unordered with respect to the
fetching of obj.
==> For an expression x of the form "obj++", "obj--", "++obj", or
"--obj":
FetchedDirect(x) = FetchedDirect(obj) | Obj(obj)
FetchedSub(x) = FetchedSub(obj)
SetNowDirect(x) = SetNowDirect(obj) | Obj(obj)
SetBeforeDirect(x) = SetBeforeDirect(obj)
SetBeforeSub(x) = SetBeforeSub(obj)
Undefined(x) = Undefined(obj)
| Obj(obj) & SetNowDirect(obj)
Unspecified(x) = Unspecified(obj)
This is essentially the same as "obj += 0". Every term involving the
right operand has been replaced by the empty set, and the results
simplified.
Compare these definitions with those for "op opand" -- this one is
more than a simple pass-through.
==> For an expression x of the form "var" (a variable) evaluated for
its lvalue:
Obj(var) = bytes of object designated by var
FetchedDirect(x) = {}
FetchedSub(x) = {}
SetNowDirect(x) = {}
SetBeforeDirect(x) = {}
SetBeforeSub(x) = {}
Undefined(x) = {}
Unspecified(x) = {}
In other words, all property sets are empty, but Obj contains the
variable.
==> For an expression x of the form "@ lval" (an lvalue being
explicitly converted to a value):
FetchedDirect(x) = FetchedDirect(lval) | Obj(lval)
FetchedSub(x) = FetchedSub(lval)
SetNowDirect(x) = SetNowDirect(lval)
SetBeforeDirect(x) = SetBeforeDirect(lval)
SetBeforeSub(x) = SetBeforeSub(lval)
Undefined(x) = Undefined(lval)
| Obj(lval) & SetNowDirect(lval)
Unspecified(x) = Unspecified(lval)
This is almost a straight pass-through. There is an additional term
in FetchedDirect. Undefined can actually occur due to the fetch.
Consider the following example:
(&x)[x = 0]
==> For an expression x that is a manifest constant (for example,
"42"):
FetchedDirect(x) = {}
FetchedSub(x) = {}
SetNowDirect(x) = {}
SetBeforeDirect(x) = {}
SetBeforeSub(x) = {}
Undefined(x) = {}
Unspecified(x) = {}
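At this point enough of the rules are in place to check the first
example from "The Problem". A sketch of the calculation, writing {x}
for the set of bytes of x:

    (x = 1) * (x = 2)

    Each operand "x = constant" (assignment rule, with the leaf rules
    for "x" as an lvalue and for a constant):
        Obj(x)              = {x}
        SetNowDirect(x = c) = Obj(x) = {x}
        every other set     = {}

    The whole expression (ordinary binary operator rule):
        SetNowDirect = {x} | {x} = {x}
        Undefined    = ... | Direct(left) & Set(right) | ...
                     = {x} & {x} = {x}

Undefined is non-empty, so the expression is undefined, as intended.
In "(x = 1) * x" the right operand is "@ x", whose FetchedDirect is
{x}, and the term Set(left) & Direct(right) contributes {x} in the
same way.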
==> For an expression x of the form "* val", evaluated for lvalue.
Let ob designate the bytes of the object that val points to:
Obj(x) = bytes of object that Val(val) points to
FetchedDirect(x) = FetchedDirect(val)
FetchedSub(x) = FetchedSub(val)
SetNowDirect(x) = SetNowDirect(val)
SetBeforeDirect(x) = SetBeforeDirect(val)
SetBeforeSub(x) = SetBeforeSub(val)
Undefined(x) = Undefined(val)
Unspecified(x) = Unspecified(val)
Except for the definition of Obj, this is a simple pass-through.
[&*x not treated specially. I think it just works.]
==> For an expression x of the form "& lval":
FetchedDirect(x) = FetchedDirect(lval)
FetchedSub(x) = FetchedSub(lval)
SetNowDirect(x) = SetNowDirect(lval)
SetBeforeDirect(x) = SetBeforeDirect(lval)
SetBeforeSub(x) = SetBeforeSub(lval)
Undefined(x) = Undefined(lval)
Unspecified(x) = Unspecified(lval)
This is a pass-through. Nothing interesting is happening, which is
interesting.
==> For an expression x of the form "s.member", evaluated for lvalue:
Obj(x) = the subset of Obj(s) that contains .member
FetchedDirect(x) = FetchedDirect(s)
FetchedSub(x) = FetchedSub(s)
SetNowDirect(x) = SetNowDirect(s)
SetBeforeDirect(x) = SetBeforeDirect(s)
SetBeforeSub(x) = SetBeforeSub(s)
Undefined(x) = Undefined(s)
Unspecified(x) = Unspecified(s)
This is close to a pass-through.
If the struct were instead being returned by a function, it ought to
be a pure value, with no lvalue, so there should be no race
conditions. I vaguely remember that the committee may have fudged
some aspects (for example, if a struct returned by a function has an
array member, does selecting that array member yield a pointer into
the value?).
==> For an expression x of the form "f(arglist)":
We consider the argument list comma to be a regular binary operator
(but not the normal comma operator!). This means that we need not do
a separate induction over the argument list in this rule.
FetchedDirect(x) = FetchedDirect(f) | FetchedDirect(arglist)
FetchedSub(x) = FetchedSub(f) | FetchedSub(arglist)
| fetched by execution of subroutine
SetNowDirect(x) = {}
SetBeforeDirect(x) = SetBeforeDirect(f) | SetBeforeDirect(arglist)
| SetNowDirect(f) | SetNowDirect(arglist)
SetBeforeSub(x) = SetBeforeSub(f) | SetBeforeSub(arglist)
| set by execution of subroutine
Undefined(x) = Undefined(f)
| Undefined(arglist)
| Direct(f) & Set(arglist)
| Set(f) & Direct(arglist)
| SetDirect(f) & Fetched(arglist)
| Fetched(f) & SetDirect(arglist)
Unspecified(x) = Unspecified(f) | Unspecified(arglist)
| Sub(f) & SetSub(arglist)
| SetSub(f) & Sub(arglist)
Much of this is like an ordinary binary operator, but there are key
differences. Surprisingly, Undefined and Unspecified are unchanged:
they are all about evaluation of the argument list and function
pointer before the call.
Everything set is Before because the function contains sequence points
(this may not actually be true of library routines; what are the
consequences?).
Things set or fetched in the subroutine are recorded in FetchedSub and
SetBeforeSub.
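Here is a sketch of how the special function-call rule from "The
Problem" falls out. Suppose f and g (names of my own) each store into
the same global x, fetch nothing, take no arguments, and that the
analysis of their bodies has been handed to us:

    f() * g()

    Each call:    SetBeforeSub = {x}, every other set = {}
    The whole expression (ordinary binary operator rule):
        Undefined   = {}     (every term intersects an empty Direct,
                              SetDirect, or Fetched set)
        Unspecified = Sub(left) & SetSub(right) = {x} & {x} = {x}

Which store to x happens last is unspecified, but the behaviour is
not undefined.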
==> For an expression x of the form "left , right" (the sequencing
comma, not the argument list comma):
FetchedDirect(x) = FetchedDirect(left) | FetchedDirect(right)
FetchedSub(x) = FetchedSub(left) | FetchedSub(right)
SetNowDirect(x) = SetNowDirect(right)
SetBeforeDirect(x) = SetBeforeDirect(left) | SetNowDirect(left)
| SetBeforeDirect(right)
SetBeforeSub(x) = SetBeforeSub(left) | SetBeforeSub(right)
Undefined(x) = Undefined(left) | Undefined(right)
Unspecified(x) = Unspecified(left) | Unspecified(right)
These definitions are similar to those for a plain binary operator,
but with some key differences. SetNowDirect does not pass through the
left's SetNowDirect -- that is instead contributed to SetBeforeDirect
because it is now before a sequence point. Undefined and Unspecified
are simple pass-throughs.
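Here is a sketch of how this works out on Martin Riordan's example
from earlier:

    *(x = 2, &x) = 7

    "x = 2":                 SetNowDirect = {x}, everything else = {}
    "&x":                    every set = {}
    "(x = 2, &x)" (comma):   SetNowDirect = {}, SetBeforeDirect = {x}
    "*(x = 2, &x)" (lvalue): Obj = {x}, SetBeforeDirect = {x}
    outer assignment:        Obj(obj) & (SetNowDirect(obj) | SetNowDirect(val))
                             = {x} & {} = {}, and every other term of
                             Undefined is also empty

so the whole expression is well defined. In "*(&x + (x = 0)) = 7"
there is no comma, SetNowDirect of the lvalue operand stays {x}, the
same term becomes {x} & {x} = {x}, and the expression is undefined --
matching the informal analysis given earlier.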
==> For an expression x of the form "test ? tex : fex":
If the value of test is true, the definitions are identical to those
for "test , tex".
Otherwise, the definitions are identical to those for "test , fex".
==> For an expression x of the form "left || right:
The definitions are identical to those for "left ? 1 : right"
==> For an expression x of the form "left && right:
The definitions are identical to those for "left ? right : 0"
We're done
==========
The results we actually care about are Undefined and Unspecified for
the whole expression. If Undefined is non-empty, the program has
strayed into Undefined territory. If Unspecified is non-empty, the
values of those objects are not specified (they are constrained, but we
have not computed the constraints).
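In terms of the representation sketched earlier (again, only an
illustration), the check at the root of the AST is simply:

    #include <stdio.h>

    /* root holds the props computed for the full expression */
    static void report(const struct props *root)
    {
        if (!bs_empty(root->undefined))
            printf("evaluation of the expression is undefined\n");
        else if (!bs_empty(root->unspecified))
            printf("some object values are unspecified\n");
    }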
This mechanism might be useful for other issues, for example,
understanding volatile. In that case, sets might be the wrong
tool to use.
The description could be tightened up, with a possible loss of
clarity, by eliminating all "For an expression x of the form"
preambles and eliminating x from the definitions. An example:
SetNowDirect(left op right) = SetNowDirect(left) | SetNowDirect(right)
where op is +, -, *, <<, ...
A lot of rules are similar. Exploiting this might make the result
less readable, or it might not.
Questions to be answered
========================
- is this approach sufficiently precise?
- is this description accurate (or can it be made accurate)?
- how can we satisfy ourselves that what is specified this way matches
what is specified in the current Standard?
- Are there any surprises uncovered by this process? Some are noted
along the way.
- for what purpose(s) could a description of this style be useful to
WG14?
- is this formalization more useful for some purposes than the others
WG14 has?
- is the terminology acceptable? Can it be improved?
- how can the unformalized bits be tightened up (e.g. how to
appear formal and be accurate about the object representation)?
- is it acceptable to leave the analysis of called functions out (we
presume that the analysis is handed to us)?
- Are "bytes of objects" the right granularity for analysis?
- what can and what should be said about volatile?
- can we say anything interesting about union member access?
- these analyses can be recast as static, but must then be only
approximate. Would the resulting analyses be useful (for example,
for generating compile-time diagnostics)? An extreme case: would
they allow the committee to mandate compile-time diagnostics?