have a direct hardware interpretation.
\section{Normal form}
The transformations described here have a well-defined goal: to bring the
program in a well-defined form that is directly translatable to
\VHDL, while fully preserving the semantics of the program. We refer
to this form as the \emph{normal form} of the program. The formal
(like \lam{map}). The graphical version of the architecture contains
a slightly simplified version, since the state tuple packing and
unpacking have been left out. Instead, two separate registers are
drawn. Most synthesis tools will further optimize this architecture by
removing the multiplexers at the register input and instead using the
write enable port of the register (when it is available), but we want to
show the architecture as close to the description as possible.
As you can see from the previous examples, the generation of the final
architecture from the normal form is straightforward. In each of the
\subsection[sec:normalization:intendednormalform]{Intended normal form definition}
Now that we have some intuition for the normal form, we can describe how we want
the normal form to look in a slightly more formal manner. The
EBNF-like description in \in{definition}[def:IntendedNormal] captures
most of the intended structure (and generates a subset of \GHC's core
format).
There are two things missing from this definition: cast expressions are
sometimes allowed by the prototype, but not specified here, and the below
definition allows uses of state that cannot be translated to \VHDL\
properly. These two problems are discussed in
\in{section}[sec:normalization:castproblems] and
\in{section}[sec:normalization:stateproblems] respectively.
{\defref{intended normal form definition}
\typebufferlam{IntendedNormal}}
When looking at such a program from a hardware perspective, the top
level lambda abstractions (\italic{lambda}) define the input ports.
Lambda abstractions cannot appear anywhere else. The variable reference
in the body of the recursive let expression (\italic{toplet}) is the
output port. Most binders bound by the let expression define a
component instantiation (\italic{userapp}), where the input and output
ports are mapped to local signals (\italic{userarg}). Some of the others
use a built-in construction (\eg\ the \lam{case} expression) or call a
built-in function (\italic{builtinapp}) such as \lam{+} or \lam{map}.
For these, a hardcoded \small{VHDL} translation is available.
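As a small constructed illustration (not taken from the running example), a normalized multiply-accumulate description could look as follows, with the lambda abstractions providing the input ports, each let binding instantiating a function and the body of the let referencing the output port:

\startlambda
λa.λb.λc.
let s = a * b
    r = s + c
in
  r
\stoplambda

Here we assume that \lam{*} and \lam{+} are built-in functions, so each binding maps to a hardcoded \VHDL\ translation rather than a component instantiation.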
\section[sec:normalization:transformation]{Transformation notation}
To be able to concisely present transformations, we use a specific format
against (subexpressions of) the expression to be transformed. We call this a
pattern, because it can contain \emph{placeholders} (variables), which match
any expression or binder. Any such placeholder is said to be \emph{bound} to
the expression it matches. It is convention to use an uppercase letter (\eg\
\lam{M} or \lam{E}) to refer to any expression (including a simple variable
reference) and lowercase letters (\eg\ \lam{v} or \lam{b}) to refer to
(references to) binders.
For example, the pattern \lam{a + B} will match the expression
The type of this expression is \lam{Word}, so it does not match \lam{a -> b}
and the transformation does not apply. Next, we have two options for the
next expression to look at: the function position and argument position of
the application. The expression in the argument position is \lam{b}, which
has type \lam{Word}, so the transformation does not apply. The expression in
the function position is:
function position (which makes the second condition false). In the same
way the transformation does not apply to both components of this
expression (\lam{case opcode of Low -> (+); High -> (-)} and \lam{a}), so
we will skip to the components of the case expression: the scrutinee and
both alternatives. Since the opcode is not a function, it does not apply
here.
lambda abstractions, let expressions and pattern matches of case
alternatives). This is a slightly different notion of global versus
local than what \small{GHC} uses internally, but for our purposes
the distinction \GHC\ makes is not useful.
\defref{global variable} \defref{local variable}
A \emph{hardware representable} (or just \emph{representable}) type or value
is (a value of) a type that we can generate a signal for in hardware. For
example, a bit, a vector of bits, a 32 bit unsigned word, etc. Values that are
not runtime representable notably include (but are not limited to): types,
dictionaries, functions.
\defref{representable}
A \emph{built-in function} is a function supplied by the Cλash
framework, whose implementation is not used to generate \VHDL. This is
either because it is not valid Cλash (like most list functions that need
recursion) or because a Cλash implementation would be unwanted (for the
addition operator, for example, we would rather use the \VHDL\ addition
operator to let the synthesis tool decide what kind of adder to use
instead of explicitly describing one in Cλash).
\defref{built-in function}

These are functions like \lam{map}, \lam{hwor}, \lam{+} and \lam{length}.

For these functions, Cλash has a \emph{built-in hardware translation},
so calls to these functions can still be translated. Built-in functions
must have a valid Haskell implementation, of course, to allow
simulation.

A \emph{user-defined} function is a function for which no built-in
translation is available and whose definition must thus itself be
normalized and translated. \defref{user-defined function}
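As a small constructed example of this distinction (the types are illustrative), the following Haskell function is user-defined and must itself be normalized and translated, while the \lam{map} and \lam{+} it applies are built-in and map to hardcoded \VHDL\ translations:

\starthaskell
-- User-defined: this definition is normalized and translated to hardware.
sumpairs :: [(Word, Word)] -> [Word]
sumpairs = map (\(a, b) -> a + b)  -- map and (+) are built-in functions
\stophaskell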
\subsubsection[sec:normalization:predicates]{Predicates}
Here, we define a number of predicates that can be used below to concisely
\stoplambda
This is obviously not what was supposed to happen! The root of this problem is
the reuse of binders: identical binders can be bound in different,
but overlapping scopes. Any variable reference in those
overlapping scopes then refers to the variable bound in the inner
(smallest) scope. There is no way to refer to the variable in the
outer scope. This effect is usually referred to as
\emph{shadowing}: when a binder is bound in a scope where the
binder already had a value, the inner binding is said to
\emph{shadow} the outer binding. In the example above, the \lam{c}
binder was bound outside of the expression and in the inner lambda
desugarer or simplifier emits some bindings that cannot be
normalized (e.g., calls to a
\hs{Control.Exception.Base.patError}) but are not used anywhere
either. To prevent the \VHDL\ generation from breaking on these
artifacts, this transformation removes them.
\todo{Do not use old-style numerals in transformations}
\transexample{unusedlet}{Unused let binding removal}{from}{to}
\subsubsection{Empty let removal}
This transformation is simple: it removes recursive lets that have no bindings
(which usually occurs when unused let binding removal removes the last
binding from it).
\subsubsection[sec:normalization:simplelet]{Simple let binding removal}
This transformation inlines simple let bindings, that bind some
binder to some other binder instead of a more complex expression (\ie\
a = b).
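For example (a constructed case), such a binding is removed by substituting the bound binder with the binder it is bound to:

\startlambda
letrec
  a = b
  x = a + a
in
  x
\stoplambda

becomes

\startlambda
letrec
  x = b + b
in
  x
\stoplambda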
This transformation is not needed to get an expression into intended
\in{Section}[section:prototype:polymorphism].
Without this transformation, there would be a \lam{(+)} entity
in the \VHDL\ which would just add its inputs. This generates a
lot of overhead in the \VHDL, which is particularly annoying
when browsing the generated RTL schematic (especially since most
non-alphanumerics, like all characters in \lam{(+)}, are not
allowed in \VHDL\ architecture names\footnote{Technically, it is
allowed to use non-alphanumerics when using extended
identifiers, but it seems that none of the tooling likes
extended identifiers in filenames, so it effectively does not
This transformation makes all non-recursive lets recursive. In the
end, we want a single recursive let in our normalized program, so all
non-recursive lets can be converted. This also makes other
transformations simpler: they only need to be specified for recursive
let expressions (and simply will not apply to non-recursive let
expressions until this transformation has been applied).
This transformation ensures that all representable arguments will become
references to local variables. This ensures they will become references
to local signals in the resulting \small{VHDL}, which is required due to
limitations in the component instantiation code in \VHDL\ (one can only
assign a signal or constant to an input port). By ensuring that all
arguments are always simple variable references, we always have a signal
available to map to the input ports.
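As a constructed example, argument simplification extracts a complex argument into a let binding, so that only a simple variable reference remains in the argument position:

\startlambda
f (a + b)
\stoplambda

becomes

\startlambda
let x = a + b
in
  f x
\stoplambda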
function without arguments, but also argumentless dataconstructors
like \lam{True}) are also simplified. Only local variables generate
signals in the resulting architecture. Even though argumentless
dataconstructors generate constants in generated \VHDL\ code and could be
mapped to an input port directly, they are still simplified to make the
normal form more regular.
this.
\subsection{Case normalisation}
The transformations in this section ensure that case statements end up
in normal form.

\subsubsection{Scrutinee simplification}
This transform ensures that the scrutinee of a case expression is always
a simple variable reference.
\transexample{letflat}{Case normalisation}{from}{to}
\placeintermezzo{}{
\defref{wild binders}
\startframedtext[width=7cm,background=box,frame=no]
\startalignment[center]
{\tfa Wild binders}
\stopalignment
\blank[medium]
In a functional expression, a \emph{wild binder} refers to any
binder that is never referenced. This means that even though it
will be bound to a particular value, that value is never used.

The Haskell syntax offers the underscore as a wild binder that
cannot even be referenced (it can be seen as introducing a new,
anonymous binder every time it is used).

In these transformations, the term wild binder will sometimes be
used to indicate that a binder must not be referenced.
\stopframedtext
}

\subsubsection{Case normalization}
This transformation ensures that all case expressions get a form
that is allowed by the intended normal form. This means they
will become one of:

\startitemize
\item An extractor case with a single alternative that picks a field
from a datatype, \eg\ \lam{case x of (a, b) -> a}.
\item A selector case with multiple alternatives and only wild binders, that
makes a choice between expressions based on the constructor of another
expression, \eg\ \lam{case x of Low -> a; High -> b}.
\stopitemize
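As a constructed example of the first form, a case expression that uses fields from its scrutinee is replaced by extractor cases that bind each used field (using wild binders for the other fields), after which only a selection between simple expressions remains:

\startlambda
case p of
  (a, b) -> a + b
\stoplambda

becomes

\startlambda
let a = case p of (a', _) -> a'
    b = case p of (_, b') -> b'
in
  a + b
\stoplambda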
For an arbitrary case, that has \lam{n} alternatives, with
Cn wn,0 ... wn,m -> yn
\stoptrans
Note that this transformation applies to case expressions with any
scrutinee. If the scrutinee is a complex expression, this might
result in duplication of work (hardware). An extra condition to
\subsubsection[sec:transformation:caseremoval]{Case removal}
This transform removes any case expression with a single alternative and
only wild binders.\refdef{wild binders}
These "useless" case expressions are usually leftovers from case simplification
on extractor case (see the previous example).
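For example (a constructed case), the following expression has a single alternative and only wild binders, so it is replaced by the body of its alternative:

\startlambda
case p of
  (_, _) -> a + b
\stoplambda

becomes \lam{a + b}.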
values used in our expression representable. There are two main
transformations that are applied to \emph{all} unrepresentable let
bindings and function arguments. These are meant to address three
different kinds of unrepresentable values: polymorphic values,
higher-order values and literals. The transformations are described
generically: they apply to all non-representable values. However,
non-representable values that do not fall into one of these three
categories will be moved around by these transformations but are
unlikely to completely disappear. They usually mean the program was not
take care of exactly this.
There is one case where polymorphism cannot be completely
removed: built-in functions are still allowed to be polymorphic
(since we have no function body that we could properly
specialize). However, the code that generates \VHDL\ for built-in
functions knows how to handle this, so this is not a problem.
functions knows how to handle this, so this is not a problem.
\subsubsection[sec:normalization:defunctionalization]{Defunctionalization}
These transformations remove higher-order expressions from our
program, making all values first-order. The approach used for
defunctionalization, a combination of specialization, inlining and
some cleanup transformations, was also proposed in parallel research
by Neil Mitchell \cite[mitchell09].
Higher order values are always introduced by lambda abstractions, none
of the other Core expression elements can introduce a function type.
\stoplambda
This example shows a number of higher-order values that we cannot
translate to \VHDL\ directly. The \lam{double} binder bound in the let
expression has a function type, as well as both of the alternatives of
the case expression. The first alternative is a partial application of
the \lam{map} built-in function, whereas the second alternative is a
\hs{Bool} Haskell type, which is just an enumerated type.
There is, however, a second type of literal that does not have a
representable type: integer literals. Cλash supports using integer
literals for all three integer types supported (\hs{SizedWord},
\hs{SizedInt} and \hs{RangedWord}). This is implemented using
Haskell's \hs{Num} type class, which offers a \hs{fromInteger} method
that converts any \hs{Integer} to the Cλash datatypes.
When \GHC\ sees integer literals, it will automatically insert calls to
the \hs{fromInteger} method in the resulting Core expression. For
example, the following expression in Haskell creates a 32 bit unsigned
word with the value 1. The explicit type signature is needed, since
there is no context for \GHC\ to determine the type from otherwise.
\starthaskell
1 :: SizedWord D32
in y + z
\stoplambda
Looking at this, we could imagine an alternative approach: create a
transformation that removes let bindings that bind identical values.
In the above expression, the \lam{y} and \lam{z} variables could be
merged together, resulting in the more efficient expression:
normal form definition} offers enough freedom to describe all
valid stateful descriptions, but is not limiting enough. It is
possible to write descriptions which are in intended normal
form, but cannot be translated into \VHDL\ in a meaningful way
(\eg, a function that swaps two substates in its result, or a
function that changes a substate itself instead of passing it to
a subfunction).
Without going into detail about the exact problems (of which
there are probably more than have shown up so far), it seems
unlikely that these problems can be solved entirely by just
improving the \VHDL\ state generation in the final stage. The
normalization stage seems the best place to apply the rewriting
needed to support more complex stateful descriptions. This does
of course mean that the intended normal form definition must be
expanding some expression.
\item[q:soundness] Is our system \emph{sound}? Since our transformations
continuously modify the expression, there is an obvious risk that the final
normal form will not be equivalent to the original program: its meaning could
have changed.
\item[q:completeness] Is our system \emph{complete}? Since we have a complex
system of transformations, there is an obvious risk that some expressions will
not end up in our intended normal form, because we forgot some transformation.
In other words: does our transformation system result in our intended normal
form for all possible inputs?
\item[q:determinism] Is our system \emph{deterministic}? Since we have defined
no particular order in which the transformation should be applied, there is an
\emph{different} normal forms. They might still both be intended normal forms
(if our system is \emph{complete}) and describe correct hardware (if our
system is \emph{sound}), so this property is less important than the previous
three: the translator would still function properly without it.
\stopitemize
Unfortunately, the final transformation system has only been
developed in the final part of the research, leaving no more time
for verifying these properties. In fact, it is likely that the
current transformation system still violates some of these
properties in some cases (see
\in{section}[sec:normalization:non-determinism] and
\in{section}[sec:normalization:stateproblems]) and should be improved (or
extra conditions on the input hardware descriptions should be formulated).
This is most likely the case with the completeness and determinism
properties, perhaps also the termination property. The soundness
property probably holds, since it is easier to manually verify (each
transformation can be reviewed separately).
\todo{Define β-reduction and η-reduction?}
In such a graph, a node (expression) is in normal form if it has no
outgoing edges (meaning no transformation applies to it). The set of
nodes without outgoing edges is called the \emph{normal set}. Similarly,
the set of nodes containing expressions in intended normal form
\refdef{intended normal form} is called the \emph{intended normal set}.
From such a graph, we can derive some properties easily:
\startitemize[KR]
sufficient for our goals (but it is a good start).
It should be possible to have a single formal definition of
meaning for Core for both normal Core compilation by \GHC\ and for
our compilation to \VHDL. The main difference seems to be that in
hardware every expression is always evaluated, while in software
it is only evaluated if needed, but it should be possible to
Since each of the transformations can be applied to any
subexpression as well, there is a constraint on our meaning
definition: the meaning of an expression should depend only on the
meaning of subexpressions, not on the expressions themselves. For
example, the meaning of the application in \lam{f (let x = 4 in
x)} should be the same as the meaning of the application in \lam{f
each node in the normal set is also in the intended normal set.
Reasoning about our intended normal set is easier, since we know
how to generate it from its definition. \refdef{intended normal
form definition}
Fortunately, we can also prove the complement (which is
equivalent, since $A \subseteq B \Leftrightarrow \overline{B}
\subseteq \overline{A}$): show that the set of nodes not in
intended normal form is a subset of the set of nodes not in normal
form. In other words, show that for every expression that is not
in intended normal form, that there is at least one transformation
This approach is especially useful for proving completeness of our
system, since if expressions exist to which none of the
transformations apply (\ie\ if the system is not yet complete), it
is immediately clear which expressions these are and adding
(or modifying) transformations to fix this should be relatively
easy.