X-Git-Url: https://git.stderr.nl/gitweb?p=matthijs%2Fmaster-project%2Freport.git;a=blobdiff_plain;f=Chapters%2FNormalization.tex;h=922322bcda1901bba315d1afb35429ec8a69d481;hp=5a3fecb802f5833565eefe87df5e9601d910c1a8;hb=efc1953cd015844a2d969a9c9b6e633569891033;hpb=788930024f24adb36067d90b246621b2fa528328 diff --git a/Chapters/Normalization.tex b/Chapters/Normalization.tex index 5a3fecb..922322b 100644 --- a/Chapters/Normalization.tex +++ b/Chapters/Normalization.tex @@ -21,22 +21,22 @@ \stopcombination } - The first step in the core to \small{VHDL} translation process, is normalization. We - aim to bring the core description into a simpler form, which we can + The first step in the Core to \small{VHDL} translation process is normalization. We + aim to bring the Core description into a simpler form, which we can subsequently translate into \small{VHDL} easily. This normal form is needed because - the full core language is more expressive than \small{VHDL} in some + the full Core language is more expressive than \small{VHDL} in some areas (higher-order expressions, limited polymorphism using type - classes, etc.) and because core can describe expressions that do not + classes, etc.) and because Core can describe expressions that do not have a direct hardware interpretation. \section{Normal form} - The transformations described here have a well-defined goal: To bring the + The transformations described here have a well-defined goal: to bring the program into a well-defined form that is directly translatable to \VHDL, while fully preserving the semantics of the program. We refer to this form as the \emph{normal form} of the program. 
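To give a first impression of this normal form, it can be mimicked with an ordinary, executable Haskell analogue of the \lam{alu} example that appears later in this chapter. This is only an illustrative sketch, not Cλash code: \hs{Int} stands in for the report's \lam{Word} type, and the structure (outer lambdas, one let, a variable reference as result) is the point of interest.

```haskell
-- Sketch of a function in normal form, in plain Haskell.
-- Int stands in for the Word type used in the report (an assumption).

-- All lambda abstractions are at the outer level: these become input ports.
alu :: Int -> Int -> Int -> Int
alu = \a -> \b -> \c ->
  let
    -- Each let binding becomes a signal, driven by a component.
    mul = (*) a b   -- the multiplier component
    res = (+) mul c -- the adder component
  in
    -- The return value: only a plain variable reference is allowed here.
    res
```

Evaluating \hs{alu} in a Haskell interpreter simulates exactly the multiply-add architecture that the normal form describes.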
The formal definition of this normal form is quite simple: - \placedefinition{}{\startboxed A program is in \emph{normal form} if none of the + \placedefinition[force]{}{\startboxed A program is in \emph{normal form} if none of the transformations from this chapter apply.\stopboxed} Of course, this is an \quote{easy} definition of the normal form, since our @@ -65,26 +65,6 @@ other expression. \stopitemize - \todo{Intermezzo: functions vs plain values} - - A very simple example of a program in normal form is given in - \in{example}[ex:MulSum]. As you can see, all arguments to the function (which - will become input ports in the generated \VHDL) are at the outer level. - This means that the body of the inner lambda abstraction is never a - function, but always a plain value. - - As the body of the inner lambda abstraction, we see a single (recursive) - let expression, that binds two variables (\lam{mul} and \lam{sum}). These - variables will be signals in the generated \VHDL, bound to the output port - of the \lam{*} and \lam{+} components. - - The final line (the \quote{return value} of the function) selects the - \lam{sum} signal to be the output port of the function. This \quote{return - value} can always only be a variable reference, never a more complex - expression. - - \todo{Add generated VHDL} - \startbuffer[MulSum] alu :: Bit -> Word -> Word -> Word alu = λa.λb.λc. @@ -125,13 +105,33 @@ ncline(add)(sum); \stopuseMPgraphic - \placeexample[here][ex:MulSum]{Simple architecture consisting of a + \placeexample[][ex:MulSum]{Simple architecture consisting of a multiplier and an adder.} \startcombination[2*1] {\typebufferlam{MulSum}}{Core description in normal form.} {\boxedgraphic{MulSum}}{The architecture described by the normal form.} \stopcombination + \todo{Intermezzo: functions vs plain values} + + A very simple example of a program in normal form is given in + \in{example}[ex:MulSum]. 
As you can see, all arguments to the function (which + will become input ports in the generated \VHDL) are at the outer level. + This means that the body of the inner lambda abstraction is never a + function, but always a plain value. + + As the body of the inner lambda abstraction, we see a single (recursive) + let expression that binds two variables (\lam{mul} and \lam{sum}). These + variables will be signals in the generated \VHDL, bound to the output port + of the \lam{*} and \lam{+} components. + + The final line (the \quote{return value} of the function) selects the + \lam{sum} signal to be the output port of the function. This \quote{return + value} can only ever be a variable reference, never a more complex + expression. + + \todo{Add generated VHDL} + \in{Example}[ex:MulSum] showed a function that just applied two other functions (multiplication and addition), resulting in a simple architecture with two components and some connections. There is of @@ -196,7 +196,7 @@ ncline(mux)(res) "posA(out)"; \stopuseMPgraphic - \placeexample[here][ex:AddSubAlu]{Simple \small{ALU} supporting two operations.} + \placeexample[][ex:AddSubAlu]{Simple \small{ALU} supporting two operations.} \startcombination[2*1] {\typebufferlam{AddSubAlu}}{Core description in normal form.} {\boxedgraphic{AddSubAlu}}{The architecture described by the normal form.} @@ -224,7 +224,8 @@ -> State (Word, Word) -> (State (Word, Word), Word) - -- All arguments are an inital lambda (address, data, packed state) + -- All arguments are an initial lambda + -- (address, data, packed state) regbank = λa.λd.λsp. 
-- There are nested let expressions at top level let @@ -307,7 +308,7 @@ \stopuseMPgraphic \todo{Don't split registers in this image?} - \placeexample[here][ex:NormalComplete]{Simple architecture consisting of an adder and a + \placeexample[][ex:NormalComplete]{Simple architecture consisting of an adder and a subtractor.} \startcombination[2*1] {\typebufferlam{NormalComplete}}{Core description in normal form.} @@ -318,13 +319,14 @@ \subsection[sec:normalization:intendednormalform]{Intended normal form definition} Now that we have some intuition for the normal form, we can describe what we want - the normal form to look like in a slightly more formal manner. The following - EBNF-like description captures most of the intended structure (and - generates a subset of \GHC's core format). + the normal form to look like in a slightly more formal manner. The + EBNF-like description in \in{definition}[def:IntendedNormal] captures + most of the intended structure (and generates a subset of \GHC's Core + format). - There are two things missing: Cast expressions are sometimes - allowed by the prototype, but not specified here and the below - definition allows uses of state that cannot be translated to \VHDL + There are two things missing from this definition: cast expressions are + sometimes allowed by the prototype but not specified here, and the below + definition allows uses of state that cannot be translated to \VHDL\ properly. These two problems are discussed in \in{section}[sec:normalization:castproblems] and \in{section}[sec:normalization:stateproblems] respectively. 
@@ -372,20 +374,20 @@ | \italic{builtinapp} \stopbuffer - \placedefinition[][def:IntendedNormal]{Definition of the intended nnormal form using an \small{EBNF}-like syntax.} + \placedefinition[][def:IntendedNormal]{Definition of the intended normal form using an \small{EBNF}-like syntax.} {\defref{intended normal form definition} \typebufferlam{IntendedNormal}} - When looking at such a program from a hardware perspective, the - top level lambda abstractions define the input ports. Lambda - abstractions cannot appear anywhere else. The variable reference - in the body of the recursive let expression is the output port. - Most function applications bound by the let expression define a - component instantiation, where the input and output ports are - mapped to local signals or arguments. Some of the others use a - built-in construction (\eg the \lam{case} expression) or call a - built-in function (\eg \lam{+} or \lam{map}). For these, a - hardcoded \small{VHDL} translation is available. + When looking at such a program from a hardware perspective, the top + level lambda abstractions (\italic{lambda}) define the input ports. + Lambda abstractions cannot appear anywhere else. The variable reference + in the body of the recursive let expression (\italic{toplet}) is the + output port. Most binders bound by the let expression define a + component instantiation (\italic{userapp}), where the input and output + ports are mapped to local signals (\italic{userarg}). Some of the others + use a built-in construction (\eg\ the \lam{case} expression) or call a + built-in function (\italic{builtinapp}) such as \lam{+} or \lam{map}. + For these, a hard-coded \small{VHDL} translation is available. 
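The outermost shape of this grammar (\italic{normal}: zero or more lambda abstractions, then a recursive let whose body is a variable reference) can be approximated by a small executable check. The sketch below is illustrative only: the data type and function names are invented, and it ignores types, case expressions and the finer productions of the definition.

```haskell
-- A much-simplified, untyped Core-like AST (names invented for this sketch).
data Expr
  = Var String
  | Lam String Expr
  | App Expr Expr
  | LetRec [(String, Expr)] Expr
  deriving (Eq, Show)

-- Checks only the outermost structure of the intended normal form:
-- zero or more lambda abstractions (the input ports), then a recursive
-- let whose body is a plain variable reference (the output port).
hasNormalShape :: Expr -> Bool
hasNormalShape (Lam _ body)       = hasNormalShape body
hasNormalShape (LetRec _ (Var _)) = True
hasNormalShape _                  = False
```

A full check would also have to validate each let binding against the \italic{userapp}/\italic{builtinapp} productions; only the skeleton is shown here.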
\section[sec:normalization:transformation]{Transformation notation} To be able to concisely present transformations, we use a specific format @@ -410,12 +412,12 @@ Nevertheless, we will more precisely specify their meaning below: \startdesc{} The expression pattern that will be matched - against (subexpressions of) the expression to be transformed. We call this a + against (sub-expressions of) the expression to be transformed. We call this a pattern, because it can contain \emph{placeholders} (variables), which match any expression or binder. Any such placeholder is said to be \emph{bound} to - the expression it matches. It is convention to use an uppercase letter (\eg + the expression it matches. It is conventional to use an uppercase letter (\eg\ \lam{M} or \lam{E}) to refer to any expression (including a simple variable - reference) and lowercase letters (\eg \lam{v} or \lam{b}) to refer to + reference) and lowercase letters (\eg\ \lam{v} or \lam{b}) to refer to (references to) binders. For example, the pattern \lam{a + B} will match the expression @@ -472,9 +474,9 @@ \stopdesc To understand this notation better, the step by step application of - the η-abstraction transformation to a simple \small{ALU} will be - shown. Consider η-abstraction, which is a common transformation from - labmda calculus, described using above notation as follows: + the η-expansion transformation to a simple \small{ALU} will be + shown. Consider η-expansion, which is a common transformation from + lambda calculus, described using the above notation as follows: \starttrans E \lam{E :: a -> b} @@ -482,12 +484,12 @@ λx.E x \lam{E} is not a lambda abstraction. \stoptrans - η-abstraction is a well known transformation from lambda calculus. What + η-expansion is a well-known transformation from lambda calculus. What this transformation does is take any expression that has a function type and turn it into a lambda expression (giving an explicit name to the argument). 
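To make the rule and its side conditions concrete, η-expansion can be sketched on a small explicitly typed AST. This is an illustrative sketch, not the prototype's implementation: the data types are invented, and the fixed binder name \hs{"x"} stands in for a fresh binder drawn from a unique supply.

```haskell
-- Minimal explicitly typed AST, invented for this sketch.
data Type = TWord | TFun Type Type deriving (Eq, Show)
data Expr
  = Var String Type
  | Lam String Type Expr
  | App Expr Expr
  deriving (Eq, Show)

typeOf :: Expr -> Type
typeOf (Var _ t)   = t
typeOf (Lam _ t e) = TFun t (typeOf e)
typeOf (App f _)   = case typeOf f of
                       TFun _ r -> r
                       t        -> t -- ill-typed application; not expected

-- E => λx.E x, only when E has a function type and is not already a
-- lambda abstraction (one of the conditions that stops the rule from
-- applying indefinitely).
etaExpand :: Expr -> Expr
etaExpand e@(Lam _ _ _) = e
etaExpand e = case typeOf e of
  TFun a _ -> Lam "x" a (App e (Var "x" a)) -- "x" stands in for a fresh binder
  _        -> e
```

The other condition from the rule (that \lam{E} is not in the function position of an application) lives in the traversal that applies the rule and is omitted here.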
There are some extra conditions that ensure that this transformation does not apply infinitely (which are not necessarily part - of the conventional definition of η-abstraction). + of the conventional definition of η-expansion). Consider the following function, in Core notation, which is a fairly obvious way to specify a simple \small{ALU} (Note that it is not yet in normal form, but @@ -504,7 +506,7 @@ High -> (-) \stoplambda - There are a few subexpressions in this function to which we could possibly + There are a few sub-expressions in this function to which we could possibly apply the transformation. Since the pattern of the transformation is only the placeholder \lam{E}, any expression will match that. Whether the transformation applies to an expression is thus solely decided by the @@ -584,7 +586,7 @@ The type of this expression is \lam{Word}, so it does not match \lam{a -> b} and the transformation does not apply. Next, we have two options for the - next expression to look at: The function position and argument position of + next expression to look at: the function position and argument position of the application. The expression in the argument position is \lam{b}, which has type \lam{Word}, so the transformation does not apply. The expression in the function position is: @@ -597,7 +599,7 @@ function position (which makes the second condition false). In the same way the transformation does not apply to both components of this expression (\lam{case opcode of Low -> (+); High -> (-)} and \lam{a}), so - we will skip to the components of the case expression: The scrutinee and + we will skip to the components of the case expression: the scrutinee and both alternatives. Since the opcode is not a function, it does not apply here. @@ -608,7 +610,7 @@ We look at the \lam{} pattern, which is \lam{E}. This means we bind \lam{E} to \lam{(+)}. 
We then replace the expression - with the \lam{}, replacing all occurences of + with the \lam{}, replacing all occurrences of \lam{E} with \lam{(+)}. This gives us the replacement expression: \lam{λx.(+) x} (a lambda expression binding \lam{x}, with a body that applies the addition operator to \lam{x}). @@ -631,7 +633,7 @@ \stoplambda The other alternative is left as an exercise to the reader. The final - function, after applying η-abstraction until it does no longer apply is: + function, after applying η-expansion until it no longer applies is: \startlambda alu :: Bit -> Word -> Word -> Word @@ -666,27 +668,35 @@ lambda abstractions, let expressions and pattern matches of case alternatives). This is a slightly different notion of global versus local than what \small{GHC} uses internally, but for our purposes - the distinction \GHC makes is not useful. + the distinction \GHC\ makes is not useful. \defref{global variable} \defref{local variable} A \emph{hardware representable} (or just \emph{representable}) type or value is (a value of) a type that we can generate a signal for in hardware. For example, a bit, a vector of bits, a 32-bit unsigned word, etc. Values that are - not runtime representable notably include (but are not limited to): Types, + not run-time representable notably include (but are not limited to): types, dictionaries, functions. \defref{representable} - A \emph{built-in function} is a function supplied by the Cλash framework, whose - implementation is not valid Cλash. The implementation is of course valid - Haskell, for simulation, but it is not expressable in Cλash. - \defref{built-in function} \defref{user-defined function} + A \emph{built-in function} is a function supplied by the Cλash + framework, whose implementation is not used to generate \VHDL. 
This is + either because it is not valid Cλash (like most list functions that need + recursion) or because a Cλash implementation would be unwanted (for the + addition operator, for example, we would rather use the \VHDL\ addition + operator to let the synthesis tool decide what kind of adder to use + instead of explicitly describing one in Cλash). \defref{built-in + function} + + These are functions like \lam{map}, \lam{hwor}, \lam{+} and \lam{length}. - For these functions, Cλash has a \emph{built-in hardware translation}, so calls - to these functions can still be translated. These are functions like - \lam{map}, \lam{hwor} and \lam{length}. + For these functions, Cλash has a \emph{built-in hardware translation}, + so calls to these functions can still be translated. Built-in functions + must have a valid Haskell implementation, of course, to allow + simulation. - A \emph{user-defined} function is a function for which we do have a Cλash - implementation available. + A \emph{user-defined} function is a function for which no built-in + translation is available and whose definition will thus need to be + translated to Cλash. \defref{user-defined function} \subsubsection[sec:normalization:predicates]{Predicates} Here, we define a number of predicates that can be used below to concisely @@ -703,7 +713,7 @@ \subsection[sec:normalization:uniq]{Binder uniqueness} A common problem in transformation systems is binder uniqueness. When not considering this problem, it is easy to create transformations that mix up - bindings and cause name collisions. Take for example, the following core + bindings and cause name collisions. Take, for example, the following Core expression: \startlambda @@ -718,8 +728,8 @@ \stoplambda Now, we have replaced the \lam{a} binder with a reference to the \lam{x} - binder. No harm done here. 
But note that we see multiple occurrences of the + \lam{c} binder. The first is a binding occurrence, to which the second refers. The last, however, refers to \emph{another} instance of \lam{c}, which is bound somewhere outside of this expression. Now, if we were to apply beta reduction without taking heed of binder uniqueness, we would get: @@ -729,12 +739,12 @@ \stoplambda This is obviously not what was supposed to happen! The root of this problem is - the reuse of binders: Identical binders can be bound in different, + the reuse of binders: identical binders can be bound in different, but overlapping scopes. Any variable reference in those overlapping scopes then refers to the variable bound in the inner (smallest) scope. There is no way to refer to the variable in the outer scope. This effect is usually referred to as - \emph{shadowing}: When a binder is bound in a scope where the + \emph{shadowing}: when a binder is bound in a scope where the binder already had a value, the inner binding is said to \emph{shadow} the outer binding. In the example above, the \lam{c} binder was bound outside of the expression and in the inner lambda @@ -742,7 +752,7 @@ can be accessed. There are a number of ways to solve this. \small{GHC} has isolated this - problem to their binder substitution code, which performs \emph{deshadowing} + problem to their binder substitution code, which performs \emph{de-shadowing} during its expression traversal. This means that any binding that shadows another binding on a higher level is replaced by a new binder that does not shadow any other binding. This non-shadowing invariant is enough to prevent @@ -750,13 +760,13 @@ In our transformation system, maintaining this non-shadowing invariant is a bit harder to do (mostly due to implementation issues; the prototype - does not use \small{GHC}'s subsitution code). Also, the following points + does not use \small{GHC}'s substitution code). Also, the following points can be observed. 
\startitemize - \item Deshadowing does not guarantee overall uniqueness. For example, the + \item De-shadowing does not guarantee overall uniqueness. For example, the following (slightly contrived) expression shows the identifier \lam{x} bound in - two seperate places (and to different values), even though no shadowing + two separate places (and to different values), even though no shadowing occurs. \startlambda @@ -769,10 +779,10 @@ unique. \item When we know that all binders in an expression are unique, moving around - or removing a subexpression will never cause any binder conflicts. If we have - some way to generate fresh binders, introducing new subexpressions will not + or removing a sub-expression will never cause any binder conflicts. If we have + some way to generate fresh binders, introducing new sub-expressions will not cause any problems either. The only way to cause conflicts is thus to - duplicate an existing subexpression. + duplicate an existing sub-expression. \stopitemize Given the above, our prototype maintains a unique binder invariant. This @@ -787,7 +797,6 @@ unique. This is done by generating a fresh binder for every binder used. This also replaces binders that did not cause any conflict, but it does ensure that all binders within the function are generated by the same unique supply. - \refdef{fresh binder} \item Whenever a new binder must be generated, we generate a fresh binder that is guaranteed to be different from \emph{all binders generated so far}. This can thus never introduce duplication and will maintain the invariant. @@ -811,41 +820,41 @@ \in{section}[sec:normalization:transformation]. \subsection{General cleanup} - These transformations are general cleanup transformations, that aim to - make expressions simpler. These transformations usually clean up the - mess left behind by other transformations or clean up expressions to - expose new transformation opportunities for other transformations. 
- - Most of these transformations are standard optimizations in other - compilers as well. However, in our compiler, most of these are not just - optimizations, but they are required to get our program into intended - normal form. + \placeintermezzo{}{ + \defref{substitution notation} + \startframedtext[width=8cm,background=box,frame=no] + \startalignment[center] + {\tfa Substitution notation} + \stopalignment + \blank[medium] + + In some of the transformations in this chapter, we need to perform + substitution on an expression. Substitution means replacing every + occurrence of some expression (usually a variable reference) with + another expression. + + There have been a lot of different notations used in literature for + specifying substitution. The notation that will be used in this report + is the following: - \placeintermezzo{}{ - \defref{substitution notation} - \startframedtext[width=8cm,background=box,frame=no] - \startalignment[center] - {\tfa Substitution notation} - \stopalignment - \blank[medium] - - In some of the transformations in this chapter, we need to perform - substitution on an expression. Substitution means replacing every - occurence of some expression (usually a variable reference) with - another expression. + \startlambda + E[A=>B] + \stoplambda - There have been a lot of different notations used in literature for - specifying substitution. The notation that will be used in this report - is the following: + This means expression \lam{E} with all occurrences of \lam{A} replaced + with \lam{B}. + \stopframedtext + } - \startlambda - E[A=>B] - \stoplambda + These transformations are general cleanup transformations, that aim to + make expressions simpler. These transformations usually clean up the + mess left behind by other transformations or clean up expressions to + expose new transformation opportunities for other transformations. - This means expression \lam{E} with all occurences of \lam{A} replaced - with \lam{B}. 
- \stopframedtext - } + Most of these transformations are standard optimizations in other + compilers as well. However, in our compiler, most of these are not just + optimizations, but they are required to get our program into intended + normal form. \subsubsection[sec:normalization:beta]{β-reduction} β-reduction is a well known transformation from lambda calculus, where it is @@ -855,12 +864,12 @@ In our transformation system, this step helps to remove unwanted lambda abstractions (basically all but the ones at the top level). Other transformations (application propagation, non-representable inlining) make - sure that most lambda abstractions will eventually be reducable by + sure that most lambda abstractions will eventually be reducible by β-reduction. Note that β-reduction also works on type lambda abstractions and type applications as well. This means the substitution below also works on - type variables, in the case that the binder is a type variable and teh + type variables, in the case that the binder is a type variable and the expression applied to is a type. \starttrans @@ -901,7 +910,7 @@ desugarer or simplifier emits some bindings that cannot be normalized (e.g., calls to a \hs{Control.Exception.Base.patError}) but are not used anywhere - either. To prevent the \VHDL generation from breaking on these + either. To prevent the \VHDL\ generation from breaking on these artifacts, this transformation removes them. \todo{Do not use old-style numerals in transformations} @@ -943,7 +952,7 @@ \transexample{unusedlet}{Unused let binding removal}{from}{to} \subsubsection{Empty let removal} - This transformation is simple: It removes recursive lets that have no bindings + This transformation is simple: it removes recursive lets that have no bindings (which usually occurs when unused let binding removal removes the last binding from it). 
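The β-reduction rule and the substitution notation \lam{E[A=>B]} introduced above can be sketched in executable form on a minimal lambda AST. This sketch is illustrative only (the data type is invented) and deliberately leans on the unique-binder invariant from the previous section, so no capture-avoiding machinery is needed.

```haskell
-- Minimal lambda AST, invented for this sketch.
data Expr
  = Var String
  | Lam String Expr
  | App Expr Expr
  deriving (Eq, Show)

-- subst x n e implements e[x=>n]: replace free occurrences of x by n.
-- Under the unique-binder invariant, no binder in e can equal x, so the
-- shadowing guard in the Lam case is never hit on well-formed input.
subst :: String -> Expr -> Expr -> Expr
subst x n (Var y)
  | x == y    = n
  | otherwise = Var y
subst x n (Lam y e)
  | x == y    = Lam y e            -- x shadowed; kept only for safety
  | otherwise = Lam y (subst x n e)
subst x n (App f a) = App (subst x n f) (subst x n a)

-- β-reduction: (λx.E) M  =>  E[x=>M]; other expressions are untouched.
betaReduce :: Expr -> Expr
betaReduce (App (Lam x e) m) = subst x m e
betaReduce e                 = e
```

Note how duplicating \hs{m} into several occurrences of \hs{x} is exactly the operation that can break binder uniqueness, which is why the prototype must generate fresh binders when it duplicates sub-expressions.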
@@ -971,7 +980,7 @@ \subsubsection[sec:normalization:simplelet]{Simple let binding removal} This transformation inlines simple let bindings, that bind some - binder to some other binder instead of a more complex expression (\ie + binder to some other binder instead of a more complex expression (\ie\ a = b). This transformation is not needed to get an expression into intended @@ -1033,7 +1042,7 @@ This transform takes simple top level bindings generated by the \small{GHC} compiler. \small{GHC} sometimes generates very simple \quote{wrapper} bindings, which are bound to just a variable - reference, or contain just a (partial) function appliation with + reference, or contain just a (partial) function application with the type and dictionary arguments filled in (such as the \lam{(+)} in the example below). @@ -1069,19 +1078,19 @@ \in{Example}[ex:trans:toplevelinline] shows a typical application of the addition operator generated by \GHC. The type and dictionary arguments used here are described in - \in{Section}[section:prototype:polymorphism]. + \in{Section}[sec:prototype:coretypes]. Without this transformation, there would be a \lam{(+)} entity - in the \VHDL which would just add its inputs. This generates a + in the \VHDL\ which would just add its inputs. This generates a lot of overhead in the \VHDL, which is particularly annoying when browsing the generated RTL schematic (especially since most non-alphanumerics, like all characters in \lam{(+)}, are not - allowed in \VHDL architecture names\footnote{Technically, it is + allowed in \VHDL\ architecture names\footnote{Technically, it is allowed to use non-alphanumerics when using extended identifiers, but it seems that none of the tooling likes - extended identifiers in filenames, so it effectively does not + extended identifiers in file names, so it effectively does not work.}, so the entity would be called \quote{w7aA7f} or - something similarly meaningless and autogenerated). 
+ something similarly meaningless and auto-generated). \subsection{Program structure} These transformations are aimed at normalizing the overall structure @@ -1090,7 +1099,7 @@ of the other value definitions in let bindings and making the final return value a simple variable reference. - \subsubsection[sec:normalization:eta]{η-abstraction} + \subsubsection[sec:normalization:eta]{η-expansion} This transformation makes sure that all arguments of a function-typed expression are named, by introducing lambda expressions. When combined with β-reduction and non-representable binding inlining, all function-typed @@ -1114,7 +1123,7 @@ False -> λy.id y) x \stopbuffer - \transexample{eta}{η-abstraction}{from}{to} + \transexample{eta}{η-expansion}{from}{to} \subsubsection[sec:normalization:appprop]{Application propagation} This transformation is meant to propagate application expressions downwards @@ -1186,7 +1195,7 @@ This transformation makes all non-recursive lets recursive. In the end, we want a single recursive let in our normalized program, so all non-recursive lets can be converted. This also makes other - transformations simpler: They only need to be specified for recursive + transformations simpler: they only need to be specified for recursive let expressions (and simply will not apply to non-recursive let expressions until this transformation has been applied). @@ -1264,45 +1273,54 @@ This transformation ensures that the return value of a function is always a simple local variable reference. - This transformation only applies to the entire body of a - function instead of any subexpression in a function. This is - achieved by the contexts, like \lam{x = E}, though this is - strictly not correct (you could read this as "if there is any - function \lam{x} that binds \lam{E}, any \lam{E} can be - transformed, while we only mean the \lam{E} that is bound by - \lam{x}). - - Note that the return value is not simplified if its not - representable. 
Otherwise, this would cause a direct loop with - the inlining of unrepresentable bindings. If the return value is - not representable because it has a function type, η-abstraction - should make sure that this transformation will eventually apply. - If the value is not representable for other reasons, the - function result itself is not representable, meaning this - function is not translatable anyway. + The basic idea of this transformation is to take the body of a + function and bind it with a let expression (so the body of that let + expression becomes a variable reference that can be used as the output + port). If the body of the function happens to have lambda abstractions + at the top level (which is allowed by the intended normal + form\refdef{intended normal form definition}), we take the body of the + inner lambda instead. If that happens to be a let expression already + (which is allowed by the intended normal form), we take the body of + that let (which is not allowed to be anything but a variable reference + according to the intended normal form). + + This transformation uses the context conditions in a special way. + These contexts, like \lam{x = λv1 ... λvn.E}, are above the dotted + line and provide a condition on the environment (\ie\ they require a + certain top level binding to be present). These ensure that + expressions are only transformed when they are in the function's + \quote{return value} directly. This means the context conditions have + to be interpreted in the right way: not \quote{if there is any function + \lam{x} that binds \lam{E}, any \lam{E} can be transformed}, but we + mean only the \lam{E} that is bound by \lam{x}. + + Be careful when reading the transformations: not the entire function + from the context is transformed, just a part of it. + + Note that the return value is not simplified if it is not representable. 
+ Otherwise, this would cause a loop with the inlining of + unrepresentable bindings in + \in{section}[sec:normalization:nonrepinline]. If the return value is + not representable because it has a function type, η-expansion should + make sure that this transformation will eventually apply. If the + value is not representable for other reasons, the function result + itself is not representable, meaning this function is not translatable + anyway. \starttrans - x = E \lam{E} is representable - ~ \lam{E} is not a lambda abstraction - E \lam{E} is not a let expression - --------------------------- \lam{E} is not a local variable reference - letrec x = E in x - \stoptrans - - \starttrans - x = λv0 ... λvn.E + x = λv1 ... λvn.E \lam{n} can be zero ~ \lam{E} is representable - E \lam{E} is not a let expression - --------------------------- \lam{E} is not a local variable reference - letrec x = E in x + E \lam{E} is not a lambda abstraction + --------------------------- \lam{E} is not a let expression + letrec y = E in y \lam{E} is not a local variable reference \stoptrans \starttrans - x = λv0 ... λvn.let ... in E - ~ \lam{E} is representable - E \lam{E} is not a local variable reference - ----------------------------- - letrec x = E in x + x = λv1 ... λvn.letrec binds in E \lam{n} can be zero + ~ \lam{E} is representable + letrec binds in E \lam{E} is not a local variable reference + ------------------------------------ + letrec binds; y = E in y \stoptrans \startbuffer[from] @@ -1310,12 +1328,40 @@ \stopbuffer \startbuffer[to] - x = letrec x = add 1 2 in x + x = letrec y = add 1 2 in y \stopbuffer \transexample{retvalsimpl}{Return value simplification}{from}{to} + + \startbuffer[from] + x = λa. add 1 a + \stopbuffer + + \startbuffer[to] + x = λa. 
letrec + y = add 1 a + in + y + \stopbuffer + + \transexample{retvalsimpllam}{Return value simplification with a lambda abstraction}{from}{to} - \todo{More examples} + \startbuffer[from] + x = letrec + a = add 1 2 + in + add a 3 + \stopbuffer + + \startbuffer[to] + x = letrec + a = add 1 2 + y = add a 3 + in + y + \stopbuffer + + \transexample{retvalsimpllet}{Return value simplification with a let expression}{from}{to} \subsection[sec:normalization:argsimpl]{Representable arguments simplification} This section contains just a single transformation that deals with @@ -1326,7 +1372,7 @@ This transformation ensures that all representable arguments will become references to local variables. This ensures they will become references to local signals in the resulting \small{VHDL}, which is required due to - limitations in the component instantiation code in \VHDL (one can only + limitations in the component instantiation code in \VHDL\ (one can only assign a signal or constant to an input port). By ensuring that all arguments are always simple variable references, we always have a signal available to map to the input ports. @@ -1338,10 +1384,10 @@ \refdef{global variable} Note that references to \emph{global variables} (like a top level - function without arguments, but also an argumentless dataconstructors + function without arguments, but also argumentless data-constructors like \lam{True}) are also simplified. Only local variables generate signals in the resulting architecture. Even though argumentless - dataconstructors generate constants in generated \VHDL code and could be + data-constructors generate constants in generated \VHDL\ code and could be mapped to an input port directly, they are still simplified to make the normal form more regular. @@ -1378,16 +1424,16 @@ reference. \item (A partial application of) a top level function (either built-in or user-defined). 
The function extraction transformation described in - this section takes care of turning every functiontyped argument into + this section takes care of turning every function-typed argument into (a partial application of) a top level function. \item Any expression that is not representable and does not have a function type. Since these can be any expression, there is no transformation needed. Note that this category is exactly all expressions that are not transformed by the transformations for the - previous two categories. This means that \emph{any} core expression + previous two categories. This means that \emph{any} Core expression that is used as an argument to a built-in function will be either transformed into one of the above categories, or end up in this - categorie. In any case, the result is in normal form. + category. In any case, the result is in normal form. \stopitemize As noted, the argument simplification will handle any representable @@ -1406,7 +1452,7 @@ translation rules needed for built-in functions, since they only need to handle (partial applications of) top level functions. - Any free variables occuring in the extracted arguments will become + Any free variables occurring in the extracted arguments will become parameters to the new global function. The original argument is replaced with a reference to the new function, applied to any free variables from the original argument. @@ -1418,7 +1464,7 @@ even more complicated expressions). \starttrans - M N \lam{M} is (a partial aplication of) a built-in function. + M N \lam{M} is (a partial application of) a built-in function. --------------------- \lam{f0 ... fn} are all free local variables of \lam{N} M (x f0 ... fn) \lam{N :: a -> b} ~ \lam{N} is not a (partial application of) a top level function @@ -1440,7 +1486,10 @@ Note that the function \lam{f} will still need normalization after this. 
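As a source-level illustration of what function extraction accomplishes, the sketch below is plain Haskell on ordinary lists, with made-up names (`before`, `after`, `f`); the prototype of course performs this on Core, and the built-in \lam{map} of Cλash works on fixed-length vectors rather than lists. The lambda argument has one free local variable, which becomes an extra parameter of the extracted top level function:

```haskell
-- Hypothetical before/after view of function extraction. The lambda
-- passed to map has one free local variable, `a`; extraction lifts it
-- to the top level function `f`, applied to that former free variable.

-- Before: the argument to map is an anonymous function with free
-- variable `a`, which is not allowed as an argument to a built-in
-- function in the intended normal form.
before :: Int -> [Int] -> [Int]
before a xs = map (\x -> x + a) xs

-- After: `f` is a new top level function taking the former free
-- variable as its first parameter. The argument to map is now a
-- partial application of a top level function, as required.
f :: Int -> Int -> Int
f a x = x + a

after :: Int -> [Int] -> [Int]
after a xs = map (f a) xs
```

Both versions compute the same results on every input, which is exactly what the transformation must guarantee; `f` itself may still need further normalization, as noted above.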
-   \subsection{Case normalisation}
+   \subsection{Case normalization}
+     The transformations in this section ensure that case expressions end
+     up in normal form.
+
    \subsubsection{Scrutinee simplification}
      This transform ensures that the scrutinee of a case expression is
      always a simple variable reference.
@@ -1467,19 +1516,82 @@
          False -> b
      \stopbuffer
 
-     \transexample{letflat}{Case normalisation}{from}{to}
+     \transexample{letflat}{Case normalization}{from}{to}
 
+     \placeintermezzo{}{
+       \defref{wild binders}
+       \startframedtext[width=7cm,background=box,frame=no]
+       \startalignment[center]
+         {\tfa Wild binders}
+       \stopalignment
+       \blank[medium]
+       In a functional expression, a \emph{wild binder} refers to any
+       binder that is never referenced. This means that even though it
+       will be bound to a particular value, that value is never used.
+
+       The Haskell syntax offers the underscore as a wild binder that
+       cannot even be referenced (it can be seen as introducing a new,
+       anonymous binder every time it is used).
+
+       In these transformations, the term wild binder will sometimes be
+       used to indicate that a binder must not be referenced.
+       \stopframedtext
+     }
+
+     \subsubsection{Scrutinee binder removal}
+       This transformation removes (or rather, makes wild) the binder to
+       which the scrutinee is bound after evaluation. This is done by
+       replacing the binder with the scrutinee in all alternatives. To prevent
+       duplication of work, this transformation is only applied when the
+       scrutinee is already a simple variable reference (but the previous
+       transformation ensures this will eventually be the case). The
+       scrutinee binder itself is replaced by a wild binder (which is no
+       longer displayed).
+
+       Note that one could argue that this transformation can change the
+       meaning of the Core expression. In the regular Core semantics, a case
+       expression forces the evaluation of its scrutinee and can be used to
+       implement strict evaluation. However, in the generated \VHDL,
+       evaluation is always strict. So under the semantics we assign to the
+       Core expression (which differ only at this particular point), this
+       transformation is completely valid.
+
+       \starttrans
+       case x of bndr
+         alts
+       ----------------- \lam{x} is a local variable reference
+       case x of
+         alts[bndr=>x]
+       \stoptrans
+
+       \startbuffer[from]
+       case x of y
+         True -> y
+         False -> not y
+       \stopbuffer
+
+       \startbuffer[to]
+       case x of
+         True -> x
+         False -> not x
+       \stopbuffer
+
+       \transexample{scrutbndrremove}{Scrutinee binder removal}{from}{to}
+
+     \subsubsection{Case normalization}
        This transformation ensures that all case expressions get a form
        that is allowed by the intended normal form. This means they
-       will become one of: \refdef{intended normal form definition}
+       will become one of:
+       \startitemize
        \item An extractor case with a single alternative that picks a field
-       from a datatype, \eg \lam{case x of (a, b) -> a}.
+       from a datatype, \eg\ \lam{case x of (a, b) ->
+       a}.\defref{extractor case}
        \item A selector case with multiple alternatives and only wild
        binders, that makes a choice between expressions based on the constructor of another
-       expression, \eg \lam{case x of Low -> a; High -> b}.
+       expression, \eg\ \lam{case x of Low -> a; High ->
+       b}.\defref{selector case}
        \stopitemize
 
        For an arbitrary case that has \lam{n} alternatives, with
@@ -1488,7 +1600,7 @@
        let bindings for each of the alternatives' values and a single
        selector case to select the right value out of these.
 
-       Technically, the defintion of this transformation would require
+       Technically, the definition of this transformation would require
        that the constructor for every alternative has exactly the same
        amount (\lam{m}) of arguments, but of course this transformation
        also applies when this is not the case.
@@ -1515,7 +1627,6 @@
            Cn wn,0 ... wn,m -> yn
        \stoptrans
 
-       \refdef{wild binder}
        Note that this transformation applies to case expressions with any
       scrutinee. 
If the scrutinee is a complex
        expression, this might result in duplication of work (hardware). An
        extra condition to
@@ -1567,7 +1678,7 @@
 
    \subsubsection[sec:transformation:caseremoval]{Case removal}
      This transform removes any case expression with a single alternative and
-     only wild binders.\refdef{wild binder}
+     only wild binders.\refdef{wild binders}
 
      These \quote{useless} case expressions are usually leftovers from case
      simplification on an extractor case (see the previous example).
@@ -1595,9 +1706,9 @@
      values used in our expression representable. There are two main
      transformations that are applied to \emph{all} unrepresentable let
      bindings and function arguments. These are meant to address three
-     different kinds of unrepresentable values: Polymorphic values,
+     different kinds of unrepresentable values: polymorphic values,
      higher-order values and literals. The transformations are described
-     generically: They apply to all non-representable values. However,
+     generically: they apply to all non-representable values. However,
      non-representable values that do not fall into one of these three
      categories will be moved around by these transformations but are
      unlikely to completely disappear. They usually mean the program was not
@@ -1608,7 +1719,7 @@
      actual transformations.
 
      \subsubsection{Removing Polymorphism}
-       As noted in \in{section}[sec:prototype:polymporphism],
+       As noted in \in{section}[sec:prototype:coretypes],
        polymorphism is made explicit in Core through type and
        dictionary arguments. To remove the polymorphism from a
        function, we can simply specialize the polymorphic function for
@@ -1626,9 +1737,9 @@
        take care of exactly this.
 
        There is one case where polymorphism cannot be completely
-       removed: Built-in functions are still allowed to be polymorphic
+       removed: built-in functions are still allowed to be polymorphic
        (since we have no function body that we could properly
-       specialize). However, the code that generates \VHDL for built-in
+       specialize). However, the code that generates \VHDL\ for built-in
        functions knows how to handle this, so this is not a problem.
 
      \subsubsection[sec:normalization:defunctionalization]{Defunctionalization}
@@ -1678,7 +1789,7 @@
        \startlambda
        f 1 2
        \stoplambda
 
-       the subexpression \lam{f 1} has a function type. But this is
+       the sub-expression \lam{f 1} has a function type. But this is
        allowed, since it is inside a complete application.
        \stopitemize
@@ -1697,18 +1808,18 @@
        \stoplambda
 
        This example shows a number of higher-order values that we cannot
-       translate to \VHDL directly. The \lam{double} binder bound in the let
+       translate to \VHDL\ directly. The \lam{double} binder bound in the let
        expression has a function type, as well as both of the alternatives of
        the case expression. The first alternative is a partial application of
        the \lam{map} built-in function, whereas the second alternative is a
        lambda abstraction.
 
       To reduce all higher-order values to one of the above items, a number
-       of transformations we have already seen are used. The η-abstraction
+       of transformations we have already seen are used. The η-expansion
        transformation from \in{section}[sec:normalization:eta] ensures all
        function arguments are introduced by lambda abstraction on the highest
        level of a function. These lambda arguments are allowed because of
-       \in{item}[item:toplambda] above. After η-abstraction, our example
+       \in{item}[item:toplambda] above. After η-expansion, our example
        becomes a bit bigger:
 
        \startlambda
       λy.λq.(let double = λx. x + x in
                case y of
                  Low -> map double
                  High -> λz. z + 1
       ) q
        \stoplambda
 
-       η-abstraction also introduces extra applications (the application of
+       η-expansion also introduces extra applications (the application of
        the let expression to \lam{q} in the above example). These
        applications can then be propagated down by the application propagation
        transformation (\in{section}[sec:normalization:appprop]). In our
@@ -1738,8 +1849,8 @@
        representable type). Completely applied top level
        functions (like the first alternative) are now no longer invalid (they
        fall under \in{item}[item:completeapp] above). (Completely) applied lambda
-       abstractions can be removed by β-abstraction. For our example,
-       applying β-abstraction results in the following:
+       abstractions can be removed by β-reduction. For our example,
+       applying β-reduction results in the following:
@@ -1807,7 +1918,7 @@
        first argument and applies that function twice to the second argument.
        Again, we have made the function monomorphic for clarity, even though
        this function would be a lot more useful if it were polymorphic. The
-       function \lam{main} uses \lam{twice} to apply a lambda epression twice.
+       function \lam{main} uses \lam{twice} to apply a lambda expression twice.
 
        When faced with a user-defined function, a body is available for that
        function. This means we could create a specialized version of the
@@ -1841,26 +1952,26 @@
        twice). This is discussed in more detail in
        \in{section}[sec:normalization:duplicatework].
 
-     \subsubsection{Literals}
+     \subsubsection[sec:normalization:literals]{Literals}
        There are a limited number of literals available in Haskell and
        Core. \refdef{enumerated types} When using (enumerated) algebraic
-       datatypes, a literal is just a reference to the corresponding data
+       data-types, a literal is just a reference to the corresponding data
        constructor, which has a representable type (the algebraic datatype)
        and can be translated directly. This also holds for literals of the
        \hs{Bool} Haskell type, which is just an enumerated type.
 
        There is, however, a second type of literal that does not have a
-       representable type: Integer literals. Cλash supports using integer
+       representable type: integer literals. Cλash supports using integer
        literals for all three integer types supported (\hs{SizedWord},
       \hs{SizedInt} and \hs{RangedWord}). 
This is implemented using Haskell's \hs{Num} type class, which offers a \hs{fromInteger} method - that converts any \hs{Integer} to the Cλash datatypes. + that converts any \hs{Integer} to the Cλash data-types. - When \GHC sees integer literals, it will automatically insert calls to + When \GHC\ sees integer literals, it will automatically insert calls to the \hs{fromInteger} method in the resulting Core expression. For example, the following expression in Haskell creates a 32 bit unsigned word with the value 1. The explicit type signature is needed, since - there is no context for \GHC to determine the type from otherwise. + there is no context for \GHC\ to determine the type from otherwise. \starthaskell 1 :: SizedWord D32 @@ -1899,7 +2010,7 @@ to specialize away any unrepresentable literals that are used as function arguments. The following two transformations do exactly this. - \subsubsection{Non-representable binding inlining} + \subsubsection[sec:normalization:nonrepinline]{Non-representable binding inlining} This transform inlines let bindings that are bound to a non-representable value. Since we can never generate a signal assignment for these bindings (we cannot declare a signal assignment @@ -1948,11 +2059,11 @@ (λb -> add b 1) (add 1 x) \stopbuffer - \transexample{nonrepinline}{Nonrepresentable binding inlining}{from}{to} + \transexample{nonrepinline}{Non-representable binding inlining}{from}{to} \subsubsection[sec:normalization:specialize]{Function specialization} This transform removes arguments to user-defined functions that are - not representable at runtime. This is done by creating a + not representable at run-time. This is done by creating a \emph{specialized} version of the function that only works for one particular value of that argument (in other words, the argument can be removed). @@ -1991,7 +2102,7 @@ exactly one free variable, itself, we would only replace that argument with itself). 
-     This shows that any free local variables that are not runtime
+     This shows that any free local variables that are not run-time
      representable cannot be brought into normal form by this transform. We
      rely on an inlining or β-reduction transformation to replace such a
      variable with an expression we can propagate again.
@@ -2025,7 +2136,7 @@
      The argument that we are specializing for, \lam{Y_i}, is put inside
      the new function body. The old function body is applied to it. Since
      we use this new function only in place of an application with that
-     particular argument \lam{Y_i}, behaviour should not change.
+     particular argument \lam{Y_i}, behavior should not change.
 
      Note that the types of the arguments of our new function are taken
      from the types of the \emph{actual} arguments (\lam{T0 ... Tn}). This
@@ -2066,7 +2177,7 @@
      A possible solution to this would be to use the following alternative
      transformation, which is of course no longer normal β-reduction. The
-     followin transformation has not been tested in the prototype, but is
+     following transformation has not been tested in the prototype, but is
      given here for future reference:
@@ -2094,7 +2205,7 @@
      in y + z
      \stoplambda
 
-     Looking at this, we could imagine an alternative approach: Create a
+     Looking at this, we could imagine an alternative approach: create a
      transformation that removes let bindings that bind identical values.
      In the above expression, the \lam{y} and \lam{z} variables could be
      merged together, resulting in the more efficient expression:
@@ -2139,14 +2250,14 @@
      For this particular problem, the solutions for duplication of work
      from the previous section seem to fix the determinism of our
      transformation system as well. However, it is likely that there are
-     other occurences of this problem.
+     other occurrences of this problem.
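The binding-merging idea mentioned above can be phrased as a small rewrite over the bindings of a letrec. The following is only an illustrative sketch on a toy expression type (an assumed `Expr`, not the prototype's Core representation), using purely syntactic equality of right-hand sides: duplicate bindings are dropped, and a substitution records which dropped binders should be rewritten to which kept binders.

```haskell
-- A minimal sketch of "merge let bindings that bind identical values".
-- Expr is a toy expression type; real Core would need at least
-- alpha-equivalence instead of plain syntactic equality.
data Expr = Var String | App Expr Expr | Lit Int
  deriving (Eq, Show)

type Binds = [(String, Expr)]

-- Returns the deduplicated bindings plus a substitution mapping each
-- dropped binder to the earlier binder with the same right-hand side.
mergeBinds :: Binds -> (Binds, [(String, String)])
mergeBinds = go []
  where
    go kept [] = (reverse kept, [])
    go kept ((name, rhs) : rest) =
      -- Look for an earlier binding with a syntactically identical rhs.
      case lookup rhs [(e, n) | (n, e) <- reverse kept] of
        Just n  -> let (ks, subst) = go kept rest
                   in (ks, (name, n) : subst)      -- drop, remember rename
        Nothing -> let (ks, subst) = go ((name, rhs) : kept) rest
                   in (ks, subst)                  -- keep this binding
```

For the example above, the bindings for \lam{y} and \lam{z} would collapse into one, with \lam{z} renamed to \lam{y} everywhere; applying the substitution to the let body is left out of this sketch.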
\subsection[sec:normalization:castproblems]{Casts}
        We do not fully understand the use of cast expressions in Core, so
        there are probably expressions involving cast expressions that
        cannot be brought into intended normal form by this transformation
        system.
 
-       The uses of casts in the core system should be investigated more and
+       The uses of casts in the Core system should be investigated more and
        transformations will probably need updating to handle them in all
        cases.
 
@@ -2155,16 +2266,16 @@
        normal form definition} offers enough freedom to describe all
        valid stateful descriptions, but is not limiting enough. It is
        possible to write descriptions which are in intended normal
-       form, but cannot be translated into \VHDL in a meaningful way
+       form, but cannot be translated into \VHDL\ in a meaningful way
        (\eg, a function that swaps two sub-states in its result, or a
-       function that changes a substate itself instead of passing it to
-       a subfunction).
+       function that changes a sub-state itself instead of passing it to
+       a sub-function).
 
        It is now up to the programmer to not do anything funny with
        these state values, whereas the normalization just tries not to
        mess up the flow of state values. In practice, there are
        situations where a Core program that \emph{could} be a valid
-       stateful description is not translateable by the prototype. This
+       stateful description is not translatable by the prototype. This
        most often happens when statefulness is mixed with pattern
        matching, causing a state input to be unpacked multiple times or
        be unpacked and repacked only in some of the code paths.
 
        Without going into detail about the exact problems (of which
        there are probably more than have shown up so far), it seems
        unlikely that these problems can be solved entirely by just
-       improving the \VHDL state generation in the final stage. The
+       improving the \VHDL\ state generation in the final stage.
The normalization stage seems the best place to apply the rewriting needed to support more complex stateful descriptions. This does of course mean that the intended normal form definition must be @@ -2186,7 +2297,7 @@ When looking at the system of transformations outlined above, there are a number of questions that we can ask ourselves. The main question is of course: \quote{Does our system work as intended?}. We can split this question into a - number of subquestions: + number of sub-questions: \startitemize[KR] \item[q:termination] Does our system \emph{terminate}? Since our system will @@ -2197,12 +2308,12 @@ expanding some expression. \item[q:soundness] Is our system \emph{sound}? Since our transformations continuously modify the expression, there is an obvious risk that the final - normal form will not be equivalent to the original program: Its meaning could + normal form will not be equivalent to the original program: its meaning could have changed. \item[q:completeness] Is our system \emph{complete}? Since we have a complex system of transformations, there is an obvious risk that some expressions will not end up in our intended normal form, because we forgot some transformation. - In other words: Does our transformation system result in our intended normal + In other words: does our transformation system result in our intended normal form for all possible inputs? \item[q:determinism] Is our system \emph{deterministic}? Since we have defined no particular order in which the transformation should be applied, there is an @@ -2210,18 +2321,20 @@ \emph{different} normal forms. They might still both be intended normal forms (if our system is \emph{complete}) and describe correct hardware (if our system is \emph{sound}), so this property is less important than the previous - three: The translator would still function properly without it. + three: the translator would still function properly without it. 
\stopitemize Unfortunately, the final transformation system has only been developed in the final part of the research, leaving no more time for verifying these properties. In fact, it is likely that the current transformation system still violates some of these - properties in some cases and should be improved (or extra conditions - on the input hardware descriptions should be formulated). + properties in some cases (see + \in{section}[sec:normalization:non-determinism] and + \in{section}[sec:normalization:stateproblems]) and should be improved (or + extra conditions on the input hardware descriptions should be formulated). This is most likely the case with the completeness and determinism - properties, perhaps als the termination property. The soundness + properties, perhaps also the termination property. The soundness property probably holds, since it is easier to manually verify (each transformation can be reviewed separately). @@ -2298,12 +2411,11 @@ \todo{Define β-reduction and η-reduction?} - Note that the normal form of such a system consists of the set of nodes - (expressions) without outgoing edges, since those are the expressions to which - no transformation applies anymore. We call this set of nodes the \emph{normal - set}. The set of nodes containing expressions in intended normal - form \refdef{intended normal form} is called the \emph{intended - normal set}. + In such a graph a node (expression) is in normal form if it has no + outgoing edges (meaning no transformation applies to it). The set of + nodes without outgoing edges is called the \emph{normal set}. Similarly, + the set of nodes containing expressions in intended normal form + \refdef{intended normal form definition} is called the \emph{intended normal set}. From such a graph, we can derive some properties easily: \startitemize[KR] @@ -2377,7 +2489,7 @@ will of course leave the meaning unchanged and is thus \emph{sound}. 
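Short of a formal proof, the meaning-preservation of individual transformations can at least be spot-checked by comparing an expression with its transformed counterpart on concrete inputs. The sketch below does this for η-expansion at the plain Haskell level (the prototype manipulates Core, so this is only an analogy; `original` and its sample inputs are made up for illustration):

```haskell
-- η-expansion rewrites an expression E of function type into λx.E x.
-- Semantically these must agree on every input; here we check a
-- handful of sample points for one example function.
original :: Int -> Int
original = (* 2)

-- The η-expanded counterpart: a lambda abstraction that applies the
-- original expression to the freshly introduced argument.
etaExpanded :: Int -> Int
etaExpanded = \x -> (* 2) x

-- A tiny equivalence check over a list of sample inputs. A property
-- testing library could generate these samples instead.
agreesOn :: [Int] -> Bool
agreesOn = all (\x -> original x == etaExpanded x)
```

Such checks only build confidence for the tested inputs, of course; they do not replace reviewing each transformation, let alone a proof.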
- The current prototype has only been verified in an ad-hoc fashion + The current prototype has only been verified in an ad hoc fashion by inspecting (the code for) each transformation. A more formal verification would be more appropriate. @@ -2389,19 +2501,19 @@ sufficient for our goals (but it is a good start). It should be possible to have a single formal definition of - meaning for Core for both normal Core compilation by \GHC and for + meaning for Core for both normal Core compilation by \GHC\ and for our compilation to \VHDL. The main difference seems to be that in hardware every expression is always evaluated, while in software it is only evaluated if needed, but it should be possible to - assign a meaning to core expressions that assumes neither. + assign a meaning to Core expressions that assumes neither. Since each of the transformations can be applied to any - subexpression as well, there is a constraint on our meaning - definition: The meaning of an expression should depend only on the - meaning of subexpressions, not on the expressions themselves. For + sub-expression as well, there is a constraint on our meaning + definition: the meaning of an expression should depend only on the + meaning of sub-expressions, not on the expressions themselves. For example, the meaning of the application in \lam{f (let x = 4 in x)} should be the same as the meaning of the application in \lam{f - 4}, since the argument subexpression has the same meaning (though + 4}, since the argument sub-expression has the same meaning (though the actual expression is different). \subsection{Completeness} @@ -2418,11 +2530,11 @@ each node in the normal set is also in the intended normal set. Reasoning about our intended normal set is easier, since we know how to generate it from its definition. \refdef{intended normal - form definition}. 
+ form definition} Fortunately, we can also prove the complement (which is equivalent, since $A \subseteq B \Leftrightarrow \overline{B} - \subseteq \overline{A}$): Show that the set of nodes not in + \subseteq \overline{A}$): show that the set of nodes not in intended normal form is a subset of the set of nodes not in normal form. In other words, show that for every expression that is not in intended normal form, that there is at least one transformation @@ -2433,12 +2545,12 @@ By systematically reviewing the entire Core language definition along with the intended normal form definition (both of which have a similar structure), it should be possible to identify all - possible (sets of) core expressions that are not in intended + possible (sets of) Core expressions that are not in intended normal form and identify a transformation that applies to it. This approach is especially useful for proving completeness of our system, since if expressions exist to which none of the - transformations apply (\ie if the system is not yet complete), it + transformations apply (\ie\ if the system is not yet complete), it is immediately clear which expressions these are and adding (or modifying) transformations to fix this should be relatively easy. @@ -2447,5 +2559,24 @@ we need to check every (set of) transformation(s) separately. \todo{Perhaps do a few steps of the proofs as proof-of-concept} + \subsection{Determinism} + A well-known technique for proving determinism in lambda calculus + and other reduction systems, is using the Church-Rosser property + \cite[church36]. A reduction system has the CR property if and only if: + + \placedefinition[here]{Church-Rosser theorem} + {\lam{\forall A, B, C \exists D (A ->> B ∧ A ->> C => B ->> D ∧ C ->> D)}} + + Here, \lam{A ->> B} means \lam{A} \emph{reduces to} \lam{B}. In + other words, there is a set of transformations that can transform + \lam{A} to \lam{B}. \lam{=>} is used to mean \emph{implies}. 
+ + For a transformation system holding the Church-Rosser property, it + is easy to show that it is in fact deterministic. Showing that this + property actually holds is a harder problem, but has been + done for some reduction systems in the lambda calculus + \cite[klop80]\ \cite[barendregt84]. Doing the same for our + transformation system is probably more complicated, but not + impossible. % vim: set sw=2 sts=2 expandtab: