+ \starttrans
+ x = E
+ ~
+ x Y0 ... Yi ... Yn                                \lam{Yi} is not representable
+ ---------------------------------------------     \lam{Yi} is not a local variable reference
+ x' Y0 ... Yi-1 f0 ... fm Yi+1 ... Yn              \lam{f0 ... fm} are all free local vars of \lam{Yi}
+ ~                                                 \lam{T0 ... Tn} are the types of \lam{Y0 ... Yn}
+ x' = λ(y0 :: T0) ... λ(yi-1 :: Ti-1).
+        λf0 ... λfm.
+        λ(yi+1 :: Ti+1) ... λ(yn :: Tn).
+          E y0 ... yi-1 Yi yi+1 ... yn
+ \stoptrans
+
+ This is a fairly complex transformation. It transforms an
+ application of the function \lam{x}, where one of the arguments
+ (\lam{Yi}) is not representable. A new function \lam{x'} is created
+ that wraps the body of the old function. The body of the new
+ function consists of a number of nested lambda abstractions, one
+ for each of the original arguments that are left unchanged.
+
+ The ith argument is replaced with the free variables of
+ \lam{Yi}. Note that we reuse the same binders as those used in
+ \lam{Yi}, since we can then just use \lam{Yi} inside the new
+ function body and have all of the variables it uses in scope.
+
+ The argument that we are specializing for, \lam{Yi}, is put inside
+ the new function body, and the old function body is applied to it.
+ Since we use this new function only in place of an application with
+ that particular argument \lam{Yi}, behavior should not change.
+
+ Note that the types of the arguments of our new function are taken
+ from the types of the \emph{actual} arguments (\lam{T0 ... Tn}). This
+ means that any polymorphism in the arguments is removed, even when the
+ corresponding explicit type lambda is not removed
+ yet.
+
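+ As an example (a hypothetical sketch; the names and the \lam{Word}
+ type are chosen for illustration), consider the following function,
+ whose first argument is function-typed and thus not representable:
+
+ \startlambda
+ x = λf.λ(b :: Word). f b
+ \stoplambda
+
+ Now consider an application of \lam{x} in which the
+ non-representable argument has a single free local variable,
+ \lam{c}:
+
+ \startlambda
+ x (λ(a :: Word). a + c) d
+ \stoplambda
+
+ The transformation replaces this application with \lam{x' c d},
+ where the specialized function \lam{x'} is defined as:
+
+ \startlambda
+ x' = λc.λ(y1 :: Word).
+        (λf.λ(b :: Word). f b) (λ(a :: Word). a + c) y1
+ \stoplambda
+
+ After β-reduction, the body of \lam{x'} reduces to \lam{y1 + c}, so
+ the non-representable argument has disappeared entirely.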
+
+ \section{Unsolved problems}
+ The above system of transformations has been implemented in the
+ prototype and seems to work well for compiling both simple and more
+ complex examples of hardware descriptions. \todo{Ref christiaan?}
+ However, this normalization system has not yet seen enough review
+ and refinement to be complete, that is, to handle every Core
+ expression that is supplied to it. A number of problems have
+ already been identified; these are discussed in this section.
+
+ \subsection[sec:normalization:duplicatework]{Work duplication}
+ A possible problem of β-reduction is that it can duplicate work.
+ When the argument expression is not a simple variable reference but
+ requires computation, and the binder of the lambda abstraction is
+ used more than once in its body, more hardware might be generated
+ than strictly needed.
+
+ As an example, consider the expression:
+
+ \startlambda
+ (λx. x + x) (a * b)
+ \stoplambda
+
+ When applying β-reduction to this expression, we get:
+
+ \startlambda
+ (a * b) + (a * b)
+ \stoplambda
+
+ which of course calculates \lam{(a * b)} twice.
+
+ A possible solution to this would be to use the following
+ alternative transformation, which is of course no longer normal
+ β-reduction. This transformation has not been tested in the
+ prototype, but is given here for future reference:
+
+ \starttrans
+ (λx.E) M
+ -----------------
+ letrec x = M in E
+ \stoptrans
+
+ This does not seem like much of an improvement, but it does get rid of
+ the lambda expression (and the associated higher-order value), while
+ at the same time introducing a new let binding. Since the result of
+ every application or case expression must be bound by a let expression
+ in the intended normal form anyway, this is probably not a problem. If
+ the argument happens to be a variable reference, then simple let
+ binding removal (\in{section}[sec:normalization:simplelet]) will
+ remove it, making the result identical to that of the original
+ β-reduction transformation.
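+
+ Applied to the example above, this alternative transformation
+ directly yields an expression in which \lam{(a * b)} is calculated
+ only once:
+
+ \startlambda
+ letrec x = (a * b) in x + x
+ \stoplambda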
+
+ When applying argument simplification to the result of normal
+ β-reduction shown earlier (\lam{(a * b) + (a * b)}), we get the
+ following expression:
+
+ \startlambda
+ let y = (a * b)
+     z = (a * b)
+ in y + z
+ \stoplambda
+
+ Looking at this, we could imagine an alternative approach: a
+ transformation that removes let bindings that bind identical values.
+ In the above expression, the \lam{y} and \lam{z} bindings could be
+ merged, resulting in the more efficient expression:
+
+ \startlambda
+ let y = (a * b) in y + y
+ \stoplambda
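+
+ Such a transformation has not been worked out or tested in the
+ prototype, but a sketch is given below for future reference, where
+ \lam{M[b=>a]} denotes \lam{M} with every reference to \lam{b}
+ replaced by a reference to \lam{a}:
+
+ \starttrans
+ letrec
+   a = E
+   b = E
+ in M
+ -----------------
+ letrec
+   a = E
+ in M[b=>a]
+ \stoptrans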
+
+ \subsection[sec:normalization:non-determinism]{Non-determinism}
+ As an example, again consider the following expression:
+
+ \startlambda
+ (λx. x + x) (a * b)
+ \stoplambda
+
+ We can apply both β-reduction (\in{section}[sec:normalization:beta])
+ and argument simplification
+ (\in{section}[sec:normalization:argsimpl]) to this expression.
+
+ When applying argument simplification first and then β-reduction, we
+ get the following expression:
+
+ \startlambda
+ let y = (a * b) in y + y
+ \stoplambda
+
+ When applying β-reduction first and then argument simplification, we
+ get the following expression:
+
+ \startlambda
+ let y = (a * b)
+     z = (a * b)
+ in y + z
+ \stoplambda
+
+ As you can see, this is a different expression. This means that the
+ order in which the transformations are applied does in fact change
+ the resulting normal form, which is something that we would like to
+ avoid. In this particular case one of the alternatives is even
+ clearly more efficient, so we would of course like the more
+ efficient form to be the normal form.
+
+ For this particular problem, the solutions to the work duplication
+ problem from the previous section seem to restore the determinism
+ of our transformation system as well. However, it is likely that
+ there are other occurrences of this problem.
+
+ \subsection[sec:normalization:castproblems]{Casts}
+ We do not fully understand the use of cast expressions in Core, so
+ there are probably expressions involving casts that cannot be
+ brought into intended normal form by this transformation system.
+
+ The use of casts in Core should be investigated further, and the
+ transformations will probably need to be updated to handle casts in
+ all cases.
+
+ \subsection[sec:normalization:stateproblems]{Normalization of stateful descriptions}
+ Currently, the intended normal form definition\refdef{intended
+ normal form definition} offers enough freedom to describe all
+ valid stateful descriptions, but is not limiting enough. It is
+ possible to write descriptions which are in intended normal
+ form, but cannot be translated into \VHDL\ in a meaningful way
+ (\eg, a function that swaps two sub-states in its result, or a
+ function that changes a sub-state itself instead of passing it to
+ a sub-function).
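+
+ As a hypothetical sketch of the first case (the names are chosen
+ for illustration and any state type annotations used by the
+ prototype are omitted), consider a function whose state consists of
+ two sub-states and whose result swaps them:
+
+ \startlambda
+ f = λinput.λ(s1, s2). (input, (s2, s1))
+ \stoplambda
+
+ Here each register would have to take its next value from the other
+ register, which no longer corresponds to two independent
+ sub-states.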
+
+ It is currently up to the programmer not to do anything funny with
+ these state values, whereas the normalization just tries not to
+ mess up the flow of state values. In practice, there are
+ situations where a Core program that \emph{could} be a valid
+ stateful description is not translatable by the prototype. This
+ most often happens when statefulness is mixed with pattern
+ matching, causing a state input to be unpacked multiple times, or
+ to be unpacked and repacked in only some of the code paths.
+
+ Without going into detail about the exact problems (of which
+ there are probably more than have shown up so far), it seems
+ unlikely that these problems can be solved entirely by just
+ improving the \VHDL\ state generation in the final stage. The
+ normalization stage seems the best place to apply the rewriting
+ needed to support more complex stateful descriptions. This does
+ of course mean that the intended normal form definition must also
+ be extended to be more specific about what state handling should
+ look like in normal form.
+ \in{Section}[sec:prototype:statelimits] already contains a
+ precise description of the limitations on the use of state
+ variables, which could be adapted into the intended normal form.
+
+ \section[sec:normalization:properties]{Provable properties}