--- /dev/null
+@SysInclude { report }
+@SysInclude { tbl }
+@SysInclude { haskell }
+def @SectionLink right sec {sec @CrossLink {section @NumberOf sec}}
+
+@Report
+ @InitialSpace {tex} # Treat multiple spaces as one
+ @Title {From Haskell to VHDL}
+ @Author {Matthijs Kooijman}
+ @Institution {University of Twente}
+ @DateLine {Yes}
+ @CoverSheet {No}
+# @OptimizePages {Yes}
+//
+@Section
+ @Title {Introduction}
+@Begin
+ @LP The aim of this project is to design a structural hardware
+ description language, embedded in the functional language Haskell.
+ This means that a program written in (a subset of) Haskell will be
+ translated into hardware, using a VHDL description. The original
+ Haskell program is a structural description: the programmer
+ explicitly specifies hardware, area vs. time trade-offs are already
+ made explicit in the description, and the translator is not required
+ to do any scheduling.
+
+ @LP The compilation process is divided roughly into four steps:
+
+ @LeftList
+ @ParagraphItem { Parsing, typechecking and simplifying the Haskell
+ source(s). By using the API that GHC makes available, this step is
+ fairly easy. We can feed a Haskell source file into GHC, which then
+  performs all the necessary steps to create a @I Core module, which
+ is a simplified representation of the Haskell source file.}
+
+ @ParagraphItem { Flattening the function that is the top of the design, and
+ (recursively) any functions it requires. Flattening is the process
+ of removing nesting from expressions in the functions and making the
+ routes values take through an expression more explicit. This results
+ in a flat list of flat functions.}
+
+  @ParagraphItem { Processing the flattened functions. This step performs
+  transformations such as deciding which function arguments will be
+  used as state values, choosing names for signals, etc.}
+
+  @ParagraphItem { Translating the flat functions generated so far into
+  VHDL. This is a fairly direct translation; all information should be
+  present by now.}
+ @EndList
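The four steps above can be sketched as a simple composition of stage functions. This is a minimal sketch only: the stage names (parseToCore, flatten, process, toVHDL) and the string stand-in types are hypothetical placeholders that illustrate the intended data flow, not the actual implementation.

```haskell
-- Stand-in types; the real pipeline works on GHC Core modules,
-- flat functions and a VHDL representation.
type CoreModule = String
type FlatFuncs  = [String]
type VHDL       = String

-- Hypothetical stage functions, one per step described above.
parseToCore :: FilePath -> CoreModule
parseToCore src = "core of " ++ src

flatten :: CoreModule -> FlatFuncs
flatten core = ["flat " ++ core]

process :: FlatFuncs -> FlatFuncs
process = map ("processed " ++)

toVHDL :: FlatFuncs -> VHDL
toVHDL = unlines

-- The whole translation is then the composition of the four steps.
translate :: FilePath -> VHDL
translate = toVHDL . process . flatten . parseToCore
```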
+
+ @LP In this document we will focus on the second step. The first step
+ is handled completely by GHC and results in a Core module. The
+ representation GHC uses for the Core language is the subject of
+ @SectionLink {Core}. The second step translates
+ a Core module to a set of flat functions. The representation of a flat
+ function is the subject of @SectionLink {FlatFunctions}; that section
+ also partly discusses the fourth step. Finally, the actual
+ flattening that happens in the second step is the subject of
+ @SectionLink {Flattening}. The third step remains undiscussed for now.
+@End @Section
+
+@Section
+ @Title { The Core language }
+ @Tag {Core}
+@Begin
+ @LP GHC uses an internal representation for Haskell programs called @I
+ {Core}. Core is essentially a small programming language, but we
+ focus on the representation GHC uses. Inside GHC, a Core program is
+ stored using a number of custom data types, which we will describe
+ here.
+
+ @LP The descriptions given here are therefore very similar to Haskell
+ source, and can usually be interpreted as such. To simplify this
+ discussion, some details that are irrelevant from our point of view
+ are left out, but each of these simplifications is noted in the text.
+
+ @LP There are two main aspects of Core: its type system and its
+ binding and expression structure.
+
+ @LP So far, I have focused on the expression structure without looking closely at
+ the type system, except where needed. Thus, I will describe the
+ expression structure a bit here and provide examples to show its relation to
+ Haskell source programs.
+
+@BeginSubSections
+@SubSection
+ @Title { Core bindings }
+@Begin
+@LP A central role in the Core language is played by the @I binding. A binding binds
+one or more expressions to one or more identifiers (@I binders). A Core program is
+essentially a set of bindings, each of which binds an expression to an identifier at
+global scope (i.e., you get one binding for each function that is defined).
+
+The @I CoreBind type is defined as follows:
+
+@ID @Haskell {
+ NonRec CoreBndr CoreExpr
+ | Rec [(CoreBndr, CoreExpr)]
+}
+
+@LP The @I NonRec constructor is the simplest: it binds a single expression to a
+single binder. The @I Rec constructor takes a list of (binder, expression)
+pairs, and binds each expression to the corresponding binder. These bindings
+happen at the same scope level, meaning that each expression can use any of
+the (other) bindings, creating mutually recursive references.
+
+@LP When a program is initially compiled to Core, it consists of a single @I Rec
+binding, containing all the bindings in the original Haskell source. These
+bindings are allowed to reference each other, possibly recursively, but need
+not. One of the Core simplification passes splits this large @I Rec
+constructor into as many small @I Recs and @I NonRecs as possible.
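This splitting pass can be sketched as follows. The sketch uses a simplified CoreBind whose binders and expressions are plain strings, and a naive substring occurs check; both are assumptions for illustration only (GHC's real pass works on proper Core terms and computes strongly connected components).

```haskell
import Data.List (isInfixOf)

-- Simplified stand-in for GHC's CoreBind: binders and expressions
-- are just strings here.
data CoreBind = NonRec String String
              | Rec [(String, String)]
              deriving (Eq, Show)

-- Naive sketch of the simplification: a binding that references no
-- binder of the group (not even itself) becomes a NonRec; the rest
-- stay in (singleton) Recs.
splitRec :: CoreBind -> [CoreBind]
splitRec (Rec binds) = map split binds
  where
    names = map fst binds
    split (b, e)
      | any (`isInfixOf` e) names = Rec [(b, e)]
      | otherwise                 = NonRec b e
splitRec b = [b]
```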
+
+@LP The @I CoreBind type is used in @I Let expressions as well; see below.
+@End @SubSection
+
+@SubSection
+ @Title { Core expressions }
+@Begin
+@LP The main expression type is @I CoreExpr, @FootNote {In reality, @I CoreExpr
+is an alias for @I {Expr CoreBndr}, but different types of binders are only
+used by GHC internally.} which has the following
+constructors @FootNote{The @I Note constructor was omitted, since it only
+serves as an annotation and does not influence the semantics of the expression.}.
+
+@DP
+@Tbl
+ aformat { @Cell @I A | @Cell B }
+{
+ @Rowa
+ A {Var Id}
+ B {A variable reference}
+ @Rowa
+ A {Lit Literal}
+ B {A literal value}
+ @Rowa
+ A {App CoreExpr CoreExpr}
+ B {An application. The first expression is the function or data
+ constructor to apply, the second expression is the argument to apply
+      the function to. Function application always applies a single
+      argument; for functions with multiple arguments, applications are
+      nested.}
+ @Rowa
+ A {Lam CoreBndr CoreExpr}
+ B {A lambda expression. When this expression is applied to some argument,
+ the argument is bound to the binder given and the result is the
+      given expression. Or, slightly more formally, the result is the given
+ expression with all @I Var expressions corresponding to the binder
+ replaced with the argument the entire lambda is applied to.}
+ @Rowa
+ A {Let CoreBind CoreExpr}
+ B {
+ A let expression. The given bind is one or more binder, expression
+ pairs. Each expression is evaluated and bound to the corresponding
+      binder. The result of the entire let expression is then the given
+ expression.
+
+      @DP Or, slightly more formally, it evaluates all expressions inside the given bind and
+ returns the given expression with all @I Var expressions corresponding to
+ the binders in the given bind replaced by their respective evaluated
+ expressions.
+ }
+ @Rowa
+ A {
+ Case CoreExpr [CoreAlt]
+      @FootNote {In reality, there are also additional Type and CoreBndr
+      arguments. However, the Type argument only contains the type of the
+      entire expression, and is thus redundant and not useful for our purpose.
+      The CoreBndr is a binder to which the scrutinee is bound, but it seems
+      this is only used for forcing evaluation of a term. Therefore, these two
+      are left out of this discussion.}
+ }
+ B {
+ A case expression, which can be used to do conditional evaluation and
+      data type ``unpacking''. The given expression (the ``scrutinee'') is
+      evaluated, bound to the given binder and, depending on its value, one of
+      the CoreAlts is chosen whose expression is returned.
+
+      @DP A @I CoreAlt is a three-tuple describing the alternative:
+
+ @IndentedDisplay @I {(AltCon, [CoreBndr], CoreExpr)}
+
+      @DP Here, the @I AltCon is an alternative constructor which can be matched
+      against the scrutinee. An alternative constructor can be the default
+      alternative (which matches any value not matched by other alternative
+      constructors in the list), a literal (which matches if the scrutinee
+      evaluates to that literal) or a data constructor (which matches if the
+      value is constructed using that specific data constructor).
+
+ @DP In the last case, when a data constructor is used for the
+ alternative, the arguments to the data constructor are extracted from
+ the scrutinee and bound to the corresponding binders (from the second
+ element of the @I CoreAlt tuple).
+
+      @DP The last element of the tuple is the expression to return, with
+ the extra bindings in effect.
+ }
+ @Rowa
+ A {
+ Cast CoreExpr Type
+ }
+ B {
+      Cast the given expression to the given type. This plays an important role
+      when working with @F newtype declarations and type classes. The exact
+      workings of this expression are a bit unclear, but for now we can get away
+      with ignoring it.
+ }
+ @Rowa
+ A {
+ Type Type
+ }
+ B {
+ This expression turns a type into an expression, so it can be applied to
+ functions to implement polymorphic functions and type classes. It cannot
+ be used anywhere else.
+
+      Again, we can get away with ignoring this expression for now.
+ }
+}
+@DP
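The substitution semantics described for the Lam constructor above can be made concrete with a heavily reduced version of the expression type. The string binders and the reduce function below are illustrative assumptions for this document, not GHC code.

```haskell
-- A reduced expression type with only the constructors needed here.
data Expr = Var String
          | Lam String Expr
          | App Expr Expr
          deriving (Eq, Show)

-- Applying a lambda replaces every Var matching the binder with the
-- argument, as described for the Lam constructor.
reduce :: Expr -> Expr
reduce (App (Lam b body) arg) = subst b arg body
reduce e                      = e

subst :: String -> Expr -> Expr -> Expr
subst b arg (Var v)
  | v == b    = arg
  | otherwise = Var v
subst b arg (Lam v body)
  | v == b    = Lam v body              -- the inner binder shadows b
  | otherwise = Lam v (subst b arg body)
subst b arg (App f x) = App (subst b arg f) (subst b arg x)
```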
+@BeginSubSubSections
+ @SubSubSection
+ @Title {Examples}
+ @Tag {CoreExamples}
+ @Begin
+
+ @LP To get a clearer example of how Haskell programs translate to the Core
+ language, a number of programs are given in both Haskell syntax and the
+ equivalent core representation. These are all simple examples, focused on
+ showing a particular aspect of the Core language.
+
+ @LP In these examples, we will use the following data type. Since most of the
+ primitive datatypes and operators are involved in some kind of
+    polymorphism (even the + operator is tricky), those are less useful for
+ simple examples. Using the following @I Bit datatype will also make these
+ examples more appropriate for our goals.
+
+ @ID @Haskell {
+ data Bit = High | Low
+ }
+
+ @Display @Heading {Fixed value}
+ @LP This is a trivial example which shows a simple data constructor without
+ arguments bound to the binders ``high'' and ``low''. Note that in
+    reality, the binder and @I Id values in @I Var nodes include some extra
+    information about the module they are defined in and what kind of identifier
+    they are, but that is left out here for simplicity.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+high = High
+low = Low}}
+
+    @LP This Haskell source results in the following list of @I
+    CoreBinds.
+
+ @CNP @ID @Example
+ @Title {Core Structure}
+ {
+ @LP @Haskell {
+[ NonRec ``high'' (Var High)
+, NonRec ``low'' (Var Low)]}}
+
+ @LP In subsequent examples I will leave out the @I NonRec constructor and
+ binder name and just show the resulting @I CoreExpr.
+
+ @Display @Heading {Function arguments}
+    This example shows a function that simply returns its argument.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+wire :: Bit -> Bit
+wire a = a}}
+
+ @CNP @ID @Example
+ @Title {Core Structure}
+ {
+ @LP @Haskell {
+Lam a (Var a)}}
+
+ @Display @Heading { Pattern matching }
+    This example shows a function with a single argument that uses
+    pattern matching, resulting in a simple case statement.
+
+ @CNP @ID @Example
+    @Title {Haskell code}
+ {
+ @LP @Haskell {
+inv :: Bit -> Bit
+inv High = Low
+inv Low = High}}
+
+ @ID @Example
+ @Title {Core Structure}
+ {
+ @LP @Haskell {
+Lam ds (Case
+ (Var ds)
+ [
+ (High, [], (Var Low)),
+ (Low, [], (Var High))
+ ]
+)}}
+
+ @LP Note that in the tuples describing the alternatives, the first element
+    is a data constructor directly, while the third element is an expression
+    referring to a data constructor (hence the Var constructor).
+
+ @Display @Heading {Function application}
+ This example shows a function that applies other functions (with one and
+ two arguments) to its own arguments to do its work. It also illustrates
+ functions accepting multiple arguments.
+
+    @LP For this example we assume there is a function @I nand available that
+    returns the negated AND of its two arguments.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+or :: Bit -> Bit -> Bit
+or a b = inv (nand a b)}}
+
+ @CNP @ID @Example
+ @Title {Core structure}
+ {
+ @LP @Haskell {
+Lam a (Lam b (App
+ (Var inv)
+ (App
+ (App
+ (Var nand)
+ (Var a))
+ (Var b)
+ )
+))}}
+
+ @Display @Heading {Tuples}
+ This example shows a function that accepts a tuple of two values and
+ returns a tuple of two values again.
+
+ @LP This example assumes the availability of two functions @I and and
+ @I xor, with obvious meaning.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+-- Accepts two values and returns the sum and carry out respectively.
+half_adder :: (Bit, Bit) -> (Bit, Bit)
+half_adder (a, b) =
+ (and a b, xor a b)}}
+
+    @LP Since the core structure corresponding to this (still pretty trivial)
+    function is already quite complex, we will first show the desugared and
+ simplified version of the function, to which the core structure directly
+ corresponds.
+
+ @CNP @ID @Example
+    @Title {Simplified Haskell code}
+ {
+ @LP @Haskell {
+ half_adder = \ds ->
+      case ds of
+ (a, b) ->
+ (,) (and a b) (xor a b)
+}}
+ @CNP @ID @Example
+ @Title {Core structure}
+ {
+ @LP @Haskell
+ {
+Lam ds (Case (Var ds)
+ [(
+ (,),
+ [a, b],
+ (App
+ (App
+ (Var (,))
+ (App
+ (App
+ (Var and)
+ (Var a)
+ )
+ (Var b)
+ )
+ )
+ (App
+ (App
+ (Var xor)
+ (Var a)
+ )
+ (Var b)
+ )
+ )
+ )]
+)}}
+
+
+ @LP Note the (,) in this example, which is the data constructor for a
+ two-tuple. It can be passed two values and will construct a tuple from
+  them. @FootNote {In reality, this data constructor takes @I four arguments,
+ namely the types of each of the elements and then the elements themselves.
+ This is because the two-tuple type really takes type arguments, which are
+ made explicit on every call to its data constructors.}
+
+ @Display @Heading {Let expressions}
+ This example shows a function that uses a let expression to use a value
+ twice.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+f :: Bit -> (Bit, Bit)
+f a =
+ (x, and a x)
+ where
+ x = xor a a}}
+
+ @CNP @ID @Example
+    @Title {Simplified Haskell code}
+ {
+ @LP @Haskell {
+ f = \a ->
+ let x = xor a a in
+ (,) x (and a x)
+}}
+ @CNP @ID @Example
+ @Title {Core structure}
+ {
+ @LP @Haskell
+ {
+Lam a (Let
+  (NonRec
+    x
+    (App
+      (App
+        (Var xor)
+        (Var a)
+      )
+      (Var a)
+    )
+  )
+ (App
+ (App
+ (Var (,))
+ (Var x)
+ )
+ (App
+ (App
+ (Var and)
+ (Var a)
+ )
+ (Var x)
+ )
+ )
+)}}
+
+ @Display @Heading {Pattern matching}
+ This example shows a multiplexer, i.e., a function that selects one
+ of two inputs, based on a third input. This illustrates how pattern
+    matching is translated into Core and can be used to create
+ conditional signal connections.
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell
+ {
+mux2 :: Bit -> (Bit, Bit) -> Bit
+mux2 Low (a, b) = a
+mux2 High (a, b) = b}}
+
+ @CNP @ID @Example
+    @Title {Simplified Haskell code}
+ {
+ @LP @Haskell {
+ mux2 = \ds ds1 -> case ds of
+ Low -> case ds1 of (a, b) -> a
+ High -> case ds1 of (a, b) -> b}}
+
+ @CNP @ID @Example
+ @Title {Core structure}
+ {
+ @LP @Haskell
+ {
+Lam ds (Lam ds1 (Case (Var ds)
+ [(
+ Low,
+ [],
+ (Case (Var ds1)
+ [(
+        (,),
+ [a, b],
+ (Var a)
+ )]
+ )
+ ),(
+ High,
+ [],
+ (Case (Var ds1)
+ [(
+        (,),
+ [a, b],
+ (Var b)
+ )]
+ )
+ )]
+))}}
+ @End @SubSubSection
+@EndSubSubSections
+@End @SubSection
+@EndSubSections
+@End @Section
+
+@Section
+ @Title {Flat Functions}
+ @Tag {FlatFunctions}
+@Begin
+  @LP As an intermediate between Core and VHDL, we use "flat" functions (the name
+  is open for suggestions). A flat function is a simplified version of the
+  function, which defines a number of signals (each corresponding to a single
+  Haskell value) that can be used in a number of places and are each defined
+  as a single value. A single value in this case means something that will
+  probably result in a single VHDL signal or port, so a 2-tuple results in
+  two signals, for example.
+
+ @LP The function is flat, because there are no directly nested expressions. A
+ function application can only use signals, not nested applications or other
+ expressions. These signals can of course be defined as the result of an
+ expression or function application, so the original expressions can still be
+ expressed.
+
+  @LP A flat function is defined as a single @I FlatFunction constructor with
+  the following fields @FootNote {The implementation additionally has a @I
+  sigs field listing all the signals, but since that list is directly
+  deducible from the @I args and @I defs fields, it is left out here.}:
+
+ @DP
+ @Tbl
+ aformat { @Cell @I A | @Cell B }
+ {
+ @Rowa
+ A {args :: [SignalMap]}
+ B {A list that maps each argument to this function to a (number of)
+ signal(s). In other words, this field specifies to which signals the
+ function arguments should be assigned when this function is called.}
+ @Rowa
+ A {res :: SignalMap}
+    B {A map that maps the return value to a (number of) signal(s).
+      In other words, this field specifies which signals contain
+ the function result when this function is called.}
+ @Rowa
+ A {defs :: [SignalDef]}
+ B {Definitions for each of the signals in the function. Each @I SignalDef
+      defines one or more signals, and each signal should be defined exactly
+ once. Note that the signals in @I args don't have a @I SignalDef entry.}
+ }
+
+ @LP The type @I Signal plays a central role in a flat function. It
+  identifies a signal. Its exact representation is not that important (the
+ current implementation uses an Int), as long as it can be compared and
+ unique new @I Signals can be generated.
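As a sketch, such a supply of unique signals can be modelled with a counter for the next free number. The implementation keeps this in its hidden state; here the counter is passed explicitly, and the function names are made up for illustration.

```haskell
type Signal = Int

-- Hand out the next free signal number and advance the counter.
freshSignal :: Int -> (Signal, Int)
freshSignal next = (next, next + 1)

-- Generate n fresh signals, threading the counter explicitly.
freshSignals :: Int -> Int -> ([Signal], Int)
freshSignals 0 next = ([], next)
freshSignals n next =
  let (s, next')   = freshSignal next
      (ss, next'') = freshSignals (n - 1) next'
  in  (s : ss, next'')
```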
+
+ @LP The @I SignalMap type has the following two constructors:
+
+ @ID @Haskell {
+Single Signal
+| Tuple [SignalMap]
+ }
+
+ @LP This means we can support any type that can be translated to a signal
+ directly (currently only @I Bit and @I Bool) and tuples containing those
+ types. A signal map allows us to decompose a Haskell value into individual
+ pieces that can be used in the resulting VHDL.
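For example, decomposing a SignalMap into the individual signals it contains (say, to generate one VHDL port per signal) could look like the sketch below; the function name flattenMap is hypothetical.

```haskell
type Signal = Int

data SignalMap = Single Signal
               | Tuple [SignalMap]
               deriving (Eq, Show)

-- Collect all signals in a map, left to right.
flattenMap :: SignalMap -> [Signal]
flattenMap (Single s) = [s]
flattenMap (Tuple ms) = concatMap flattenMap ms
```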
+
+ @LP The @I SignalDef type is slightly more complex. For one or more signals,
+ it specifies how they are defined. In the resulting hardware, these can be
+ mapped to simple signal assignments or port mappings on component
+ instantiations. The type has the following constructors:
+
+ @DP @Tbl
+ aformat { @Cell @I A | @Cell B }
+ {
+ @Rowa
+ A {lines @Break {
+FApp
+ func :: HsFunction @FootNote {In the current implementation, this
+ value can also be a representation of a higher order function.
+ However, since this is not functional yet, it is left out of this
+ discussion.}
+ args :: [SignalMap]
+ res :: SignalMap
+ }}
+ B {A function application. The function that is called is identified by
+ the given @I HsFunction, and passed the values in the given
+ @I SignalMaps as arguments. The result of the function will be
+ assigned to the @I SignalMap in @I res.}
+ @Rowa
+ A {lines @Break {
+CondDef
+ cond :: Signal
+ true :: Signal
+ false :: Signal
+ res :: Signal
+ }}
+ B {A conditional signal definition. If the signal pointed to by @I cond
+ is True, @I true is assigned to @I res, else @I false is assigned to
+ @I res. The @I cond signal must thus be of type @I Bool.
+
+      This keeps conditional assignment very simple; all complexity in the
+ conditions will be put into @I SignalExpr in @I UncondDef. }
+ @Rowa
+ A {lines @Break {
+UncondDef
+ src :: SignalExpr
+ dst :: Signal}}
+ B {Unconditional signal definition. In the most basic form (assigning a
+ simple signal to another signal) this is not often useful (only in
+ specific cases where two signals cannot be merged, when assigning an
+ output port to an input port for example).
+
+ The more interesting use for unconditional definition is when the
+      @I SignalExpr is more complicated and allows for more complex
+      expressions. }
+ }
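To illustrate the intended hardware mapping, a CondDef could be rendered as a VHDL conditional signal assignment along the following lines. The sigName naming scheme and the rendering function are made-up assumptions, not the actual translator.

```haskell
type Signal = Int

-- Only the CondDef shape from the table above is modelled here.
data SignalDef = CondDef
  { cond, true, false, res :: Signal }

-- Hypothetical naming scheme for signals in the generated VHDL.
sigName :: Signal -> String
sigName n = "sig_" ++ show n

-- Render a CondDef as a VHDL conditional signal assignment.
condDefToVHDL :: SignalDef -> String
condDefToVHDL (CondDef c t f r) =
  sigName r ++ " <= " ++ sigName t ++ " when " ++ sigName c
            ++ " = true else " ++ sigName f ++ ";"
```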
+
+ @LD @Heading {Identifying functions}
+ @LP To identify a function to call we use the @I HsFunction type. This
+ type is used for two purposes. First, it contains the name of the
+ function to call, so the corresponding @I CoreBndr and @I CoreExpr
+ can be found in the current module. Secondly, an @I HsFunction
+ contains information on the use of each argument and the return
+ value. In particular, it stores for each (part of an) argument or
+  return value whether it should become an input or output Port, or a state
+ internal to the called function.
+
+ @LP The @I HsFunction type looks as follows:
+
+ @ID @Haskell {
+HsFunction {
+ func :: String,
+ args :: [UseMap],
+ res :: UseMap
+}}
+  @LP Here, the UseMap type is defined similarly to the @I SignalMap
+ type @FootNote{In reality, both @I SignalMap and @I UseMap are
+ generalized into a type @I HsValueMap which takes an element type as
+ a type parameter}:
+
+ @ID @Haskell {
+Single Use
+| Tuple [UseMap]}
+
+  @LP The @I Use type is defined as follows. The @I StateId argument
+  to the @I State constructor is a unique identifier for the state
+  variable, which can be used to match input and output state
+  variables together. This state id should be unique within either the
+  arguments or the return value, but should be matched between them
+  (@I i.e., there should be exactly one @I State use with id 0 in the
+  arguments, and exactly one in the return value) @FootNote{In the
+  current implementation, there is a third constructor in the @I Use
+  type for arguments that get higher order functions passed in. Since
+  this is not functional yet, it is left out of this discussion}.
+
+ @ID @Haskell {
+Port
+| State StateId}
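As an example of this matching rule, a hypothetical register-like function could carry state id 0 both in one of its arguments and in its result, marking the two as the same state variable.

```haskell
type StateId = Int

data Use = Port
         | State StateId
         deriving (Eq, Show)

data UseMap = Single Use
            | Tuple [UseMap]
            deriving (Eq, Show)

data HsFunction = HsFunction
  { func :: String
  , args :: [UseMap]
  , res  :: UseMap
  } deriving (Eq, Show)

-- Hypothetical function: a port plus state id 0 in, a port plus the
-- same state id 0 out.
register :: HsFunction
register = HsFunction "register"
  [Tuple [Single Port, Single (State 0)]]
  (Tuple [Single Port, Single (State 0)])

-- Collect the state ids in a UseMap, to check the matching rule.
stateIds :: UseMap -> [StateId]
stateIds (Single (State n)) = [n]
stateIds (Single Port)      = []
stateIds (Tuple ms)         = concatMap stateIds ms
```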
+
+  @LP The last type used in this description is @I SignalExpr. It allows for certain
+  expressions in the resulting VHDL; currently signal references, literals and
+  equality comparisons. The type has the following constructors:
+
+ @DP @Tbl
+ aformat { @Cell @I A | @Cell B }
+ {
+ @Rowa
+ A {
+ Sig Signal
+ }
+ B {
+ A simple signal reference.
+ }
+ @Rowa
+ A {
+ Literal String
+ }
+ B {
+ A literal value. The given string is the resulting VHDL value. Perhaps
+ this should better be something less VHDL-specific, but this works for
+ now.
+ }
+ @Rowa
+ A {
+ Eq Signal Signal
+ }
+ B {
+ Equality comparison. This is the only comparison supported for now and
+ results in a @I Bool signal that can be used in a @I CondDef.
+ }
+ }
+
+ @BeginSubSections
+ @SubSection
+ @Title {Examples}
+ @Begin
+      @LP To ease the understanding of this data structure, we will show
+      the flattened equivalents of some of the examples given in
+      @SectionLink {CoreExamples}. When reading these examples, don't
+      forget that the signal ids themselves are interchangeable, as
+      long as they remain unique and still link the right expressions
+      together (@I i.e., the actual numbers are not relevant).
+ @Display @Heading {Fixed value}
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {high = High}}
+
+ @LP This Haskell source results in the following @I FlatFunction.
+
+ @CNP @ID @Example
+ @Title {Flat function}
+ {
+ @LP @Haskell {
+FlatFunction {
+ sigs = [0],
+ args = [],
+ res = (Single 0),
+  defs = [UncondDef (Literal "'1'") 0]
+}}}
+
+ @Display @Heading {Function application}
+
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell {
+or :: Bit -> Bit -> Bit
+or a b = inv (nand a b)}}
+
+ @CNP @ID @Example
+ @Title {Flat function}
+ {
+ @LP @Haskell {
+FlatFunction {
+ sigs = [0, 1, 2, 3],
+ args = [Single 0, Single 1],
+ res = (Single 3),
+ defs = [
+ (FApp
+ (HsFunction "nand" [Single Port, Single Port] (Single Port))
+ [Single 0, Single 1]
+ (Single 2)
+ ),
+ (FApp
+ (HsFunction "inv" [Single Port] (Single Port))
+ [Single 2]
+ (Single 3)
+ )
+ ]
+}}}
+
+ @Display @Heading {Pattern matching}
+ @CNP @ID @Example
+ @Title {Haskell code}
+ {
+ @LP @Haskell
+ {
+mux2 :: Bit -> (Bit, Bit) -> Bit
+mux2 Low (a, b) = a
+mux2 High (a, b) = b}}
+
+ @CNP @ID @Example
+ @Title {Flat function}
+ {
+ @LP @Haskell {
+FlatFunction {
+ sigs = [0, 1, 2, 3, 4, 5],
+ args = [Single 0, Tuple [Single 3, Single 4]],
+ res = (Single 5),
+ defs = [
+    (UncondDef (Literal "'1'") 1),
+ (UncondDef (Eq 0 1) 2),
+ (CondDef 2 4 3 5)
+ ]
+}}}
+ @End @SubSection
+ @EndSubSections
+
+@End @Section
+
+@Section
+ @Title {Flattening of functions}
+ @Tag {Flattening}
+@Begin
+ @LP The most interesting part of all this is of course the translation of our
+ Haskell functions (in Core format) to flattened functions. From there, the
+  translation to VHDL is relatively easy.
+
+ @LP This section details the "flattening" of a function. To this end, we define
+ a function @I flattenFunction which translates a (nonrecursive) binding to a
+ flat function.
+
+ @LP The syntax used for these functions is not quite Haskell. It is similar,
+ but some functions can have side effects or don't return any value. They
+ are probably best read as if the lines are sequentially executed (though
+ pattern matching does happen as is normal in Haskell). This notation is
+ probably closest to the monadic "do" notation in Haskell, but with the
+ special syntax for that ("do", "<-" and "return") implied.
+
+ @LP @Haskell {
+flattenFunction (NonRec b expr) =
+ (args, res) = flattenExpr expr
+ defs = getDefs
+ FlatFunction args res defs
+ }
+
+  @LP This is a rather simple function, but it does need some explanation. First
+  of all, the flattenExpr function has some hidden state (in the
+  implementation, this state is made explicit using a @I State @I Monad).
+ To understand the above function, it is sufficient to know that the @I
+ getDefs function simply returns all the definitions that were added in @I
+ flattenExpr using the @I addDef function.
+
+  @LP Similarly, in the function definitions below, the @I insertBind function
+  inserts a signal bind (which binds a Haskell identifier to a signal in the
+  flat function) into the current table of binds. This bind remains in the
+  table for any other functions called by the caller of @I insertBind, but is
+  removed again when that caller "returns". The @I lookupBind function looks
+  up such a bind in this table.
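A sketch of this bind table as a plain association list: insertBind extends the table and lookupBind searches it. The actual implementation keeps the table in its hidden state and removes binds again when the caller returns; this explicit version only approximates that scoping.

```haskell
type Signal = Int

data SignalMap = Single Signal
               | Tuple [SignalMap]
               deriving (Eq, Show)

-- The bind table maps Haskell identifiers (strings here) to signal maps.
type BindTable = [(String, SignalMap)]

-- Prepending means the newest bind shadows older ones with the same name.
insertBind :: String -> SignalMap -> BindTable -> BindTable
insertBind b m table = (b, m) : table

lookupBind :: String -> BindTable -> SignalMap
lookupBind b table =
  case lookup b table of
    Just m  -> m
    Nothing -> error ("unbound identifier: " ++ b)
```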
+
+  @LP Now, the actual work is done by the @I flattenExpr function, which processes
+  an arbitrary expression and returns a tuple with argument @I SignalMaps and
+  a result @I SignalMap. The argument @I SignalMaps map each argument that
+  should be passed to the expression (when it has a function type; for
+  non-function types, this is an empty list). The signals these maps map to are the
+ signals to which the arguments are assumed to have been assigned. In the
+ final @I FlatFunction, these maps are used to tell to which signals the
+ input ports should be bound.
+
+  @LP The result @I SignalMap is the inverse of the argument @I SignalMaps. It
+ contains the signals that the results of the expression are bound to.
+
+ @Display @Heading {Lambda expressions}
+ @LP @Haskell {
+flattenExpr (Lam b expr) =
+ arg = genSignals b
+ insertBind (b <= arg)
+ (args, res) = flattenExpr expr
+ (arg:args, res)
+ }
+
+ @Display @Heading {Variable references}
+ @LP @Haskell {
+flattenExpr (Var id) =
+  case id of
+    ``localid'' ->
+      ([], lookupBind id)
+    () ->
+      ([], Tuple [])
+    ``datacon'' ->
+      res@(Single s) = genSignals id
+      lit = Literal (dataconToLit id)
+      addDef UncondDef {src = lit, dst = s}
+      ([], res)
+dataconToLit datacon =
+  case datacon of
+    Low -> "'0'"
+    High -> "'1'"
+    True -> "true"
+    False -> "false"
+ }
+
+ @LP Here, @I genSignals generates a map of new (unique) signals. The structure
+  of the map is based on the (Haskell) type of whatever is passed to it, such
+ as the value bound to the identifier in this case.
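A sketch of genSignals itself, driven by a simple stand-in for Haskell types (only Bit and tuples, as elsewhere in this document) and an explicitly threaded counter for fresh signal numbers. Both the Ty type and the counter threading are assumptions for illustration; the implementation works on real GHC types and its hidden state.

```haskell
type Signal = Int

data SignalMap = Single Signal
               | Tuple [SignalMap]
               deriving (Eq, Show)

-- Stand-in for the Haskell type of a binder.
data Ty = BitTy
        | TupleTy [Ty]

-- Build a map with the same shape as the type, drawing fresh signal
-- numbers from the counter.
genSignals :: Ty -> Int -> (SignalMap, Int)
genSignals BitTy        next = (Single next, next + 1)
genSignals (TupleTy ts) next = (Tuple ms, next')
  where
    (ms, next') = go ts next
    go []     n = ([], n)
    go (t:tt) n =
      let (m, n')   = genSignals t n
          (mm, n'') = go tt n'
      in  (m : mm, n'')
```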
+
+  @LP The @I ``localid'' and @I ``datacon'' patterns should be read as matching
+  when the id being matched refers to a local identifier (local to the
+  function or expression containing it) or to a data constructor (such as @I True,
+  @I False, @I High or @I (,)), respectively.
+
+
+ @Display @Heading {Let expressions}
+ @LP @Haskell {
+flattenExpr (Let (NonRec b bexpr) expr) =
+ (b_args, b_res) = flattenExpr bexpr
+ insertBind (b <= b_res)
+ flattenExpr expr
+ }
+
+ @Display @Heading {Case expressions}
+ @LP @Haskell {
+flattenExpr (Case scrut alts) =
+ (args, res) = flattenExpr scrut
+ flattenAlts res alts
+
+flattenAlt scrut alt =
+ TODO
+
+flattenAlts scrut [alt] =
+ flattenAlt scrut alt
+
+flattenAlts scrut (alt:alts) =
+ (true_args, true_res) = flattenAlt scrut alt
+ (false_args, false_res) = flattenAlts scrut alts
+ (altcon, bind_vars, expr) = alt
+  lit = Literal (dataconToLit altcon)
+
+ TODO
+ }
+
+ @Display @Heading {Function applications}
+ @LP @Haskell {
+flattenExpr (App f arg) =
+ TODO
+ }
+
+
+@End @Section
+
+# vim: set sts=2 sw=2 expandtab: