diff --git "a/c\316\273ash.lhs" "b/c\316\273ash.lhs"
index dd08c38..2adeed6 100644
--- "a/c\316\273ash.lhs"
+++ "b/c\316\273ash.lhs"
@@ -354,9 +354,10 @@
 \newenvironment{xlist}[1][\rule{0em}{0em}]{%
 	\begin{list}{}{%
 		\settowidth{\labelwidth}{#1:}
-		\setlength{\labelsep}{0.5cm}
+		\setlength{\labelsep}{0.5em}
 		\setlength{\leftmargin}{\labelwidth}
 		\addtolength{\leftmargin}{\labelsep}
+		\addtolength{\leftmargin}{\parindent}
 		\setlength{\rightmargin}{0pt}
 		\setlength{\listparindent}{\parindent}
 		\setlength{\itemsep}{0 ex plus 0.2ex}
@@ -480,9 +481,9 @@ ForSyDe1,Wired,reFLect}. The idea of using functional languages for hardware
 descriptions started in the early 1980s \cite{Cardelli1981, muFP,DAISY,FHDL},
 a time which also saw the birth of the currently popular hardware description
 languages such as \VHDL. The merit of using a functional language to describe
-hardware comes from the fact that basic combinatorial circuits are equivalent
-to mathematical functions and that functional languages are very good at
-describing and composing mathematical functions.
+hardware comes from the fact that combinatorial circuits can be directly
+modeled as mathematical functions and that functional languages are very good
+at describing and composing mathematical functions.
 
 In an attempt to decrease the amount of work involved with creating all the
 required tooling, such as parsers and type-checkers, many functional hardware
@@ -506,15 +507,15 @@ capture certain language constructs, such as Haskell's choice elements
 available in the functional hardware description languages that are embedded
 in Haskell as domain specific languages. As far as the authors know, such
 extensive support for choice-elements is new in the domain of functional
-hardware description language. As the hardware descriptions are plain Haskell
-functions, these descriptions can be compiled for simulation using using the
-optimizing Haskell compiler \GHC.
+hardware description languages. As the hardware descriptions are plain Haskell
+functions, these descriptions can be compiled for simulation using an
+optimizing Haskell compiler such as the Glasgow Haskell Compiler (\GHC).
 
 Where descriptions in a conventional hardware description language have an
 explicit clock for the purpose of state and synchronicity, the clock is implied
-in this research. The functions describe the behavior of the hardware between
+in this research. A developer describes the behavior of the hardware between
 clock cycles; as such, only synchronous systems can be described. Many
-functional hardware description models signals as a stream of all values over
+functional hardware description languages model signals as a stream of all values over
 time; state is then modeled as a delay on this stream of values. The approach
 taken in this research is to make the current state of a circuit part of the
 input of the function and the updated state part of the output.
@@ -524,24 +525,30 @@ functional hardware description language must eventually be converted into a
 netlist.
This research also features a prototype translator called \CLaSH\
(pronounced: clash), which converts the Haskell code to equivalently behaving
synthesizable \VHDL\ code, ready to be converted to an actual netlist format
-by an optimizing \VHDL\ synthesis tool.
+by an (optimizing) \VHDL\ synthesis tool.
 
 \section{Hardware description in Haskell}
 
   \subsection{Function application}
     The basic syntactic elements of a functional program are functions
     and function application. These have a single obvious translation to a 
-    netlist: every function becomes a component, every function argument is an
-    input port and the result value is of a function is an output port. This
-    output port can have a complex type (such as a tuple), so having just a
-    single output port does not create a limitation. Each function application
-    in turn becomes a component instantiation. Here, the result of each
-    argument expression is assigned to a signal, which is mapped to the
-    corresponding input port. The output port of the function is also mapped
-    to a signal, which is used as the result of the application itself.
+    netlist format:
+    \begin{inparaenum}
+      \item every function is translated to a component,
+      \item every function argument is translated to an input port,
+      \item the result value of a function is translated to an output port,
+            and
+      \item function applications are translated to component instantiations.
+    \end{inparaenum}
+    The output port can have a complex type (such as a tuple), so having just
+    a single output port does not pose any limitation. The arguments of a
+    function application are assigned to signals, which are then mapped to
+    the corresponding input ports of the component. The output port of the
+    function is also mapped to a signal, which is used as the result of the
+    application itself.
 
     Since every top level function generates its own component, the
-    hierarchy of function calls is reflected in the final netlist aswell, 
+    hierarchy of function calls is reflected in the final netlist,
     creating a hierarchical description of the hardware. This separation in
     different components makes the resulting \VHDL\ output easier to read and
     debug.
@@ -578,15 +585,20 @@ by an optimizing \VHDL\ synthesis tool.
     In Haskell, choice can be achieved by a large set of language constructs,
     consisting of: \hs{case} constructs, \hs{if-then-else} constructs,
     pattern matching, and guards. The easiest of these are the \hs{case}
-    constructs (and \hs{if} expressions, which can be very directly translated
-    to \hs{case} expressions). A \hs{case} expression can in turn simply be
-    translated to a conditional assignment in \VHDL, where the conditions use
-    equality comparisons against the constructors in the \hs{case}
-    expressions. We can see two versions of a contrived example, the first
+    constructs (\hs{if} expressions can be very directly translated to
+    \hs{case} expressions). A \hs{case} construct is translated to a
+    multiplexer, where the control value is linked to the selection port and
+    the output of each case is linked to the corresponding input port on the
+    multiplexer.
+    % A \hs{case} expression can in turn simply be translated to a conditional
+    % assignment in \VHDL, where the conditions use equality comparisons
+    % against the constructors in the \hs{case} expressions.
+    We can see two versions of a contrived example below: the first
     using a \hs{case} construct and the other using an \hs{if-then-else}
     construct.
The example sums two values when they are equal or non-equal (depending on the predicate given) and returns 0 - otherwise. + otherwise. Both versions of the example roughly correspond to the same + netlist, which is depicted in \Cref{img:choice}. \begin{code} sumif pred a b = case pred of @@ -606,9 +618,6 @@ by an optimizing \VHDL\ synthesis tool. if a != b then a + b else 0 \end{code} - Both versions of the example correspond to the same netlist, which is - depicted in \Cref{img:choice}. - \begin{figure} \centerline{\includegraphics{choice-case}} \caption{Choice - sumif} @@ -619,22 +628,19 @@ by an optimizing \VHDL\ synthesis tool. matching. A function can be defined in multiple clauses, where each clause specifies a pattern. When the arguments match the pattern, the corresponding clause will be used. Expressions can also contain guards, - where the expression is only executed if the guard evaluates to true. A - pattern match (with optional guards) can be to a conditional assignments - in \VHDL, where the conditions are an equality test of the argument and - one of the patterns (combined with the guard if was present). A third - version of the earlier example, using both pattern matching and guards, - can be seen below: + where the expression is only executed if the guard evaluates to true. Like + \hs{if-then-else} constructs, pattern matching and guards have a + (straightforward) translation to \hs{case} constructs and can as such be + mapped to multiplexers. A third version of the earlier example, using both + pattern matching and guards, can be seen below. The version using pattern + matching and guards also has roughly the same netlist representation + (\Cref{img:choice}) as the earlier two versions of the example. \begin{code} sumif Eq a b | a == b = a + b sumif Neq a b | a != b = a + b sumif _ _ _ = 0 \end{code} - - The version using pattern matching and guards has the same netlist - representation (\Cref{img:choice}) as the earlier two versions of the - example. % \begin{figure} % \centerline{\includegraphics{choice-ifthenelse}} @@ -643,14 +649,17 @@ by an optimizing \VHDL\ synthesis tool. % \end{figure} \subsection{Types} - Haskell is a strongly-typed language, meaning that the type of a variable - or function is determined at compile-time. Not all of Haskell's typing - constructs have a clear translation to hardware, as such this section will - only deal with the types that do have a clear correspondence to hardware. - The translatable types are divided into two categories: \emph{built-in} - types and \emph{user-defined} types. Built-in types are those types for - which a direct translation is defined within the \CLaSH\ compiler; the - term user-defined types should not require any further elaboration. + Haskell is a statically-typed language, meaning that the type of a + variable or function is determined at compile-time. Not all of Haskell's + typing constructs have a clear translation to hardware, as such this + section will only deal with the types that do have a clear correspondence + to hardware. The translatable types are divided into two categories: + \emph{built-in} types and \emph{user-defined} types. Built-in types are + those types for which a direct translation is defined within the \CLaSH\ + compiler; the term user-defined types should not require any further + elaboration. The translatable types are also inferable by the compiler, + meaning that a developer does not have to annotate every function with a + type signature. 
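+
+    As a small, purely illustrative example, the following (hypothetical)
+    definition carries no type signature; the compiler infers a parametric
+    type such as \hs{Num a => a -> a -> a} for it from the use of the
+    numerical operators:
+
+    \begin{code}
+    sumsquare a b = a * a + b * b
+    \end{code}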
% Translation of two most basic functional concepts has been % discussed: function application and choice. Before looking further @@ -668,6 +677,8 @@ by an optimizing \VHDL\ synthesis tool. % using translation rules that are discussed later on. \subsubsection{Built-in types} + The following types have direct translation defined within the \CLaSH\ + compiler: \begin{xlist} \item[\bf{Bit}] This is the most basic type available. It can have two values: @@ -702,7 +713,9 @@ by an optimizing \VHDL\ synthesis tool. This is a vector type that can contain elements of any other type and has a fixed length. The \hs{Vector} type constructor takes two type arguments: the length of the vector and the type of the elements - contained in it. + contained in it. The short-hand notation used for the vector type in + the rest of paper is: \hs{[a|n]}. Where the \hs{a} is the element + type, and \hs{n} is the length of the vector. % The state type of an 8 element register bank would then for example % be: @@ -716,12 +729,12 @@ by an optimizing \VHDL\ synthesis tool. % (The 32 bit word type as defined above). In other words, the % \hs{RegisterState} type is a vector of 8 32-bit words. A fixed size % vector is translated to a \VHDL\ array type. - \item[\bf{RangedWord}] + \item[\bf{Index}] This is another type to describe integers, but unlike the previous two it has no specific bit-width, but an upper bound. This means that its range is not limited to powers of two, but can be any number. - A \hs{RangedWord} only has an upper bound, its lower bound is - implicitly zero. The main purpose of the \hs{RangedWord} type is to be + An \hs{Index} only has an upper bound, its lower bound is + implicitly zero. The main purpose of the \hs{Index} type is to be used as an index to a \hs{Vector}. % \comment{TODO: Perhaps remove this example?} To define an index for @@ -742,194 +755,187 @@ by an optimizing \VHDL\ synthesis tool. \subsubsection{User-defined types} There are three ways to define new types in Haskell: algebraic data-types with the \hs{data} keyword, type synonyms with the \hs{type} - keyword and datatype renamings with the \hs{newtype} keyword. \GHC\ - offers a few more advanced ways to introduce types (type families, - existential typing, {\small{GADT}}s, etc.) which are not standard - Haskell. These are not currently supported. + keyword and datatype renaming constructs with the \hs{newtype} keyword. + \GHC\ offers a few more advanced ways to introduce types (type families, + existential typing, {\small{GADT}}s, etc.) which are not standard Haskell. + As it is currently unclear how these advanced type constructs correspond + with hardware, they are for now unsupported by the \CLaSH\ compiler Only an algebraic datatype declaration actually introduces a - completely new type, for which we provide the \VHDL\ translation - below. Type synonyms and renamings only define new names for - existing types, where synonyms are completely interchangeable and - renamings need explicit conversiona. Therefore, these do not need - any particular \VHDL\ translation, a synonym or renamed type will - just use the same representation as the original type. The - distinction between a renaming and a synonym does no longer matter - in hardware and can be disregarded in the generated \VHDL. For algebraic - types, we can make the following distinction: + completely new type. 
Type synonyms and renaming constructs only define new + names for existing types, where synonyms are completely interchangeable + and renaming constructs need explicit conversions. Therefore, these do not + need any particular translation, a synonym or renamed type will just use + the same representation as the original type. For algebraic types, we can + make the following distinctions: \begin{xlist} \item[\bf{Single constructor}] Algebraic datatypes with a single constructor with one or more fields, are essentially a way to pack a few values together in a - record-like structure. An example of such a type is the following pair - of integers: - + record-like structure. Haskell's built-in tuple types are also defined + as single constructor algebraic types An example of a single + constructor type is the following pair of integers: \begin{code} data IntPair = IntPair Int Int \end{code} - - Haskell's builtin tuple types are also defined as single - constructor algebraic types and are translated according to this - rule by the \CLaSH\ compiler. These types are translated to \VHDL\ - record types, with one field for every field in the constructor. + % These types are translated to \VHDL\ record types, with one field + % for every field in the constructor. \item[\bf{No fields}] Algebraic datatypes with multiple constructors, but without any fields are essentially a way to get an enumeration-like type containing alternatives. Note that Haskell's \hs{Bool} type is also defined as an enumeration type, but we have a fixed translation for - that. These types are translated to \VHDL\ enumerations, with one - value for each constructor. This allows references to these - constructors to be translated to the corresponding enumeration value. + that. An example of such an enum type is the type that represents the + colors in a traffic light: + \begin{code} + data TrafficLight = Red | Orange | Green + \end{code} + % These types are translated to \VHDL\ enumerations, with one + % value for each constructor. This allows references to these + % constructors to be translated to the corresponding enumeration + % value. \item[\bf{Multiple constructors with fields}] Algebraic datatypes with multiple constructors, where at least one of these constructors has one or more fields are not currently supported. \end{xlist} - \subsection{Polymorphic functions} - A powerful construct in most functional language is polymorphism. - This means the arguments of a function (and consequentially, values - within the function as well) do not need to have a fixed type. - Haskell supports \emph{parametric polymorphism}, meaning a - function's type can be parameterized with another type. - - As an example of a polymorphic function, consider the following - \hs{append} function's type: - - \comment{TODO: Use vectors instead of lists?} + \subsection{Polymorphism} + A powerful construct in most functional languages is polymorphism, it + allows a function to handle values of different data types in a uniform + way. Haskell supports \emph{parametric polymorphism}~\cite{polymorphism}, + meaning functions can be written without mention of any specific type and + can be used transparently with any number of new types. + As an example of a parametric polymorphic function, consider the type of + the following \hs{append} function, which appends an element to a vector: \begin{code} - append :: [a] -> a -> [a] + append :: [a|n] -> a -> [a|n + 1] \end{code} This type is parameterized by \hs{a}, which can contain any type at - all. 
This means that append can append an element to a list, - regardless of the type of the elements in the list (but the element - added must match the elements in the list, since there is only one - \hs{a}). - - This kind of polymorphism is extremely useful in hardware designs to - make operations work on a vector without knowing exactly what elements - are inside, routing signals without knowing exactly what kinds of - signals these are, or working with a vector without knowing exactly - how long it is. Polymorphism also plays an important role in most - higher order functions, as we will see in the next section. - - The previous example showed unconstrained polymorphism \comment{(TODO: How - is this really called?)}: \hs{a} can have \emph{any} type. - Furthermore,Haskell supports limiting the types of a type parameter to - specific class of types. An example of such a type class is the - \hs{Num} class, which contains all of Haskell's numerical types. - - Now, take the addition operator, which has the following type: - + all. This means that \hs{append} can append an element to a vector, + regardless of the type of the elements in the list (as long as the type of + the value to be added is of the same type as the values in the vector). + This kind of polymorphism is extremely useful in hardware designs to make + operations work on a vector without knowing exactly what elements are + inside, routing signals without knowing exactly what kinds of signals + these are, or working with a vector without knowing exactly how long it + is. Polymorphism also plays an important role in most higher order + functions, as we will see in the next section. + + Another type of polymorphism is \emph{ad-hoc + polymorphism}~\cite{polymorphism}, which refers to polymorphic + functions which can be applied to arguments of different types, but which + behave differently depending on the type of the argument to which they are + applied. In Haskell, ad-hoc polymorphism is achieved through the use of + type classes, where a class definition provides the general interface of a + function, and class instances define the functionality for the specific + types. An example of such a type class is the \hs{Num} class, which + contains all of Haskell's numerical operations. A developer can make use + of this ad-hoc polymorphism by adding a constraint to a parametrically + polymorphic type variable. Such a constraint indicates that the type + variable can only be instantiated to a type whose members supports the + overloaded functions associated with the type class. + + As an example we will take a look at type signature of the function + \hs{sum}, which sums the values in a vector: \begin{code} - (+) :: Num a => a -> a -> a + sum :: Num a => [a|n] -> a \end{code} This type is again parameterized by \hs{a}, but it can only contain - types that are \emph{instances} of the \emph{type class} \hs{Num}. - Our numerical built-in types are also instances of the \hs{Num} + types that are \emph{instances} of the \emph{type class} \hs{Num}, so that + we know that the addition (+) operator is defined for that type. + \CLaSH's built-in numerical types are also instances of the \hs{Num} class, so we can use the addition operator on \hs{SizedWords} as - well as on {SizedInts}. + well as on \hs{SizedInts}. - In \CLaSH, unconstrained polymorphism is completely supported. Any - function defined can have any number of unconstrained type - parameters. 
The \CLaSH\ compiler will infer the type of every such
-    argument depending on how the function is applied. There is one
-    exception to this: The top level function that is translated, can
-    not have any polymorphic arguments (since it is never applied, so
-    there is no way to find out the actual types for the type
-    parameters).
+    In \CLaSH, parametric polymorphism is completely supported. Any function
+    defined can have any number of unconstrained type parameters. The \CLaSH\
+    compiler will infer the type of every such argument depending on how the
+    function is applied. There is one exception to this: the top level
+    function that is translated can not have any polymorphic arguments (as
+    it is never applied, so there is no way to find out the actual types
+    for the type parameters).
 
     \CLaSH\ does not support user-defined type classes, but does use some
-    of the builtin ones for its builtin functions (like \hs{Num} and
-    \hs{Eq}).
+    of the built-in type classes for its built-in functions, such as \hs{Num}
+    for numerical operations, \hs{Eq} for the equality operators, and
+    \hs{Ord} for the comparison/order operators.
 
-  \subsection{Higher order}
+  \subsection{Higher-order functions \& values}
     Another powerful abstraction mechanism in functional languages is
-    the concept of \emph{higher order functions}, or \emph{functions as
+    the concept of \emph{higher-order functions}, or \emph{functions as
     a first class value}. This allows a function to be treated as a
     value and be passed around, even as the argument of another
-    function. Let's clarify that with an example:
+    function. The following example should clarify this concept:
 
     \begin{code}
-    notList xs = map not xs
+    negVector xs = map not xs
     \end{code}
 
-    This defines a function \hs{notList}, with a single list of booleans
-    \hs{xs} as an argument, which simply negates all of the booleans in
-    the list. To do this, it uses the function \hs{map}, which takes
-    \emph{another function} as its first argument and applies that other
-    function to each element in the list, returning again a list of the
-    results.
-
-    As you can see, the \hs{map} function is a higher order function,
-    since it takes another function as an argument. Also note that
-    \hs{map} is again a polymorphic function: It does not pose any
-    constraints on the type of elements in the list passed, other than
-    that it must be the same as the type of the argument the passed
-    function accepts. The type of elements in the resulting list is of
-    course equal to the return type of the function passed (which need
-    not be the same as the type of elements in the input list). Both of
-    these can be readily seen from the type of \hs{map}:
+    The code above defines a function \hs{negVector}, which takes a vector of
+    booleans and returns a vector where all the values are negated. It
+    achieves this by calling the \hs{map} function and passing it
+    \emph{another function}, boolean negation, and the vector of booleans
+    \hs{xs}. The \hs{map} function applies the negation function to all the
+    elements in the vector.
+
+    The \hs{map} function is called a higher-order function, since it takes
+    another function as an argument. Also note that \hs{map} is again a
+    parametric polymorphic function: it does not pose any constraints on the
+    type of the vector elements, other than that it must be the same type as
+    the input type of the function passed to \hs{map}.
The element type of the
+    resulting vector is equal to the return type of the function passed, which
+    need not necessarily be the same as the element type of the input vector.
+    All of these characteristics can readily be inferred from the type
+    signature belonging to \hs{map}:
 
     \begin{code}
-      map :: (a -> b) -> [a] -> [b]
+      map :: (a -> b) -> [a|n] -> [b|n]
     \end{code}
 
-    As an example from a common hardware design, let's look at the
-    equation of a FIR filter.
+    An example of a common hardware design where the use of higher-order
+    functions leads to a very natural description is a FIR filter, which is
+    basically the dot-product of two vectors:
 
     \begin{equation}
     y_t  = \sum\nolimits_{i = 0}^{n - 1} {x_{t - i}  \cdot h_i }
     \end{equation}
+
+    A FIR filter multiplies fixed constants ($h$) with the current
+    and a few previous input samples ($x$). These multiplications
+    are summed to produce the result at time $t$. The equation of a FIR
+    filter is indeed equivalent to the equation of the dot-product, which is
+    shown below:
+
+    \begin{equation}
+    \mathbf{x}\bullet\mathbf{y} = \sum\nolimits_{i = 0}^{n - 1} {x_i \cdot y_i }
+    \end{equation}
 
-    A FIR filter multiplies fixed constants ($h$) with the current and
-    a few previous input samples ($x$). Each of these multiplications
-    are summed, to produce the result at time $t$. 
-
-    This is easily and directly implemented using higher order
-    functions. Consider that the vector \hs{hs} contains the FIR
-    coefficients and the vector \hs{xs} contains the current input sample
-    in front and older samples behind. How \hs{xs} gets its value will be
-    show in the next section about state.
+    We can easily and directly implement the equation for the dot-product
+    using higher-order functions:
 
     \begin{code}
-    fir ... = foldl1 (+) (zipwith (*) xs hs)
+    xs *+* ys = foldl1 (+) (zipWith (*) xs ys)
     \end{code}
 
-    Here, the \hs{zipwith} function is very similar to the \hs{map}
-    function: It takes a function two lists and then applies the
-    function to each of the elements of the two lists pairwise
-    (\emph{e.g.}, \hs{zipwith (+) [1, 2] [3, 4]} becomes
-    \hs{[1 + 3, 2 + 4]}.
-
-    The \hs{foldl1} function takes a function and a single list and applies the
-    function to the first two elements of the list. It then applies to
-    function to the result of the first application and the next element
-    from the list. This continues until the end of the list is reached.
-    The result of the \hs{foldl1} function is the result of the last
-    application.
-
-    As you can see, the \hs{zipwith (*)} function is just pairwise
+    The \hs{zipWith} function is very similar to the \hs{map} function: it
+    takes a function, two vectors, and then applies the function to each of
+    the elements in the two vectors pairwise (\emph{e.g.}, \hs{zipWith (*) [1,
+    2] [3, 4]} becomes \hs{[1 * 3, 2 * 4]} $\equiv$ \hs{[3,8]}).
+
+    The \hs{foldl1} function takes a function, a single vector, and applies
+    the function to the first two elements of the vector. It then applies the
+    function to the result of the first application and the next element from
+    the vector. This continues until the end of the vector is reached. The
+    result of the \hs{foldl1} function is the result of the last application.
+    As you can see, the \hs{zipWith (*)} function is just pairwise
     multiplication and the \hs{foldl1 (+)} function is just summation.
 
-    To make the correspondence between the code and the equation even
-    more obvious, we turn the list of input samples in the equation
-    around.
So, instead of having the the input sample received at time
-    $t$ in $x_t$, $x_0$ now always stores the current sample, and $x_i$
-    stores the $ith$ previous sample. This changes the equation to the
-    following (Note that this is completely equivalent to the original
-    equation, just with a different definition of $x$ that better suits
-    the \hs{x} from the code):
-
-    \begin{equation}
-    y_t  = \sum\nolimits_{i = 0}^{n - 1} {x_i  \cdot h_i }
-    \end{equation}
-
-    So far, only functions have been used as higher order values. In
+    So far, only functions have been used as higher-order values. In
     Haskell, there are two more ways to obtain a function-typed value:
     partial application and lambda abstraction. Partial application
     means that a function that takes multiple arguments can be applied
@@ -943,17 +949,15 @@ by an optimizing \VHDL\ synthesis tool.
 
     Here, the expression \hs{(+) 1} is the partial application of the
     plus operator to the value \hs{1}, which is again a function that
-    adds one to its argument.
-
-    A labmda expression allows one to introduce an anonymous function
-    in any expression. Consider the following expression, which again
-    adds one to every element of a list:
+    adds one to its argument. A lambda expression allows one to introduce an
+    anonymous function in any expression. Consider the following expression,
+    which again adds one to every element of a vector:
 
     \begin{code}
     map (\x -> x + 1) xs
     \end{code}
 
-    Finally, higher order arguments are not limited to just builtin
+    Finally, higher-order arguments are not limited to just built-in
     functions, but any function defined in \CLaSH\ can have function
     arguments. This allows the hardware designer to use a powerful
     abstraction mechanism in his designs and have an optimal amount of
@@ -977,38 +981,108 @@ by an optimizing \VHDL\ synthesis tool.
       \item when the function is called, it should not have observable
             side-effects.
     \end{inparaenum}
-    This purity property is important for functional languages, since it
-    enables all kinds of mathematical reasoning that could not be guaranteed
-    correct for impure functions. Pure functions are as such a perfect match
-    for a combinatorial circuit, where the output solely depends on the
-    inputs. When a circuit has state however, it can no longer be simply
-    described by a pure function. Simply removing the purity property is not a
-    valid option, as the language would then lose many of it mathematical
-    properties. In an effort to include the concept of state in pure
+    % This purity property is important for functional languages, since it
+    % enables all kinds of mathematical reasoning that could not be guaranteed
+    % correct for impure functions.
+    Pure functions are as such a perfect match for a combinatorial circuit,
+    where the output solely depends on the inputs. When a circuit has state
+    however, it can no longer be simply described by a pure function.
+    % Simply removing the purity property is not a valid option, as the
+    % language would then lose many of it mathematical properties.
+    In an effort to include the concept of state in pure
     functions, the current value of the state is made an argument of the
-    function; the updated state becomes part of the result.
+    function; the updated state becomes part of the result. In this sense,
+    the descriptions made in \CLaSH\ describe the combinatorial part of a
+    Mealy machine.
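+
+    As a minimal sketch of this convention (assuming a \hs{Word} type synonym
+    for one of the sized word types is in scope), a delay element that simply
+    outputs the value it received during the previous clock cycle could be
+    written as:
+
+    \begin{code}
+    -- Sketch of the state convention: a single register. The new state is
+    -- the current input; the output is the value stored in the previous
+    -- cycle.
+    delay :: State Word -> Word -> (State Word, Word)
+    delay (State s) inp = (State inp, s)
+    \end{code}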
+
+    A simple example is adding an accumulator register to the earlier
+    multiply-accumulate circuit, of which the resulting netlist can be seen in
+    \Cref{img:mac-state}:
 
-    A simple example is the description of an accumulator circuit:
     \begin{code}
-    macS a b (State c) = (State c', outp)
+    macS (State c) a b = (State c', outp)
       where
         outp = mac a b c
         c'   = outp
     \end{code}
+
     \begin{figure}
     \centerline{\includegraphics{mac-state}}
     \caption{Stateful Multiply-Accumulate}
     \label{img:mac-state}
     \end{figure}
-    This approach makes the state of a function very explicit: which variables
-    are part of the state is completely determined by the type signature. This
-    approach to state is well suited to be used in combination with the
-    existing code and language features, such as all the choice constructs, as
-    state values are just normal values.
+
+    The \hs{State} keyword indicates which arguments are part of the current
+    state, and which part of the output is part of the updated state. This
+    aspect will also be reflected in the type signature of the function.
+    Abstracting the state of a circuit in this way makes it very explicit:
+    which variables are part of the state is completely determined by the
+    type signature. This approach to state is well suited to be used in
+    combination with the existing code and language features, such as all the
+    choice constructs, as state values are just normal values.
+
+    We can simulate stateful descriptions using the recursive \hs{run}
+    function:
+
+    \begin{code}
+    run f s (i:inps) = o : (run f s' inps)
+      where
+        (s', o) = f s i
+    \end{code}
+
+    The \hs{run} function maps the function that a developer wants to
+    simulate over a list of inputs, passing the updated state on to each new
+    iteration. Each value in the input list corresponds to exactly one cycle
+    of the (implicit) clock. The result of the simulation is a list of
+    outputs for every clock cycle. As both the \hs{run} function and the
+    hardware description are plain Haskell, the complete simulation can be
+    compiled by an optimizing Haskell compiler.
 
 \section{\CLaSH\ prototype}
 
 foo\par bar
 
+\section{Use cases}
+Returning to the example of the FIR filter, we will slightly change the
+equation belonging to it, so as to make the translation to code more obvious.
+What we will do is change the definition of the vector of input samples.
+So, instead of having the input sample received at time
+$t$ stored in $x_t$, $x_0$ now always stores the current sample, and $x_i$
+stores the $i$th previous sample. This changes the equation to the
+following (note that this is completely equivalent to the original
+equation, just with a different definition of $x$ that will better suit
+the transformation to code):
+
+\begin{equation}
+y_t  = \sum\nolimits_{i = 0}^{n - 1} {x_i  \cdot h_i }
+\end{equation}
+
+Consider that the vector \hs{hs} contains the FIR coefficients and the
+vector \hs{xs} contains the current input sample in front and older
+samples behind. The function that does this shifting of the input samples
+is shown below:
+
+\begin{code}
+x >> xs = x +> init xs
+\end{code}
+
+Here, the \hs{init} function returns all but the last element of a
+vector, and the concatenate operator ($\succ$) adds a new element to the
+left of a vector. The complete definition of the FIR filter then becomes:
+
+\begin{code}
+fir (State (xs,hs)) x = (State (x >> xs,hs), xs *+* hs)
+\end{code}
+
+The resulting netlist of a 4-taps FIR filter based on the above definition
+is depicted in \Cref{img:4tapfir}.
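+
+As a purely illustrative usage sketch (the name \hs{testFir} and the
+coefficient values are hypothetical, and the vector notation is only meant to
+be suggestive), this filter can be simulated with the \hs{run} function
+defined earlier, starting from a sample history of all zeroes:
+
+\begin{code}
+-- Simulate the first four clock cycles of the 4-taps FIR filter; every
+-- element of the input list corresponds to one clock cycle.
+testFir = take 4 (run fir initial ([2, 3, -2, 8] ++ repeat 0))
+  where
+    initial = State ([0, 0, 0, 0], [1, 2, 3, 4])
+\end{code}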
+ +\begin{figure} +\centerline{\includegraphics{4tapfir}} +\caption{4-taps FIR Filter} +\label{img:4tapfir} +\end{figure} + \section{Related work} Many functional hardware description languages have been developed over the years. Early work includes such languages as $\mu$\acro{FP}~\cite{muFP}, an