for numerical operations, \hs{Eq} for the equality operators, and
\hs{Ord} for the comparison/order operators.

\subsection{Higher-order functions \& values}

Another powerful abstraction mechanism in functional languages is the
concept of \emph{higher-order functions}, or \emph{functions as a first
class value}. This allows a function to be treated as a value, which can
be passed around and even be given as an argument to another function. An
example of such a higher-order function is \hs{map}, which applies another
function to every element of a vector:

\begin{code}
map :: (a -> b) -> [a|n] -> [b|n]
\end{code}

An example of a common hardware design where the use of higher-order
functions leads to a very natural description is a FIR filter, which is
basically the dot-product of two vectors:

\begin{equation}
y_t = \sum\nolimits_{i = 0}^{n - 1} {x_{t - i} \cdot h_i }
\end{equation}

A FIR filter multiplies fixed constants ($h$) with the current and a few
previous input samples ($x$). These multiplications are summed to produce
the result at time $t$. The equation of a FIR filter is indeed equivalent
to the equation of the dot-product, which is shown below:

\begin{equation}
\mathbf{x}\bullet\mathbf{y} = \sum\nolimits_{i = 0}^{n - 1} {x_i \cdot y_i }
\end{equation}

We can easily and directly implement the equation for the dot-product
using higher-order functions:

\begin{code}
xs *+* ys = foldl1 (+) (zipWith (*) xs ys)
\end{code}

The \hs{zipWith} function is very similar to the \hs{map} function: it
takes a function and two vectors, and applies the function to the elements
of the two vectors pairwise (\emph{e.g.}, \hs{zipWith (*) [1, 2] [3, 4]}
becomes \hs{[1 * 3, 2 * 4]} $\equiv$ \hs{[3, 8]}).
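Both vector combinators used in this definition are built-in \CLaSH\
functions that operate on fixed-length vectors. Purely as an illustration
of their behaviour, and not as their actual definitions, list-based
equivalents could be sketched as follows (renamed here to avoid confusion
with the standard Haskell functions of the same name):

\begin{code}
-- Apply f to the elements of two lists pairwise; the result is as long
-- as the shorter of the two inputs.
zipWith' :: (a -> b -> c) -> [a] -> [b] -> [c]
zipWith' f (x:xs) (y:ys) = f x y : zipWith' f xs ys
zipWith' _ _      _      = []

-- Fold a non-empty list from the left, using its first element as the
-- initial accumulator.
foldl1' :: (a -> a -> a) -> [a] -> a
foldl1' f (x:xs) = go x xs
  where
    go acc []     = acc
    go acc (y:ys) = go (f acc y) ys
\end{code}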
The \hs{foldl1} function takes a function and a single vector, and applies
the function to the first two elements of the vector. It then applies the
function to the result of the first application and the next element from
the vector. This continues until the end of the vector is reached. The
result of the \hs{foldl1} function is the result of the last application.
As you can see, the \hs{zipWith (*)} function is just pairwise
multiplication and the \hs{foldl1 (+)} function is just summation.

So far, only functions have been used as higher-order values. In Haskell,
there are two more ways to obtain a function-typed value: partial
application and lambda abstraction. Partial application means that a
function that takes multiple arguments can be applied to only some of its
arguments, yielding a new function that expects the remaining arguments.
For example, the expression \hs{map ((+) 1) xs} adds one to every element
of the vector \hs{xs}. Here, the expression \hs{(+) 1} is the partial
application of the plus operator to the value \hs{1}, which is again a
function that adds one to its argument. A lambda expression allows one to
introduce an anonymous function in any expression. Consider the following
expression, which again adds one to every element of a vector:

\begin{code}
map (\x -> x + 1) xs
\end{code}

Finally, higher-order arguments are not limited to just built-in
functions: any function defined in \CLaSH\ can have function arguments.
This allows the hardware designer to use a powerful abstraction mechanism
in his designs and have an optimal amount of code re-use.

\subsection{State}

A very important concept in hardware design is state. A function is said
to be pure when it satisfies two conditions:
\begin{inparaenum}
\item when given the same arguments twice, it should return the same
value in both cases, and
\item when the function is called, it should not have observable
side-effects.
\end{inparaenum}
% This purity property is important for functional languages, since it
% enables all kinds of mathematical reasoning that could not be guaranteed
% correct for impure functions.
Pure functions are, as such, a perfect match for a combinatorial circuit,
where the output solely depends on the inputs.
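For instance, a multiply-accumulate operation, such as the \hs{mac}
function used in the example below, is exactly such a pure, combinatorial
description. As a minimal sketch (using the hypothetical name \hs{macC},
since the actual \hs{mac} is defined earlier in the paper) it could look
as follows, assuming the usual numeric operators are available for the
argument type:

\begin{code}
-- Purely combinatorial multiply-accumulate: multiply a and b, then add
-- c.  No state is involved, so the output depends solely on the inputs.
macC a b c = (a * b) + c
\end{code}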
When a circuit has state, however, it can no longer simply be described by
a pure function.
% Simply removing the purity property is not a valid option, as the
% language would then lose many of its mathematical properties.
In an effort to include the concept of state in pure functions, the
current value of the state is made an argument of the function; the
updated state becomes part of the result. In this sense, the descriptions
made in \CLaSH\ describe the combinatorial parts of a Mealy machine.

A simple example is adding an accumulator register to the earlier
multiply-accumulate circuit, of which the resulting netlist can be seen in
\Cref{img:mac-state}:

\begin{code}
macS (State c) a b = (State c', outp)
  where
    outp = mac a b c
    c'   = outp
\end{code}

\begin{figure}
\centerline{\includegraphics{mac-state}}
\caption{Netlist of the multiply-accumulate circuit with an accumulation
register}
\label{img:mac-state}
\end{figure}

The \hs{State} keyword indicates which arguments are part of the current
state, and what part of the output is part of the updated state. This
aspect is also reflected in the type signature of the function.
Abstracting the state of a circuit in this way makes it very explicit:
which variables are part of the state is completely determined by the type
signature. This approach to state is well suited to be used in combination
with the existing code and language features, such as all the choice
constructs, as state values are just normal values.

We can simulate stateful descriptions using the recursive \hs{run}
function:

\begin{code}
run f s (i:inps) = o : run f s' inps
  where
    (s', o) = f s i
\end{code}

The \hs{run} function maps a list of inputs over the function that a
developer wants to simulate, passing the state to each new iteration. Each
value in the input list corresponds to exactly one cycle of the (implicit)
clock. The result of the simulation is a list of outputs, one for every
clock cycle. As both the \hs{run} function and the hardware description
are plain Haskell, the complete simulation can be compiled by an
optimizing Haskell compiler.
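As an illustration, the accumulator circuit defined above can be simulated
for a few clock cycles in this way. The wrapper \hs{macS'} and the name
\hs{testOutput} below are hypothetical and introduced only for this
sketch; the concrete output values assume that \hs{mac} behaves like the
\hs{macC} sketched earlier:

\begin{code}
-- Hypothetical test bench for macS.  run supplies exactly one input per
-- clock cycle, so the two data inputs are bundled into a tuple first.
macS' s (a, b) = macS s a b

-- Simulate three cycles, starting with an empty accumulator.  Assuming
-- mac a b c = (a * b) + c, this evaluates to [2, 14, 44].  As run has no
-- base case for an empty input list, we demand only as many outputs as
-- there were inputs.
testOutput = take 3 (run macS' (State 0) [(1, 2), (3, 4), (5, 6)])
\end{code}

In an actual design, the input list would conceptually be an infinite
stream of samples, one per clock cycle.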
\section{\CLaSH\ prototype}

foo\par bar

\section{Use cases}

Returning to the example of the FIR filter, we will slightly change the
equation belonging to it, so as to make the translation to code more
obvious. What we will do is change the definition of the vector of input
samples. So, instead of having the input sample received at time $t$
stored in $x_t$, $x_0$ now always stores the current sample, and $x_i$
stores the $i$th previous sample. This changes the equation to the
following (note that this is completely equivalent to the original
equation, just with a different definition of $x$ that better suits the
transformation to code):

\begin{equation}
y_t = \sum\nolimits_{i = 0}^{n - 1} {x_i \cdot h_i }
\end{equation}

Consider that the vector \hs{hs} contains the FIR coefficients and the
vector \hs{xs} contains the current input sample in front and older
samples behind. The function that does this shifting of the input samples
is shown below:

\begin{code}
x >> xs = x +> init xs
\end{code}

Here, the \hs{init} function returns all but the last element of a vector,
and the concatenate operator ($\succ$) adds a new element to the left of a
vector. The complete definition of the FIR filter then becomes:

\begin{code}
fir (State (xs, hs)) x = (State (x >> xs, hs), xs *+* hs)
\end{code}

The resulting netlist of a 4-tap FIR filter based on the above definition
is depicted in \Cref{img:4tapfir}.

\begin{figure}
\centerline{\includegraphics{4tapfir}}
\caption{4-tap FIR filter}
\label{img:4tapfir}
\end{figure}

\section{Related work}

Many functional hardware description languages have been developed over
the years. Early work includes such languages as $\mu$\acro{FP}~\cite{muFP},
an