From: Matthijs Kooijman
Date: Fri, 19 Feb 2010 10:30:32 +0000 (+0100)
Subject: Add section on higher order functions.
X-Git-Url: https://git.stderr.nl/gitweb?p=matthijs%2Fmaster-project%2Fdsd-paper.git;a=commitdiff_plain;h=567bd3d2345281a3796ccf4fb66f79c50e3558ed

Add section on higher order functions.
---

diff --git "a/c\316\273ash.lhs" "b/c\316\273ash.lhs"
index 5819c83..3cbf301 100644
--- "a/c\316\273ash.lhs"
+++ "b/c\316\273ash.lhs"
@@ -810,6 +810,120 @@ data IntPair = IntPair Int Int
     of the builtin ones for its builtin functions (like \hs{Num} and \hs{Eq}).
+  \subsection{Higher order}
+    Another powerful abstraction mechanism in functional languages is the
+    concept of \emph{higher order functions}, or \emph{functions as first
+    class values}. This allows a function to be treated as a value and be
+    passed around, even as the argument of another function. Let's clarify
+    that with an example:
+
+    \begin{code}
+    notList xs = map not xs
+    \end{code}
+
+    This defines a function \hs{notList} that takes a single argument
+    \hs{xs}, a list of booleans, and negates every boolean in the list. To
+    do this, it uses the function \hs{map}, which takes \emph{another
+    function} as its first argument and applies that other function to each
+    element in the list, returning a list of the results.
+
+    As you can see, the \hs{map} function is a higher order function, since
+    it takes another function as an argument. Also note that \hs{map} is
+    again a polymorphic function: It does not pose any constraints on the
+    type of the elements in the list passed, other than that it must be the
+    same as the argument type of the function passed. The type of the
+    elements in the resulting list is of course equal to the return type of
+    the function passed (which need not be the same as the type of the
+    elements in the input list). Both of these can be readily seen from the
+    type of \hs{map}:
+
+    \begin{code}
+    map :: (a -> b) -> [a] -> [b]
+    \end{code}
+
+    As an example from a common hardware design, let's look at the equation
+    of a FIR filter.
+
+    \begin{equation}
+    y_t = \sum\nolimits_{i = 0}^{n - 1} {x_{t - i} \cdot h_i }
+    \end{equation}
+
+    A FIR filter multiplies fixed constants ($h$) with the current and a
+    few previous input samples ($x$). The results of these multiplications
+    are summed to produce the result at time $t$.
+
+    This is easily and directly implemented using higher order functions.
+    Consider that the vector \hs{hs} contains the FIR coefficients and the
+    vector \hs{xs} contains the current input sample in front and older
+    samples behind. How \hs{xs} gets its value will be shown in the next
+    section about state.
+
+    \begin{code}
+    fir ... = foldl1 (+) (zipWith (*) xs hs)
+    \end{code}
+
+    Here, the \hs{zipWith} function is very similar to the \hs{map}
+    function: It takes a function and two lists, and then applies the
+    function to the elements of the two lists pairwise (\emph{e.g.},
+    \hs{zipWith (+) [1, 2] [3, 4]} becomes \hs{[1 + 3, 2 + 4]}).
+
+    The \hs{foldl1} function takes a function and a single list and applies
+    the function to the first two elements of the list. It then applies the
+    function to the result of the first application and the next element
+    from the list. This continues until the end of the list is reached. The
+    result of the \hs{foldl1} function is the result of the last
+    application.
+
+    As you can see, the \hs{zipWith (*)} function is just pairwise
+    multiplication and the \hs{foldl1 (+)} function is just summation.
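+
+    As a minimal sketch (the exact argument handling is only assumed here;
+    how \hs{fir} receives \hs{hs} and \hs{xs} is elaborated in the next
+    section about state), a complete definition could simply take both
+    vectors as explicit arguments:
+
+    \begin{code}
+    fir hs xs = foldl1 (+) (zipWith (*) xs hs)
+    \end{code}
+
+    For example, with \hs{hs = [2, 3]} and \hs{xs = [1, 4]}, this evaluates
+    to \hs{foldl1 (+) [1 * 2, 4 * 3]}, \emph{i.e.}, \hs{14}.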
+
+    To make the correspondence between the code and the equation even more
+    obvious, we turn the list of input samples in the equation around. So,
+    instead of having the input sample received at time $t$ in $x_t$, $x_0$
+    now always stores the current sample, and $x_i$ stores the $i$th
+    previous sample. This changes the equation to the following (note that
+    this is completely equivalent to the original equation, just with a
+    different definition of $x$ that better suits the \hs{x} from the
+    code):
+
+    \begin{equation}
+    y_t = \sum\nolimits_{i = 0}^{n - 1} {x_i \cdot h_i }
+    \end{equation}
+
+    So far, only functions have been used as higher order values. In
+    Haskell, there are two more ways to obtain a function-typed value:
+    partial application and lambda abstraction. Partial application means
+    that a function that takes multiple arguments can be applied to a
+    single argument, and the result will again be a function (but one that
+    takes one argument less). As an example, consider the following
+    expression, which adds one to every element of a list:
+
+    \begin{code}
+    map ((+) 1) xs
+    \end{code}
+
+    Here, the expression \hs{(+) 1} is the partial application of the plus
+    operator to the value \hs{1}, which yields a function that adds one to
+    its argument.
+
+    A lambda expression allows one to introduce an anonymous function in
+    any expression. Consider the following expression, which again adds one
+    to every element of a list:
+
+    \begin{code}
+    map (\x -> x + 1) xs
+    \end{code}
+
+    Finally, higher order arguments are not limited to just builtin
+    functions: Any function defined in \CLaSH can have function arguments.
+    This gives the hardware designer a powerful abstraction mechanism for
+    his designs and allows a high degree of code reuse.
+
+    TODO: Describe ALU example (no code)
+
   \subsection{State}
     A very important concept in hardware is the concept of state. In a
     stateful design, the outputs depend on the history of the inputs, or the
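
As a small standalone sketch of the partial application and lambda examples
above (the names \hs{viaPartial} and \hs{viaLambda} and the sample list are
assumed here purely for illustration), both forms compute the same result:

\begin{code}
xs :: [Int]
xs = [1, 2, 3]

-- Partial application of (+) to 1
viaPartial :: [Int]
viaPartial = map ((+) 1) xs       -- [2, 3, 4]

-- Equivalent lambda expression
viaLambda :: [Int]
viaLambda = map (\x -> x + 1) xs  -- [2, 3, 4]

main :: IO ()
main = print (viaPartial == viaLambda)  -- prints True
\end{code}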