+ \subsection{Polymorphic functions}
+ A powerful construct in most functional languages is polymorphism.
+ This means the arguments of a function (and, consequently, values
+ within the function as well) do not need to have a fixed type.
+ Haskell supports \emph{parametric polymorphism}, meaning a
+ function's type can be parameterized with another type.
+
+ As an example of a polymorphic function, consider the following
+ \hs{append} function's type:
+
+ TODO: Use vectors instead of lists?
+
+ \begin{code}
+ append :: [a] -> a -> [a]
+ \end{code}
+
+ This type is parameterized by \hs{a}, which can stand for any type
+ at all. This means that \hs{append} can append an element to a
+ list, regardless of the type of the elements in the list (though
+ the element added must match the elements in the list, since there
+ is only one \hs{a}).
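As a sketch of how such a function behaves at different types (the body given here is one possible definition, assuming ordinary Haskell lists):

```haskell
-- One possible definition matching the polymorphic type above.
append :: [a] -> a -> [a]
append xs x = xs ++ [x]

main :: IO ()
main = do
  print (append [1, 2, 3] (4 :: Int))  -- [1,2,3,4]
  print (append [True, False] True)    -- [True,False,True]
```

The same definition serves both calls; only the type parameter \hs{a} differs.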
+
+ This kind of polymorphism is extremely useful in hardware designs to
+ make operations work on a vector without knowing exactly what elements
+ are inside, routing signals without knowing exactly what kinds of
+ signals these are, or working with a vector without knowing exactly
+ how long it is. Polymorphism also plays an important role in most
+ higher order functions, as we will see in the next section.
+
+ The previous example showed \emph{unconstrained} polymorphism:
+ \hs{a} can have \emph{any} type. In addition, Haskell supports
+ limiting the type of a type parameter to a specific class of
+ types. An example of such a type class is the \hs{Num} class,
+ which contains all of Haskell's numerical types.
+
+ Now, take the addition operator, which has the following type:
+
+ \begin{code}
+ (+) :: Num a => a -> a -> a
+ \end{code}
+
+ This type is again parameterized by \hs{a}, but now it can only
+ contain types that are \emph{instances} of the \emph{type class}
+ \hs{Num}. Our numerical built-in types are also instances of the
+ \hs{Num} class, so we can use the addition operator on
+ \hs{SizedWords} as well as on \hs{SizedInts}.
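A minimal sketch of a user-written constrained polymorphic function, using plain \hs{Int} and \hs{Double} here in place of the \CLaSH-specific sized types:

```haskell
-- Constrained polymorphism: a may be any type, as long as it is
-- an instance of the Num type class.
double :: Num a => a -> a
double x = x + x

main :: IO ()
main = do
  print (double (3 :: Int))       -- 6
  print (double (1.5 :: Double))  -- 3.0
```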
+
+ In \CLaSH, unconstrained polymorphism is fully supported. Any
+ function defined can have any number of unconstrained type
+ parameters. The \CLaSH compiler will infer the type of every such
+ argument from how the function is applied. There is one
+ exception to this: the top-level function that is translated
+ cannot have any polymorphic arguments (since it is never applied,
+ there is no way to find out the actual types for the type
+ parameters).
+
+ \CLaSH does not support user-defined type classes, but does use
+ some of the built-in ones for its built-in functions (such as
+ \hs{Num} and \hs{Eq}).
+
+ \subsection{Higher order}
+ Another powerful abstraction mechanism in functional languages is
+ the concept of \emph{higher order functions}, or \emph{functions
+ as first class values}. This allows a function to be treated as a
+ value and passed around, even as the argument of another
+ function. Let us clarify that with an example:
+
+ \begin{code}
+ notList xs = map not xs
+ \end{code}
+
+ This defines a function \hs{notList}, taking a single list of
+ booleans \hs{xs} as an argument, which simply negates all of the
+ booleans in the list. To do this, it uses the function \hs{map},
+ which takes \emph{another function} as its first argument and
+ applies that function to each element in the list, returning a
+ list of the results.
+
+ As you can see, the \hs{map} function is a higher order function,
+ since it takes another function as an argument. Also note that
+ \hs{map} is again a polymorphic function: It does not pose any
+ constraints on the type of elements in the list passed, other than
+ that it must be the same as the type of the argument the passed
+ function accepts. The type of elements in the resulting list is of
+ course equal to the return type of the function passed (which need
+ not be the same as the type of elements in the input list). Both of
+ these can be readily seen from the type of \hs{map}:
+
+ \begin{code}
+ map :: (a -> b) -> [a] -> [b]
+ \end{code}
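A small illustration, using plain Haskell lists, of how the element type of the result follows the passed function rather than the input list:

```haskell
main :: IO ()
main = do
  -- Here a is Int and b is Bool: even :: Int -> Bool determines
  -- the type of the elements in the result.
  print (map even [1, 2, 3, 4 :: Int])  -- [False,True,False,True]
```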
+
+ As an example from a common hardware design, let's look at the
+ equation of a FIR filter.
+
+ \begin{equation}
+ y_t = \sum\nolimits_{i = 0}^{n - 1} {x_{t - i} \cdot h_i }
+ \end{equation}
+
+ A FIR filter multiplies fixed constants ($h$) with the current and
+ a few previous input samples ($x$). The results of these
+ multiplications are summed to produce the output at time $t$.
+
+ This is easily and directly implemented using higher order
+ functions. Assume that the vector \hs{hs} contains the FIR
+ coefficients and the vector \hs{xs} contains the current input
+ sample in front and older samples behind. How \hs{xs} gets its
+ value will be shown in the next section about state.
+
+ \begin{code}
+ fir ... = foldl1 (+) (zipWith (*) xs hs)
+ \end{code}
+
+ Here, the \hs{zipWith} function is very similar to the \hs{map}
+ function: It takes a function and two lists, and then applies the
+ function to the elements of the two lists pairwise
+ (\emph{e.g.}, \hs{zipWith (+) [1, 2] [3, 4]} becomes
+ \hs{[1 + 3, 2 + 4]}).
+
+ The \hs{foldl1} function takes a function and a single list, and
+ applies the function to the first two elements of the list. It
+ then applies the function to the result of the first application
+ and the next element from the list. This continues until the end
+ of the list is reached. The result of the \hs{foldl1} function is
+ the result of the last application.
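Both functions can be tried out directly on plain Haskell lists:

```haskell
main :: IO ()
main = do
  print (zipWith (+) [1, 2] [3, 4 :: Int])  -- [4,6]
  print (foldl1 (+) [1, 2, 3, 4 :: Int])    -- ((1+2)+3)+4 = 10
```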
+
+ As you can see, the \hs{zipWith (*)} function is just pairwise
+ multiplication and the \hs{foldl1 (+)} function is just summation.
+
+ To make the correspondence between the code and the equation even
+ more obvious, we turn the list of input samples in the equation
+ around. So, instead of having the input sample received at time
+ $t$ in $x_t$, $x_0$ now always stores the current sample, and
+ $x_i$ stores the $i$th previous sample. This changes the equation
+ to the following (note that this is completely equivalent to the
+ original equation, just with a different definition of $x$ that
+ better suits the \hs{xs} from the code):
+
+ \begin{equation}
+ y_t = \sum\nolimits_{i = 0}^{n - 1} {x_i \cdot h_i }
+ \end{equation}
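Using ordinary Haskell lists in place of \CLaSH{} vectors, the complete definition can be sketched and tried out directly (the coefficients and samples below are made up for illustration only):

```haskell
-- FIR output for one time step: sum of the pairwise products of
-- the samples (newest first) and the coefficients.
fir :: Num a => [a] -> [a] -> a
fir hs xs = foldl1 (+) (zipWith (*) xs hs)

main :: IO ()
main = print (fir [1, 2, 1] [3, 4, 5 :: Int])  -- 3*1 + 4*2 + 5*1 = 16
```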
+
+ So far, only functions have been used as higher order values. In
+ Haskell, there are two more ways to obtain a function-typed value:
+ partial application and lambda abstraction. Partial application
+ means that a function that takes multiple arguments can be applied
+ to a single argument, and the result will again be a function (but
+ that takes one argument less). As an example, consider the
+ following expression, which adds one to every element of a list:
+
+ \begin{code}
+ map ((+) 1) xs
+ \end{code}
+
+ Here, the expression \hs{(+) 1} is the partial application of the
+ plus operator to the value \hs{1}, which is again a function that
+ adds one to its argument.
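A sketch of how such a partially applied operator behaves as an ordinary value (the name \hs{inc} is hypothetical, introduced only for illustration):

```haskell
main :: IO ()
main = do
  let inc = (+) 1  -- partial application: inc is itself a function
  print (inc 41)                    -- 42
  print (map inc [1, 2, 3 :: Int])  -- [2,3,4]
```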
+
+ A lambda expression allows one to introduce an anonymous function
+ in any expression. Consider the following expression, which again
+ adds one to every element of a list:
+
+ \begin{code}
+ map (\x -> x + 1) xs
+ \end{code}
+
+ Finally, higher order arguments are not limited to just built-in
+ functions; any function defined in \CLaSH can have function
+ arguments. This allows the hardware designer to use a powerful
+ abstraction mechanism in their designs and achieve a high degree
+ of code reuse.
+
+ TODO: Describe ALU example (no code)
+
+ \subsection{State}
+ A very important concept in hardware is the concept of state. In a
+ stateful design, the outputs depend on the history of the inputs, or the
+ state. State is usually stored in registers, which retain their value
+ during a clock cycle. As we want to describe more than simple
+ combinatorial designs, \CLaSH\ needs an abstraction mechanism for state.
+
+ An important property in Haskell, and in most other functional languages,
+ is \emph{purity}. A function is said to be \emph{pure} if it satisfies two
+ conditions:
+ \begin{inparaenum}
+ \item given the same arguments twice, it should return the same value in
+ both cases, and
+ \item when the function is called, it should not have observable
+ side-effects.
+ \end{inparaenum}
+ This purity property is important for functional languages, since it
+ enables all kinds of mathematical reasoning that could not be guaranteed
+ correct for impure functions. Pure functions are therefore a perfect match
+ for a combinatorial circuit, where the output solely depends on the
+ inputs. When a circuit has state, however, it can no longer simply be
+ described by a pure function. Removing the purity property is not a
+ valid option, as the language would then lose many of its mathematical
+ properties. In an effort to include the concept of state in pure
+ functions, the current value of the state is made an argument of the
+ function; the updated state becomes part of the result.
+
+ A simple example is the description of an accumulator circuit:
+ \begin{code}
+ acc :: Word -> State Word -> (State Word, Word)
+ acc inp (State s) = (State s', outp)
+ where
+ outp = s + inp
+ s' = outp
+ \end{code}
+ This approach makes the state of a function very explicit: which variables
+ are part of the state is completely determined by the type signature. This
+ approach to state is well suited to be used in combination with the
+ existing code and language features, such as all the choice constructs, as
+ state values are just normal values.
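Outside \CLaSH, the same accumulator can be simulated in plain Haskell; here \hs{mapAccumL} plays the role of the clock by threading the state through a list of input samples. This is only a sketch: the \CLaSH-specific \hs{State} wrapper is omitted, and the argument order is flipped (state first) to match \hs{mapAccumL}.

```haskell
import Data.List (mapAccumL)

-- Plain-Haskell version of acc: current state in,
-- (updated state, output) out.
acc :: Int -> Int -> (Int, Int)
acc s inp = (s', outp)
  where
    outp = s + inp
    s'   = outp

-- Thread the state through a stream of inputs, starting from 0.
simulate :: [Int] -> [Int]
simulate = snd . mapAccumL acc 0

main :: IO ()
main = print (simulate [1, 2, 3, 4])  -- running sums: [1,3,6,10]
```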