\chapter[chap:description]{Hardware description}
This chapter will provide an overview of the hardware description language
that was created and the issues that have arisen in the process. It will
focus on the issues of the language, not the implementation.
When translating Haskell to hardware, we need to make choices about what
kind of hardware to generate for which Haskell constructs. When faced with
choices, we have tried to stick with the most obvious option wherever
possible. In a lot of cases, when you look at a hardware description it is
completely clear what hardware is described. We want our translator to
generate exactly that hardware whenever possible, to minimize the amount of
surprise for people working with it.
In this chapter we try to describe how we interpret a Haskell program from a
hardware perspective. We provide a description of each Haskell language
element that needs translation, to provide a clear picture of what is
supported.
\section{Function application}
The basic syntactic elements of a functional program are functions and
function application. These have a single obvious \small{VHDL} translation: each
function becomes a hardware component, where each argument is an input port
and the result value is the output port.
Each function application in turn becomes a component instantiation. Here, the
result of each argument expression is assigned to a signal, which is mapped
to the corresponding input port. The output port of the function is also
mapped to a signal, which is used as the result of the application.
\in{Example}[ex:And3] shows a simple program using only function
application and the corresponding architecture.
\startbuffer[And3]
-- | A simple function that returns
-- the and of three bits
and3 :: Bit -> Bit -> Bit -> Bit
and3 a b c = and (and a b) c
\stopbuffer
\startuseMPgraphic{And3}
save a, b, c, anda, andb, out;

newCircle.a(btex $a$ etex) "framed(false)";
newCircle.b(btex $b$ etex) "framed(false)";
newCircle.c(btex $c$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";

newCircle.anda(btex $and$ etex);
newCircle.andb(btex $and$ etex);

b.c = a.c + (0cm, 1cm);
c.c = b.c + (0cm, 1cm);
anda.c = midpoint(a.c, b.c) + (2cm, 0cm);
andb.c = midpoint(b.c, c.c) + (4cm, 0cm);

out.c = andb.c + (2cm, 0cm);

% Draw objects and lines
drawObj(a, b, c, anda, andb, out);
ncarc(a)(anda) "arcangle(-10)";
ncarc(b)(anda) "arcangle(10)";
ncarc(c)(andb) "arcangle(10)";
ncarc(anda)(andb) "arcangle(-10)";
ncline(andb)(out);
\stopuseMPgraphic
\placeexample[here][ex:And3]{Simple three-port and-circuit.}
\startcombination[2*1]
{\typebufferhs{And3}}{Haskell description using function applications.}
{\boxedgraphic{And3}}{The architecture described by the Haskell description.}
\stopcombination
TODO: Define top level function and subfunctions/circuits.
\subsection{Partial application}
It should be obvious that we cannot generate hardware signals for all
expressions we can express in Haskell. The most obvious criterion for this
is the type of an expression. We will see more of this below, but for now it
should be obvious that any expression of a function type cannot be
represented as a signal or I/O port to a component.
From this, we can see that the above translation rules do not apply to a
partial application. \in{Example}[ex:Quadruple] shows an example use of
partial application and the corresponding architecture.
\startbuffer[Quadruple]
-- | Multiply the input word by four.
quadruple :: Word -> Word
quadruple n = mul (mul n)
  where
    mul = (*) 2
\stopbuffer
\startuseMPgraphic{Quadruple}
save in, two, mula, mulb, out;

newCircle.in(btex $n$ etex) "framed(false)";
newCircle.two(btex $2$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";

newCircle.mula(btex $\times$ etex);
newCircle.mulb(btex $\times$ etex);

in.c = two.c + (0cm, 1cm);
mula.c = in.c + (2cm, 0cm);
mulb.c = mula.c + (2cm, 0cm);
out.c = mulb.c + (2cm, 0cm);

% Draw objects and lines
drawObj(in, two, mula, mulb, out);
nccurve(two)(mula) "angleA(0)", "angleB(45)";
nccurve(two)(mulb) "angleA(0)", "angleB(45)";
ncline(in)(mula);
ncline(mula)(mulb);
ncline(mulb)(out);
\stopuseMPgraphic
\placeexample[here][ex:Quadruple]{Simple quadrupling circuit.}
\startcombination[2*1]
{\typebufferhs{Quadruple}}{Haskell description using partial application.}
{\boxedgraphic{Quadruple}}{The architecture described by the Haskell description.}
\stopcombination
Here, the definition of \hs{mul} is a partial function application: it applies
\hs{2 :: Word} to the function \hs{(*) :: Word -> Word -> Word}, resulting in
the expression \hs{(*) 2 :: Word -> Word}. Since this resulting expression
is again a function, we can't generate hardware for it directly. This is
because the hardware to generate for \hs{mul} depends completely on where
and how it is used. In this example, it is even used twice!
However, it is clear that the above hardware description actually describes
valid hardware. In general, we can see that any partially applied function
must eventually become completely applied, at which point we can generate
hardware for it using the rules for function application above. It might
mean that a partial application is passed around quite a bit (even beyond
function boundaries), but eventually, the partial application will become
completely applied.
\section{State}
A very important concept in hardware designs is \emph{state}. In a
stateless (or \emph{combinatoric}) design, every output is directly and solely dependent on the
inputs. In a stateful design, the outputs can depend on the history of
inputs, or the \emph{state}. State is usually stored in \emph{registers},
which retain their value during a clock cycle, and are typically updated at
the start of every clock cycle. Since the updating of the state is tightly
coupled (synchronized) to the clock signal, these state updates are often
called \emph{synchronous}.
To make our hardware description language useful for describing more than
simple combinatoric designs, we'll need to be able to describe state in
some way.
\subsection{Approaches to state}
In Haskell, functions are always pure (except when using unsafe
functions like \hs{unsafePerformIO}, which should be avoided whenever
possible). This means that the output of a function solely depends on
its inputs. If you evaluate a given function with given inputs, it will
always provide the same output.
This is a perfect match for a combinatoric circuit, where the outputs
also solely depend on the inputs. However, when state is involved, this
no longer holds. Since we're in charge of our own language, we could
remove this purity constraint and allow a function to return different
values depending on the cycle in which it is evaluated (or rather, the
current state). However, this means that all kinds of interesting
properties of our functional language get lost, and all kinds of
transformations and optimizations might no longer be meaning preserving.
Provided that we want to keep the function pure, the current state has
to be present in the function's arguments in some way. There seem to be
two obvious ways to do this: adding the current state as an argument, or
including the full history of each argument.
\subsubsection{Stream arguments and results}
Including the entire history of each input (\eg, the value of that
input for each previous clock cycle) is an obvious way to make outputs
depend on all previous input. This is easily done by making every
input a list instead of a single value, containing all previous values
as well as the current value.
An obvious downside of this solution is that on each cycle, all the
previous cycles must be resimulated to obtain the current state. To do
this, a recursive helper function might be needed as well,
which might be hard for the compiler to properly analyze.
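A sketch of this resimulation cost (plain Haskell, \hs{Int} in place of \hs{Word}; the names are illustrative): computing the output for cycle $n$ walks over the entire input history again.

```haskell
-- History-as-argument approach: the input to each cycle is the list
-- of all values seen so far, so the output recomputes the whole past.
accAt :: [Int] -> Int
accAt history = foldl (+) 0 history

-- Simulating n cycles resimulates every prefix of the input anew:
simulate :: [Int] -> [Int]
simulate inputs = [accAt (take n inputs) | n <- [1 .. length inputs]]
```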
A slight variation on this approach is one taken by some of the other
functional \small{HDL}s in the field (TODO: References to Lava,
ForSyDe, ...): make functions operate on complete streams. This means
that a function is no longer called on every cycle, but just once. It
takes streams as inputs instead of values, where each stream contains
all the values for every clock cycle since system start. This is easily
modeled using an (infinite) list, with one element for each clock
cycle. Since the function is only evaluated once, its output is also a
stream. Note that, since we are working with infinite lists and still
want to be able to simulate the system cycle-by-cycle, this relies
heavily on the lazy semantics of Haskell.
Since our inputs and outputs are streams, all other (intermediate)
values must be streams. All of our primitive operators (\eg, addition,
subtraction, bitwise operations, etc.) must operate on streams as
well (note that changing a single-element operation to a stream
operation can be done with \hs{map}, \hs{zipWith}, etc.).
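A minimal sketch of such lifting, modeling streams as plain Haskell lists (an assumption of this sketch; the element type \hs{Int} likewise stands in for \hs{Word}):

```haskell
-- A stream is an (infinite) list with one element per clock cycle.
type Stream a = [a]

-- A two-input operator is lifted to streams with zipWith:
addS :: Stream Int -> Stream Int -> Stream Int
addS = zipWith (+)

-- A one-input operation is lifted with map:
notS :: Stream Bool -> Stream Bool
notS = map not
```

Both lifted operators work on infinite streams as well, since \hs{map} and \hs{zipWith} are lazy.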
Note that the concept of \emph{state} is no more than having some way
to communicate a value from one cycle to the next. By introducing a
\hs{delay} function, we can do exactly that: delay (each value in) a
stream so that we can \quote{look into} the past. This \hs{delay} function
simply outputs a stream where each value is the same as the input
value, but shifted one cycle. This causes a \quote{gap} at the
beginning of the stream: what is the value of the delay output in the
first cycle? For this, the \hs{delay} function has a second input
(which is a value, not a stream!).
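On list-based streams, such a \hs{delay} can be written directly. A sketch (with \hs{Int} elements, and \hs{zipWith (+)} standing in for an overloaded \hs{+} on streams):

```haskell
type Stream a = [a]

-- delay shifts its input stream by one cycle; the second argument
-- (a plain value, not a stream) fills the gap in the first cycle.
delay :: Stream a -> a -> Stream a
delay xs initial = initial : xs

-- An accumulator built from delay in a feedback loop; Haskell's lazy
-- semantics make the recursive definition of out well-defined.
acc :: Stream Int -> Stream Int
acc inp = out
  where out = zipWith (+) (delay out 0) inp
```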
\in{Example}[ex:DelayAcc] shows a simple accumulator expressed in this
way.
\startbuffer[DelayAcc]
acc :: Stream Word -> Stream Word
acc in = out
  where
    out = (delay out 0) + in
\stopbuffer
\startuseMPgraphic{DelayAcc}
save in, out, add, reg;

newCircle.in(btex $in$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";

newReg.reg("") "dx(4mm)", "dy(6mm)", "reflect(true)";
newCircle.add(btex + etex);

add.c = in.c + (2cm, 0cm);
out.c = add.c + (2cm, 0cm);
reg.c = add.c + (0cm, 2cm);

% Draw objects and lines
drawObj(in, out, add, reg);

ncline(in)(add);
nccurve(add)(reg) "angleA(0)", "angleB(180)", "posB(d)";
nccurve(reg)(add) "angleA(180)", "angleB(-45)", "posA(out)";
ncline(add)(out);
\stopuseMPgraphic
\placeexample[here][ex:DelayAcc]{Simple accumulator architecture.}
\startcombination[2*1]
{\typebufferhs{DelayAcc}}{Haskell description using streams.}
{\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
\stopcombination
This notation can be confusing (especially due to the loop in the
definition of \hs{out}), but is essentially easy to interpret. There is a
single call to \hs{delay}, resulting in a circuit with a single register,
whose input is connected to \hs{out} (which is the output of the
adder), and whose output is the \hs{delay out 0} expression (which is connected
to one of the adder inputs).
This notation has a number of downsides, amongst which are limited
readability and ambiguity in the interpretation. TODO: Reference
\subsubsection{Explicit state arguments and results}
A more explicit way to model state is to simply add an extra argument
containing the current state value. This allows an output to depend on
both the inputs as well as the current state while keeping the
function pure (letting the result depend only on the arguments), since
the current state is now an argument.
In Haskell, this would look like \in{example}[ex:ExplicitAcc].
\startbuffer[ExplicitAcc]
-- input -> current state -> (new state, output)
acc :: Word -> State Word -> (State Word, Word)
acc in (State s) = (State s', out)
  where
    out = s + in
    s'  = out
\stopbuffer
\placeexample[here][ex:ExplicitAcc]{Simple accumulator architecture.}
\startcombination[2*1]
{\typebufferhs{ExplicitAcc}}{Haskell description using explicit state arguments.}
% Picture is identical to the one we had just now.
{\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
\stopcombination
This approach makes a function's state very explicit: which state
variables are used by a function can be completely determined from its
type signature (as opposed to the stream approach, where a function
looks the same from the outside, regardless of what state variables it
uses, or whether it is stateful at all).

This approach is the one chosen for Cλash and will be examined more
closely below.
\subsection{Explicit state specification}
We've seen the concept of explicit state in a simple example above, but
what are the implications of this approach?
\subsubsection{Substates}
Since a function's state is reflected directly in its type signature,
if a function calls other stateful functions (\eg, has subcircuits) it
has to somehow know the current state for these called functions. The
only way to do this is to put these \emph{substates} inside the
caller's state. This means that a function's state is the sum of the
states of all functions it calls, and its own state.
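As a sketch in plain Haskell (illustrative types and names, not the actual Cλash encoding), a function calling two stateful accumulators carries both substates inside its own state:

```haskell
-- Hypothetical state types: the caller's state is the product of the
-- substates of the functions it calls (plus any state of its own).
type AccState    = Int
type CallerState = (AccState, AccState)  -- two accumulator substates

-- A stateful accumulator: current state in, (new state, output) out.
acc :: Int -> AccState -> (AccState, Int)
acc x s = (s', s')
  where s' = s + x

-- The caller threads each substate to the corresponding call:
twoAccs :: Int -> CallerState -> (CallerState, (Int, Int))
twoAccs x (s1, s2) = ((s1', s2'), (o1, o2))
  where
    (s1', o1) = acc x s1
    (s2', o2) = acc (2 * x) s2
```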
This also means that the type of a function (at least the \quote{state}
part) is dependent on its implementation and the functions it calls.
This is the major downside of this approach: the separation between
interface and implementation is limited. However, since Cλash is not
very suitable for separate compilation (see
\in{section}[sec:prototype:separate]) this is not a big problem in
practice. Additionally, when using a type synonym for the state type
of each function, we can still provide explicit type signatures
while keeping the state specification for a function near its
definition.
We need some way to know which arguments should become input ports and
which argument(s?) should become the current state (\eg, be bound to
the register outputs). This holds not just for the top
level function, but also for any subfunctions. Or could we perhaps
deduce the statefulness of subfunctions by analyzing the flow of data
in the calling functions?
To explore this matter, we make an interesting observation: we get
completely correct behaviour when we put all state registers in the
top level entity (or even outside of it). All of the state arguments
and results of subfunctions are then treated as normal input and output
ports. Effectively, a stateful function results in a stateless
hardware component that has one of its input ports connected to the
output of a register and one of its output ports connected to the
input of the same register.
Of course, even though the hardware described like this has the
correct behaviour, unless the layout tool does smart optimizations,
there will be a lot of extra wiring in the design (since registers will
not be close to the components that use them). Also, when working with
the generated \small{VHDL} code, there will be a lot of extra ports
just to pass on state values, which can get quite confusing.
To fix this, we can simply \quote{push} the registers down into the
subcircuits. When we see a register that is connected directly to a
subcircuit, we remove the corresponding input and output port and put
the register inside the subcircuit instead. This is slightly less
trivial when looking at the Haskell code instead of the resulting
circuit, but the idea is still the same.
However, when applying this technique, we might push registers down
too far: when you intend to store the result of a stateless subfunction
in the caller's state and pass the current value of that state
variable to that same function, the register would wrongly end up
inside the subfunction. It is impossible to distinguish this case from similar code where
the called function is in fact stateful. From this we can conclude
that we have to either:
\startitemize
\item accept that the generated hardware might not be exactly what we
intended, in some specific cases. In most cases, the hardware will be
what we intended; or
\item explicitly annotate state arguments and results in the input
description.
\stopitemize
The first option causes (non-obvious) exceptions in the language
interpretation. Also, automatically determining where registers should
end up is easier to implement correctly with explicit annotations, so
for these reasons we will look at how these annotations could work.
TODO: Note about conditions on state variables and checking them.
\subsection{Explicit state annotation}
To make our stateful descriptions unambiguous and easier to translate,
we need some way for the developer to describe which arguments and
results are intended to become stateful.
Roughly, we have two ways to achieve this:
\startitemize
\item Use some kind of annotation method or syntactic construction in
the language to indicate exactly which argument and (part of the)
result is stateful. This means that the annotation lives
\quote{outside} of the function; it is completely invisible when
looking at the function body.
\item Use some kind of annotation on the type level, \eg, give stateful
arguments and (part of) results a different type. This has the
potential to make this annotation visible inside the function as well,
such that when looking at a value inside the function body you can
tell if it is stateful by looking at its type. This could possibly make
the translation process a lot easier, since less analysis of the
program flow might be required.
\stopitemize
Of these approaches, the type level \quote{annotations} have been
implemented in Cλash. \in{Section}[sec:prototype:statetype] expands on
the possible ways this could have been implemented.
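A minimal sketch of the idea (hypothetical types, not the exact Cλash \hs{State} machinery): wrapping state values in a distinct type makes statefulness visible both in the type signature and inside the function body.

```haskell
-- Hypothetical type-level annotation: a newtype wrapper marks which
-- argument and which part of the result carry state.
newtype State a = State a
  deriving (Eq, Show)

-- The signature alone reveals that acc is stateful, and inside the
-- body the wrapped values are recognizably state as well.
acc :: Int -> State Int -> (State Int, Int)
acc x (State s) = (State s', s')
  where s' = s + x
```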
\section[sec:recursion]{Recursion}
An important concept in functional languages is recursion. In its most basic
form, recursion is a function that is defined in terms of itself. This
usually requires multiple evaluations of this function, with changing
arguments, until eventually an evaluation of the function no longer requires
itself.
Recursion in a hardware description is a bit of a funny thing. Usually,
recursion is associated with a lot of nondeterminism and stack overflows, but
also with flexibility and expressive power.
Given the notion that each function application will translate to a
component instantiation, we are presented with a problem. A recursive
function would translate to a component that contains itself. Or, more
precisely, that contains an instance of itself. This instance would again
contain an instance of itself, and again, into infinity. This is obviously a
problem for generating hardware.
This is expected for functions that describe infinite recursion. In that
case, we can't generate hardware that shows correct behaviour in a single
cycle (at best, we could generate hardware that needs an infinite number of
cycles to compute its result).
However, most recursive hardware descriptions will describe finite
recursion. This is because the recursive call is done conditionally. There
is usually a case expression where at least one alternative does not contain
the recursive call, which we call the \quote{base case}. If, for each call to the
recursive function, we would be able to detect which alternative applies,
we would be able to remove the case expression and leave only the base case
when it applies. This will ensure that expanding the recursive function
will terminate after a bounded number of expansions.
This does imply the extra requirement that the base case is detectable at
compile time. In particular, this means that the decision between the base
case and the recursive case must not depend on runtime data.
\subsection{List recursion}
The most common deciding factor in recursion is the length of a list that is
passed in as an argument. Since we represent lists as vectors that encode
the length in the vector type, it seems easy to determine the base case. We
can simply look at the argument type for this. However, it turns out that
this is rather non-trivial to write down in Haskell in the first place. As
an example, we would like to write down something like this:
sum :: Vector n Word -> Word
sum xs = case null xs of
  True  -> 0
  False -> head xs + sum (tail xs)
However, the typechecker will now use the following reasoning (the element type
of the vector is left out):
\startitemize
\item \hs{tail} has the type \hs{(n > 0) => Vector n -> Vector (n - 1)}
\item This means that \hs{xs} must have the type \hs{(n > 0) => Vector n}
\item This means that \hs{sum} must have the type \hs{(n > 0) => Vector n -> a}
\item \hs{sum} is called with the result of \hs{tail} as an argument, which has the
type \hs{Vector n} (since \hs{(n > 0) => n - 1 == m}).
\item This means that \hs{sum} must have the type \hs{Vector n -> a}
\item This is a contradiction between the type deduced from the body of \hs{sum}
(the input vector must be non-empty) and the use of \hs{sum} (the input vector
could have any length).
\stopitemize
As you can see, using a simple case expression at value level causes the type
checker to always typecheck both alternatives, which can't be done! Since
we need to switch between two implementations of the \hs{sum}
function based on the type of the argument, this sounds like the perfect
problem to solve with a type class. However, this approach has its own
problems (not the least of them that you need to define a new type class for
every recursive function you want to define).
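A sketch of what such a type class looks like (illustrative, with \hs{Int} elements and the vector shape encoded in plain data types): the type checker selects the base case by instance resolution, but the \hs{Sum} class serves this one function only.

```haskell
-- The vector shape is encoded in the type: Nil is empty, Cons adds
-- one element in front of another vector.
data Nil    = Nil
data Cons v = Cons Int v

-- One class per recursive function, one instance per vector shape.
class Sum v where
  vsum :: v -> Int

-- Base case, selected when the argument type is Nil:
instance Sum Nil where
  vsum Nil = 0

-- Recursive case, selected for any non-empty vector:
instance Sum v => Sum (Cons v) where
  vsum (Cons x xs) = x + vsum xs
```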
Another approach we tried involved using \small{GADT}s to be able to do pattern
matching on empty and non-empty lists. While this worked partially, it also
created problems with more complex expressions.
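A sketch of the kind of \small{GADT} encoding meant here (illustrative; this is the simple case, which is precisely where the approach worked): matching on the constructor tells the type checker whether the vector is empty, so both alternatives typecheck at their own lengths.

```haskell
{-# LANGUAGE GADTs, EmptyDataDecls #-}

-- Type-level naturals encoding the vector length.
data Zero
data Succ n

-- A GADT vector: the constructor determines the length index.
data Vector n a where
  VNil  :: Vector Zero a
  VCons :: a -> Vector n a -> Vector (Succ n) a

-- Pattern matching refines the type: the VNil branch is checked at
-- length Zero, the VCons branch at a non-zero length.
vsum :: Vector n Int -> Int
vsum VNil         = 0
vsum (VCons x xs) = x + vsum xs
```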
TODO: How much detail should there be here? I can probably refer to
Evaluating all possible (and non-possible) ways to add recursion to our
descriptions, it seems better to leave out list recursion altogether. This
allows us to focus on other interesting areas instead. By including
built-in support for a number of higher order functions like \hs{map} and \hs{fold},
we can still express most of the things we would use list recursion for.
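For instance, with plain Haskell lists standing in for the fixed-length vector type (an assumption of this sketch), the recursive \hs{sum} above needs no explicit recursion at all:

```haskell
-- Summing the elements: a fold replaces the explicit recursion.
sumV :: [Int] -> Int
sumV = foldl (+) 0

-- Element-wise transformation: map replaces recursion over elements.
scaleV :: [Int] -> [Int]
scaleV = map (2 *)
```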
\subsection{General recursion}
Of course there are other forms of recursion that do not depend on the
length (and thus type) of a list. For example, simple recursion using a
counter could be expressed, but only translated to hardware for a fixed
number of iterations. Also, this would require extensive support for compile
time simplification (constant propagation) and compile time evaluation
(evaluation of constant comparisons) to ensure termination. Even then, it
is hard to really guarantee termination, since the user (or GHC desugarer)
might use some obscure notation that results in a corner case of the
simplifier that is not caught, and thus non-termination.

Due to these complications, we leave other forms of recursion as
future work.