\chapter[chap:description]{Hardware description}
This chapter will provide an overview of the hardware description language
that was created and the issues that have arisen in the process. It will
focus on the issues of the language, not the implementation. The prototype
implementation will be discussed in \in{chapter}[chap:prototype].

\todo{Short introduction to Cλash (Bit, Word, and, not, etc.)}
When translating Haskell to hardware, we need to make choices about what kind
of hardware to generate for which Haskell constructs. When faced with
choices, we've tried to stick with the most obvious choice wherever
possible. In a lot of cases, when you look at a hardware description it is
completely clear what hardware is described. We want our translator to
generate exactly that hardware whenever possible, to make working with Cλash
as intuitive as possible.

In this chapter we try to describe how we interpret a Haskell program from a
hardware perspective. We provide a description of each Haskell language
element that needs translation, to provide a clear picture of the hardware
that is described.
\section[sec:description:application]{Function application}
\todo{Sidenote: Inputs vs arguments}
The basic syntactic elements of a functional program are functions and
function application. These have a single obvious \small{VHDL} translation: Each
function becomes a hardware component, where each argument is an input port
and the result value is the (single) output port. This output port can have
a complex type (such as a tuple), so having just a single output port does
not pose a limitation.
Each function application in turn becomes a component instantiation. Here, the
result of each argument expression is assigned to a signal, which is mapped
to the corresponding input port. The output port of the function is also
mapped to a signal, which is used as the result of the application.

\in{Example}[ex:And3] shows a simple program using only function
application and the corresponding architecture.
\startbuffer[And3]
-- | A simple function that returns
-- the conjunction of three bits
and3 :: Bit -> Bit -> Bit -> Bit
and3 a b c = and (and a b) c
\stopbuffer
\todo{Mirror this image vertically}
\startuseMPgraphic{And3}
save a, b, c, anda, andb, out;

newCircle.a(btex $a$ etex) "framed(false)";
newCircle.b(btex $b$ etex) "framed(false)";
newCircle.c(btex $c$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";
newCircle.anda(btex $and$ etex);
newCircle.andb(btex $and$ etex);

b.c = a.c + (0cm, 1cm);
c.c = b.c + (0cm, 1cm);
anda.c = midpoint(a.c, b.c) + (2cm, 0cm);
andb.c = midpoint(b.c, c.c) + (4cm, 0cm);
out.c = andb.c + (2cm, 0cm);

% Draw objects and lines
drawObj(a, b, c, anda, andb, out);
ncarc(a)(anda) "arcangle(-10)";
\stopuseMPgraphic
\placeexample[here][ex:And3]{Simple three input and gate.}
\startcombination[2*1]
{\typebufferhs{And3}}{Haskell description using function applications.}
{\boxedgraphic{And3}}{The architecture described by the Haskell description.}
\stopcombination

\note{This section should also mention hierarchy and top level functions.}

\section{Choice}
Although describing components and connections allows us to describe a lot of
hardware designs already, there is an obvious thing missing: Choice. We
need some way to be able to choose between values based on another value.
In Haskell, choice is achieved by \hs{case} expressions, \hs{if} expressions or
pattern matching.
An obvious way to add choice to our language without having to recognize
any of Haskell's syntax, would be to add a primitive \quote{\hs{if}}
function. This function would take three arguments: The condition, the
value to return when the condition is true and the value to return when
the condition is false.
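As a sketch of what such a primitive could look like (the name \hs{hwif}
and its definition are hypothetical, not actual Cλash primitives):

\starttyping
-- Hypothetical multiplexer primitive: select one of
-- two values based on a single select bit.
hwif :: Bit -> a -> a -> a
hwif High t _ = t
hwif Low  _ f = f
\stoptyping

In hardware, both value inputs would always be computed; the condition
only selects which of the two reaches the output.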
This \hs{if} function would then essentially describe a multiplexer and
allow us to describe any architecture that uses multiplexers. \fxnote{Are
there other mechanisms of choice in hardware?}
However, to be able to describe our hardware in a more convenient way, we
also want to translate Haskell's choice mechanisms. The easiest of these
are of course \hs{case} expressions (and \hs{if} expressions, which can be very
directly translated to \hs{case} expressions). A \hs{case} expression can in turn
simply be translated to a conditional assignment, where the conditions use
equality comparisons against the constructors in the \hs{case} expressions.

\todo{Assignment vs multiplexers}
In \in{example}[ex:CaseInv] a simple \hs{case} expression is shown,
scrutinizing a Boolean value. The corresponding architecture has a
comparator to determine which of the constructors is on the \hs{in}
input. There is a multiplexer to select the output signal. The two options
for the output signals are just constants, but these could have been more
complex expressions (in which case both of them would be working in
parallel, regardless of which output would be chosen eventually).
If we were to translate a Boolean to a bit value, we could of course remove
the comparator and directly feed \hs{in} into the multiplexer (or even use an
inverter instead of a multiplexer). However, we will try to make a
general translation, which works for all possible \hs{case} expressions.
Optimizations such as these are left for the \VHDL synthesizer, which
handles them very well.

\todo{Be more explicit about >2 alternatives}
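To sketch what a \hs{case} expression with more than two alternatives could
look like (the \hs{Op} type and \hs{apply} function are hypothetical
examples, not part of Cλash):

\starttyping
data Op = Add | Sub | Keep

apply :: Op -> Word -> Word -> Word
apply op a b = case op of
  Add  -> a + b
  Sub  -> a - b
  Keep -> a
\stoptyping

Each alternative would generate its own hardware (an adder, a subtractor
and a plain connection, all working in parallel), with comparators against
the three constructors driving a chain of multiplexers that selects the
output.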
\startbuffer[CaseInv]
inv :: Bool -> Bool
inv x = case x of
  True -> False
  False -> True
\stopbuffer
\startuseMPgraphic{CaseInv}
save in, truecmp, falseout, trueout, out, cmp, mux;

newCircle.in(btex $in$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";
newBox.truecmp(btex $True$ etex) "framed(false)";
newBox.trueout(btex $True$ etex) "framed(false)";
newBox.falseout(btex $False$ etex) "framed(false)";
newCircle.cmp(btex $==$ etex);
newMux.mux;

cmp.c = in.c + (3cm, 0cm);
truecmp.c = cmp.c + (-1cm, 1cm);
mux.sel = cmp.e + (1cm, -1cm);
falseout.c = mux.inpa - (2cm, 0cm);
trueout.c = mux.inpb - (2cm, 0cm);
out.c = mux.out + (2cm, 0cm);

% Draw objects and lines
drawObj(in, out, truecmp, trueout, falseout, cmp, mux);
nccurve(cmp.e)(mux.sel) "angleA(0)", "angleB(-90)";
ncline(falseout)(mux) "posB(inpa)";
ncline(trueout)(mux) "posB(inpb)";
ncline(mux)(out) "posA(out)";
\stopuseMPgraphic
\placeexample[here][ex:CaseInv]{Simple inverter.}
\startcombination[2*1]
{\typebufferhs{CaseInv}}{Haskell description using a \hs{case} expression.}
{\boxedgraphic{CaseInv}}{The architecture described by the Haskell description.}
\stopcombination
A slightly more complex (but very powerful) form of choice is pattern
matching. A function can be defined in multiple clauses, where each clause
specifies a pattern. When the arguments match the pattern, the
corresponding clause will be used.
\startbuffer[PatternInv]
inv :: Bool -> Bool
inv True = False
inv False = True
\stopbuffer
\placeexample[here][ex:PatternInv]{Simple inverter using pattern matching.
Describes the same architecture as \in{example}[ex:CaseInv].}
{\typebufferhs{PatternInv}}

The architecture described by \in{example}[ex:PatternInv] is of course the
same one as the one in \in{example}[ex:CaseInv]. The general interpretation
of pattern matching is also similar to that of \hs{case} expressions: Generate
hardware for each of the clauses (like each of the clauses of a \hs{case}
expression) and connect them to the function output through (a number of
nested) multiplexers. These multiplexers are driven by comparators and
other logic that check each pattern in turn.
\section{Types}
We've seen the two most basic functional concepts translated to hardware:
Function application and choice. Before we look further into less obvious
concepts like higher order expressions and polymorphism, we will have a
look at the types of the values we use in our descriptions.

When working with values in our descriptions, we'll also need to provide
some way to translate those values to hardware equivalents. In
particular, this means having to come up with a hardware equivalent for
every \emph{type} used in our program.
Since most functional languages have a lot of standard types that are
hard to translate (integers without a fixed size, lists without a static
length, etc.), we will start out by defining a number of \quote{builtin}
types ourselves. These types are builtin in the sense that our compiler
will have a fixed \small{VHDL} type for each of them. User defined types, on the
other hand, will have their hardware type derived directly from their Haskell
declaration automatically, according to the rules we sketch here.

\todo{Introduce Haskell type syntax (type constructors, type application,
etc.)}
\subsection{Builtin types}
The language currently supports the following builtin types. Of these,
only the \hs{Bool} type is supported by Haskell out of the box (the
others are defined by the Cλash package, so they are user-defined types
from Haskell's point of view).

\startdesc{\hs{Bit}}
This is the most basic type available. It is mapped directly onto
the \type{std_logic} \small{VHDL} type. Mapping this to the
\type{bit} type might make more sense (since the Haskell version
only has two values), but using \type{std_logic} is more standard
(and allowed for some experimentation with don't care values).

\todo{Sidenote bit vs stdlogic}
\stopdesc
\startdesc{\hs{Bool}}
This is the only builtin Haskell type supported and is translated
exactly like the \hs{Bit} type (where a value of \hs{True} corresponds to a
value of \hs{High}). Supporting the \hs{Bool} type is particularly
useful to support \hs{if ... then ... else ...} expressions, which
always have a \hs{Bool} value for the condition.

A \hs{Bool} is translated to a \type{std_logic}, just like \hs{Bit}.
\stopdesc
\startdesc{\hs{SizedWord}, \hs{SizedInt}}
These are types to represent integers. A \hs{SizedWord} is unsigned,
while a \hs{SizedInt} is signed. These types are parameterized by a
length type, so you can define an unsigned word of 32 bits wide as
follows:

\starttyping
type Word32 = SizedWord D32
\stoptyping

Here, a type synonym \hs{Word32} is defined that is equal to the
\hs{SizedWord} type constructor applied to the type \hs{D32}. \hs{D32}
is the \emph{type level representation} of the decimal number 32,
making the \hs{Word32} type a 32-bit unsigned word.

These types are translated to the \small{VHDL} \type{unsigned} and
\type{signed} types respectively.
\todo{Sidenote on dependent typing?}
\stopdesc
\startdesc{\hs{Vector}}
This is a vector type that can contain elements of any other type and
has a fixed length. The \hs{Vector} type constructor takes two type
arguments: The length of the vector and the type of the elements
contained in it. By putting the length parameter in the type, the length
of a vector can be determined at compile time, instead of only at
runtime as for conventional lists.

The state type of an 8 element register bank would then for example be:

\starttyping
type RegisterState = Vector D8 Word32
\stoptyping

Here, a type synonym \hs{RegisterState} is defined that is equal to
the \hs{Vector} type constructor applied to the types \hs{D8} (the type
level representation of the decimal number 8) and \hs{Word32} (the 32
bit word type as defined above). In other words, the
\hs{RegisterState} type is a vector of 8 32-bit words.

A fixed size vector is translated to a \small{VHDL} array type.
\stopdesc
\startdesc{\hs{RangedWord}}
This is another type to describe integers, but unlike the previous
two it has no specific bitwidth, but an upper bound. This means that
its range is not limited to powers of two, but can be any number.
A \hs{RangedWord} only has an upper bound; its lower bound is
implicitly zero. There is a lot of added implementation complexity
when adding a lower bound, and having just an upper bound was enough
for the primary purpose of this type: Typesafely indexing vectors.

To define an index for the 8 element vector above, we would do:

\starttyping
type RegisterIndex = RangedWord D7
\stoptyping

Here, a type synonym \hs{RegisterIndex} is defined that is equal to
the \hs{RangedWord} type constructor applied to the type \hs{D7}. In
other words, this defines an unsigned word with values from 0 to 7
(inclusive). This word can be used to index the 8 element vector
\hs{RegisterState} above.

This type is translated to the \type{unsigned} \small{VHDL} type.
\stopdesc
\fxnote{There should be a reference to Christiaan's work here.}
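To summarize, the builtin types above can be combined into a small
register bank read function. This is just an illustrative sketch: the
\hs{readReg} name is hypothetical and \hs{(!)} is assumed to be a vector
indexing operator.

\starttyping
type Word32        = SizedWord D32
type RegisterState = Vector D8 Word32
type RegisterIndex = RangedWord D7

-- Select one register from the bank, assuming (!)
-- is the vector indexing operator.
readReg :: RegisterState -> RegisterIndex -> Word32
readReg regs i = regs ! i
\stoptyping

Because \hs{RegisterIndex} can only hold values from 0 to 7, such an
index can never point outside the 8 element vector.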
\subsection{User-defined types}
There are three ways to define new types in Haskell: Algebraic
datatypes with the \hs{data} keyword, type synonyms with the \hs{type}
keyword and type renamings with the \hs{newtype} keyword. \GHC
offers a few more advanced ways to introduce types (type families,
existential typing, \small{GADT}s, etc.) which are not standard
Haskell. These will be left outside the scope of this research.

Only an algebraic datatype declaration actually introduces a
completely new type, for which we provide the \VHDL translation
below. Type synonyms and renamings only define new names for
existing types (where synonyms are completely interchangeable and
renamings need explicit conversion). Therefore, these don't need
any particular \VHDL translation; a synonym or renamed type will
just use the same representation as the original type. The
distinction between a renaming and a synonym no longer matters
in hardware and can be disregarded in the generated \VHDL.
For algebraic types, we can make the following distinction:
\startdesc{Product types}
A product type is an algebraic datatype with a single constructor with
two or more fields, denoted in practice like \hs{(a,b)}, \hs{(a,b,c)}, etc. This
is essentially a way to pack a few values together in a record-like
structure. In fact, the builtin tuple types are just algebraic product
types (and are thus supported in exactly the same way).

The \quote{product} in its name refers to the collection of values belonging
to this type. The collection for a product type is the Cartesian
product of the collections for the types of its fields.

These types are translated to \VHDL record types, with one field for
every field in the constructor. This translation applies to all single
constructor algebraic datatypes, including those with just one
field (which are technically not a product, but generate a \VHDL
record for implementation simplicity).
\stopdesc
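As a sketch, a product type packing a bit and a word together (the type
name here is hypothetical):

\starttyping
-- A single constructor with two fields.
data BitAndWord = BitAndWord Bit Word
\stoptyping

This would be translated to a \VHDL record with two fields: one of type
\type{std_logic} and one of the type generated for \hs{Word}.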
\startdesc{Enumerated types}
\defref{enumerated types}
An enumerated type is an algebraic datatype with multiple constructors, but
none of them have fields. This is essentially a way to get an
enum-like type containing alternatives.

Note that Haskell's \hs{Bool} type is also defined as an
enumeration type, but we have a fixed translation for that.

These types are translated to \VHDL enumerations, with one value for
each constructor. This allows references to these constructors to be
translated to the corresponding enumeration value.
\stopdesc
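As a sketch, a hypothetical enumerated type with three alternatives:

\starttyping
-- Three constructors, none of them have fields.
data TrafficLight = Red | Orange | Green
\stoptyping

This would be translated to a \VHDL enumeration with three values; a
reference to \hs{Orange} then simply becomes the corresponding
enumeration value.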
\startdesc{Sum types}
A sum type is an algebraic datatype with multiple constructors, where
the constructors have one or more fields. Technically, a type with
more than one field per constructor is a sum of products type, but
for our purposes this distinction does not really make a difference,
so we'll leave it out.

The \quote{sum} in its name refers again to the collection of values
belonging to this type. The collection for a sum type is the
union of the collections for each of the constructors.

Sum types are currently not supported by the prototype, since there is
no obvious \VHDL alternative. They can easily be emulated, however, as
we will see from an example:

\starttyping
data Sum = A Bit Word | B Word
\stoptyping

An obvious way to translate this would be to create an enumeration to
distinguish the constructors and then create a big record that
contains all the fields of all the constructors. This is the same
translation that would result from the following enumeration and
product type (using a tuple for clarity):

\starttyping
data SumC = A | B

type Sum = (SumC, Bit, Word, Word)
\stoptyping
Here, the \hs{SumC} type effectively signals which of the latter three
fields of the \hs{Sum} type are valid (the first two if \hs{A}, the
last one if \hs{B}); all the other ones have no useful value.

An obvious problem with this naive approach is the space usage: The
example above generates a fairly big \VHDL type. However, we can be
sure that the two \hs{Word}s in the \hs{Sum} type will never be valid
at the same time, so this is a waste of space.

Obviously, we could do some duplication detection here to reuse a
particular field for another constructor, but this would only
partially solve the problem. If we would, for example, have an array of
8 bits and an 8 bit unsigned word, these are different types and could
not be shared. However, in the final hardware, both of these types
would simply be 8 bit connections, so we have a 100\% size increase by
not sharing them.
\stopdesc
Another interesting case is that of recursive types. In Haskell, an
algebraic datatype can be recursive: Any of its field types can be (or
contain) the type being defined. The most well-known recursive type is
probably the list type, which is defined as:

\starttyping
data List t = Empty | Cons t (List t)
\stoptyping

Note that \hs{Empty} is usually written as \hs{[]} and \hs{Cons} as
\hs{:}, but this would make the definition harder to read. This
immediately shows the problem with recursive types: What hardware type
should be used for a value of type \hs{List t}?
If we were to use the naive approach for sum types we described above, we
would create a record where the first field is an enumeration to
distinguish \hs{Empty} from \hs{Cons}. Furthermore, we would add two
more fields: One with the (\VHDL equivalent of) type \hs{t} (assuming we
actually know what type this is at compile time, this should not be a
problem) and a second one with type \hs{List t}. The latter one is of
course a problem: This is exactly the type we were trying to translate
in the first place.

Our \VHDL type will thus become infinitely deep. In other words, there
is no way to statically determine how long (deep) the list will be
(it could even be infinite).
In general, we can say we can never properly translate recursive types:
All recursive types have a potentially infinite value (even though in
practice they will have a bounded value, there is no way for the
compiler to determine an upper bound on their size).
\subsection{Partial application}
Now that we've seen how to translate application, choice and types, we will
get to a more complex concept: Partial applications. A \emph{partial
application} is any application whose (return) type is (again) a function
type.

From this, we can see that the translation rules for full application do not
apply to a partial application. \in{Example}[ex:Quadruple] shows an example
use of partial application and the corresponding architecture.
\startbuffer[Quadruple]
-- | Multiply the input word by four.
quadruple :: Word -> Word
quadruple n = mul (mul n)
  where
    mul = (*) 2
\stopbuffer
\startuseMPgraphic{Quadruple}
save in, two, mula, mulb, out;

newCircle.in(btex $n$ etex) "framed(false)";
newCircle.two(btex $2$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";
newCircle.mula(btex $\times$ etex);
newCircle.mulb(btex $\times$ etex);

in.c = two.c + (0cm, 1cm);
mula.c = in.c + (2cm, 0cm);
mulb.c = mula.c + (2cm, 0cm);
out.c = mulb.c + (2cm, 0cm);

% Draw objects and lines
drawObj(in, two, mula, mulb, out);
nccurve(two)(mula) "angleA(0)", "angleB(45)";
nccurve(two)(mulb) "angleA(0)", "angleB(45)";
\stopuseMPgraphic
\placeexample[here][ex:Quadruple]{Simple quadruple circuit.}
\startcombination[2*1]
{\typebufferhs{Quadruple}}{Haskell description using function applications.}
{\boxedgraphic{Quadruple}}{The architecture described by the Haskell description.}
\stopcombination
Here, the definition of \hs{mul} is a partial function application: It applies
the function \hs{(*) :: Word -> Word -> Word} to the value \hs{2 :: Word},
resulting in the expression \hs{(*) 2 :: Word -> Word}. Since this resulting
expression is again a function, we can't generate hardware for it directly.
This is because the hardware to generate for \hs{mul} depends completely on
where and how it is used. In this example, it is even used twice.
However, it is clear that the above hardware description actually describes
valid hardware. In general, we can see that any partially applied function
must eventually become completely applied, at which point we can generate
hardware for it using the rules for function application given in
\in{section}[sec:description:application]. It might mean that a partial
application is passed around quite a bit (even beyond function boundaries),
but eventually, the partial application will become completely applied.
\todo{Provide a step-by-step example of how this works}
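As a sketch of how this could work for \in{example}[ex:Quadruple]: the
partial application bound to \hs{mul} can be propagated to its use sites,
after which every application is complete:

\starttyping
quadruple n = mul (mul n)
  where
    mul = (*) 2

-- Propagating the partial application (*) 2 to both
-- use sites makes every application complete:
--
--   quadruple n = (*) 2 ((*) 2 n)
\stoptyping

At this point the ordinary translation rules for function application
apply, and each complete application of \hs{(*)} becomes its own
multiplier component.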
\section{Costless specialization}
Each (complete) function application in our description generates a
component instantiation, or a specific piece of hardware in the final
design. It is interesting to note that each application of a function
generates a \emph{separate} piece of hardware. In the final design, none
of the hardware is shared between applications, even when the applied
function is the same (of course, if a particular value, such as the result
of a function application, is used twice, it is not calculated twice).
This is distinctly different from normal program compilation: Two separate
calls to the same function share the same machine code. Having more
machine code has implications for speed (due to less efficient caching)
and memory usage. For normal compilation, it is therefore important to
keep the number of functions limited and maximize the code sharing.
When generating hardware, this is hardly an issue. Having more \quote{code
sharing} does reduce the amount of \small{VHDL} output (since different
component instantiations still share the same component), but after
synthesis, the amount of hardware generated is not affected.

In particular, if we would duplicate all functions so that there is a
separate function for every application in the program (\eg, each function
is then only applied exactly once), there would be no increase in hardware
size.
Because of this, a common optimization technique called
\emph{specialization} can be applied to hardware generation without any
performance or area cost (unlike for software).

\fxnote{Perhaps these next three sections are a bit too
implementation-oriented?}
\subsection{Specialization}
\defref{specialization}
Given some function that has a \emph{domain} $D$ (\ie, the set of all
possible arguments that could be applied), we create a specialized
function with exactly the same behaviour, but with a domain $D' \subset
D$. This subset can be chosen in all sorts of ways. Any subset is valid
for the general definition of specialization, but in practice only some
of them provide useful optimization opportunities.

Common subsets include limiting a polymorphic argument to a single type
(\ie, removing polymorphism) or limiting an argument to just a single
value (\ie, cross-function constant propagation, effectively removing
the argument).
Since we limit the argument domain of the specialized function, its
definition can often be optimized further (since now more types or even
values of arguments are already known). By replacing any application of
the function that falls within the reduced domain by an application of
the specialized version, the code gets faster (but the code also gets
bigger, since we now have two versions instead of one). If we apply
this technique often enough, we can often replace all applications of a
function by specialized versions, allowing the original function to be
removed (in some cases, this can even give a net reduction of the code
compared to the non-specialized version).
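As a sketch of the second kind of subset (limiting an argument to a single
value), consider a hypothetical \hs{addWord} function and a version
specialized for the constant 1:

\starttyping
-- General version.
addWord :: Word -> Word -> Word
addWord a b = a + b

-- Specialized version: the first argument is limited
-- to the single value 1.
increment :: Word -> Word
increment b = addWord 1 b
\stoptyping

Every application \hs{addWord 1 x} can be replaced by \hs{increment x};
the specialized version no longer needs an input port for the first
argument and can be optimized further (here, into an incrementer).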
Specialization is useful for our hardware descriptions for functions
that contain arguments that cannot be translated to hardware directly
(polymorphic or higher order arguments, for example). If we can create
specialized functions that remove the argument, or make it translatable,
we can use specialization to make the original, untranslatable, function
translatable after all.
\section{Higher order values}
What holds for partial application can be easily generalized to any
higher order expression. This includes partial applications, plain
variables (\eg, a binder referring to a top level function), lambda
expressions and more complex expressions with a function type (a \hs{case}
expression returning lambdas, for example).

Each of these values cannot be directly represented in hardware (just like
partial applications). Also, to make them representable, they need to be
applied: function variables and partial applications will then eventually
become complete applications, applied lambda expressions disappear by
applying β-reduction, etc.
So any higher order value will be \quote{pushed down} towards its
application just like partial applications. Whenever a function boundary
needs to be crossed, the called function can be specialized.

\fxnote{This section needs improvement and an example}
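As a sketch of such an example (the names are hypothetical): a higher
order argument can be removed by specializing the called function for the
particular function value passed to it:

\starttyping
-- A higher order function: f is a function value,
-- which cannot be represented in hardware directly.
twice :: (Word -> Word) -> Word -> Word
twice f x = f (f x)

inc :: Word -> Word
inc x = x + 1

-- An application like (twice inc x) can be replaced by
-- an application of a specialized version, in which the
-- higher order argument is gone:
twice_inc :: Word -> Word
twice_inc x = inc (inc x)
\stoptyping

After this specialization, all remaining applications are ordinary first
order applications, which translate to component instantiations as usual.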
\section{Polymorphism}
In Haskell, values can be \emph{polymorphic}: They can have multiple types. For
example, the function \hs{fst :: (a, b) -> a} is an example of a
polymorphic function: It works for tuples with any two element types. Haskell
typeclasses allow a function to work on a specific set of types, but the
general idea is the same. The opposite of this is a \emph{monomorphic}
value, which has a single, fixed, type.

% A type class is a collection of types for which some operations are
% defined. It is thus possible for a value to be polymorphic while having
% any number of \emph{class constraints}: The value is not defined for
% every type, but only for types in the type class. An example of this is
% the \hs{even :: (Integral a) => a -> Bool} function, which can map any
% value of a type that is member of the \hs{Integral} type class
When generating hardware, polymorphism can't be easily translated. How
many wires will you lay down for a value that could have any type? When
type classes are involved, what hardware components will you lay down for
a class method (whose behaviour depends on the type of its arguments)?
Note that the language currently does not allow user-defined typeclasses,
but does partly support some of the builtin typeclasses (like \hs{Num}).
Fortunately, we can again use the principle of specialization: Since every
function application generates a separate piece of hardware, we can know
the types of all arguments exactly. Provided that we don't use existential
typing, all of the polymorphic types in a function must depend on the
types of the arguments (in other words, the only way to introduce a type
variable is in a lambda abstraction).
If a function is monomorphic, all values inside it are monomorphic as
well, so any function that is applied within the function can only be
applied to monomorphic values. The applied functions can then be
specialized to work just for these specific types, removing the
polymorphism from the applied functions as well.
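As a sketch, specializing the polymorphic \hs{fst} function mentioned
above for one particular monomorphic application (the specialized name is
hypothetical):

\starttyping
-- Polymorphic version: works for any element types.
fst :: (a, b) -> a
fst (x, y) = x

-- Monomorphic specialization, for an application
-- site where the argument types are known.
fstWordBit :: (Word, Bit) -> Word
fstWordBit (x, y) = x
\stoptyping

Since every application generates a separate piece of hardware anyway,
replacing each application by such a specialized version costs nothing.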
Our top level function must not have a polymorphic type (otherwise we
wouldn't know the hardware interface to our top level function).

By induction, this means that all functions that are (indirectly) called
by our top level function (meaning all functions that are translated in
the final hardware) become monomorphic.
\section{State}
A very important concept in hardware designs is \emph{state}. In a
stateless (or, \emph{combinatoric}) design, every output is directly and solely dependent on the
inputs. In a stateful design, the outputs can depend on the history of
inputs, or the \emph{state}. State is usually stored in \emph{registers},
which retain their value during a clock cycle, and are typically updated at
the start of every clock cycle. Since the updating of the state is tightly
coupled (synchronized) to the clock signal, these state updates are often
called \emph{synchronous} behaviour.

\todo{Sidenote? Registers can contain any (complex) type}

To make our hardware description language useful to describe more than
simple combinatoric designs, we'll need to be able to describe state in
some way.
\subsection{Approaches to state}
In Haskell, functions are always pure (except when using unsafe
functions with the \hs{IO} monad, which is not supported by Cλash). This
means that the output of a function solely depends on its inputs. If you
evaluate a given function with given inputs, it will always provide the
same result.
This is a perfect match for a combinatoric circuit, where the output
also solely depends on the inputs. However, when state is involved, this
no longer holds. Since we're in charge of our own language (or at least
let's pretend we aren't using Haskell and we are), we could remove this
purity constraint and allow a function to return different values
depending on the cycle in which it is evaluated (or rather, the current
state). However, this means that all kinds of interesting properties of
our functional language get lost, and all kinds of transformations and
optimizations might no longer be meaning preserving.
Provided that we want to keep the function pure, the current state has
to be present in the function's arguments in some way. There seem to be
two obvious ways to do this: Adding the current state as an argument, or
including the full history of each argument.
\subsubsection{Stream arguments and results}
Including the entire history of each input (\ie, the value of that
input for each previous clock cycle) is an obvious way to make outputs
depend on all previous input. This is easily done by making every
input a list instead of a single value, containing all previous values
as well as the current value.

An obvious downside of this solution is that on each cycle, all the
previous cycles must be resimulated to obtain the current state. To do
this, it might be needed to have a recursive helper function as well,
which might be hard for the compiler to properly analyze.
A slight variation on this approach is one taken by some of the other
functional \small{HDL}s in the field: \todo{References to Lava,
ForSyDe, ...} Make functions operate on complete streams. This means
that a function is no longer called on every cycle, but just once. It
takes streams as inputs instead of values, where each stream contains
all the values for every clock cycle since system start. This is easily
modeled using an (infinite) list, with one element for each clock
cycle. Since the function is only evaluated once, its output must also
be a stream. Note that, since we are working with infinite lists and
still want to be able to simulate the system cycle-by-cycle, this
relies heavily on the lazy semantics of Haskell.

Since our inputs and outputs are streams, all other (intermediate)
values must be streams as well. All of our primitive operators (\eg,
addition, subtraction, bitwise operations, etc.) must operate on
streams too (note that changing a single-element operation into a stream
operation can be done with \hs{map}, \hs{zipWith}, etc.).
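As a rough sketch of this lifting (an illustration only: the \hs{Stream} type here is a stand-in, modeled as a plain Haskell list), single-element operations become stream operations like this:

```haskell
-- A stream is modeled as a (possibly infinite) list of values,
-- one element per clock cycle.
type Stream a = [a]

-- Lift a unary operation (e.g., bitwise negation) to streams.
lift1 :: (a -> b) -> Stream a -> Stream b
lift1 = map

-- Lift a binary operation (e.g., addition) to streams.
lift2 :: (a -> b -> c) -> Stream a -> Stream b -> Stream c
lift2 = zipWith

main :: IO ()
main = print (lift2 (+) [1, 2, 3] [10, 20, 30])
```

Every primitive of the language would be lifted in this way, so that descriptions can be written as if they operated on single values.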

Note that the concept of \emph{state} is no more than having some way
to communicate a value from one cycle to the next. By introducing a
\hs{delay} function, we can do exactly that: delay (each value in) a
stream so that we can \quote{look into} the past. This \hs{delay} function
simply outputs a stream where each value is the same as the input
value, but shifted one cycle. This causes a \quote{gap} at the
beginning of the stream: what is the value of the delay output in the
first cycle? For this, the \hs{delay} function has a second input
(which is a value, not a stream!).

\in{Example}[ex:DelayAcc] shows a simple accumulator expressed in this
way.

\startbuffer[DelayAcc]
acc :: Stream Word -> Stream Word
acc in = out
  where
    out = (delay out 0) + in
\stopbuffer

\startuseMPgraphic{DelayAcc}
save in, out, add, reg;

% Create objects
newCircle.in(btex $in$ etex) "framed(false)";
newCircle.out(btex $out$ etex) "framed(false)";
newReg.reg("") "dx(4mm)", "dy(6mm)", "reflect(true)";
newCircle.add(btex + etex);

% Place objects
add.c = in.c + (2cm, 0cm);
out.c = add.c + (2cm, 0cm);
reg.c = add.c + (0cm, 2cm);

% Draw objects and lines
drawObj(in, out, add, reg);
ncline(in)(add);
nccurve(add)(reg) "angleA(0)", "angleB(180)", "posB(d)";
nccurve(reg)(add) "angleA(180)", "angleB(-45)", "posA(out)";
ncline(add)(out);
\stopuseMPgraphic

\placeexample[here][ex:DelayAcc]{Simple accumulator architecture.}
\startcombination[2*1]
{\typebufferhs{DelayAcc}}{Haskell description using streams.}
{\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
\stopcombination

This notation can be confusing (especially due to the feedback loop in the
definition of \hs{out}), but is essentially easy to interpret. There is a
single call to \hs{delay}, resulting in a circuit with a single register,
whose input is connected to \hs{out} (which is the output of the
adder), and whose output is the expression \hs{delay out 0} (which is
connected to one of the adder inputs).
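To make the stream semantics of this example concrete, here is a minimal simulation sketch in plain Haskell (an illustrative assumption: streams are ordinary lazy lists, and the stream addition from the example is written out explicitly with \hs{zipWith}):

```haskell
-- Streams as lazy (possibly infinite) lists: one element per clock cycle.
type Stream a = [a]

-- Delay a stream by one cycle; the second (non-stream!) argument fills
-- the gap in the first cycle, like a register's initial value.
delay :: Stream a -> a -> Stream a
delay xs initial = initial : xs

-- The accumulator: the feedback loop through delay describes a register.
acc :: Stream Int -> Stream Int
acc inp = out
  where out = zipWith (+) (delay out 0) inp

main :: IO ()
main = print (take 4 (acc [1 ..]))
```

Note that the self-referential definition of \hs{out} only produces values because Haskell's lists are lazy: each element of \hs{out} depends only on earlier elements of \hs{out} and the current input.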

This notation has a number of downsides, amongst which are limited
readability and ambiguity in the interpretation. \todo{Reference
Christiaan, who has done further investigation}

\subsubsection{Explicit state arguments and results}
A more explicit way to model state is to simply add an extra argument
containing the current state value. This allows an output to depend on
both the inputs and the current state while keeping the
function pure (letting the result depend only on the arguments), since
the current state is now an argument.

In Haskell, this would look like
\in{example}[ex:ExplicitAcc]\footnote[notfinalsyntax]{Note that this example is not in the final
Cλash syntax}. \todo{Referencing notfinalsyntax from Introduction.tex doesn't
work}

\startbuffer[ExplicitAcc]
-- input -> current state -> (new state, output)
acc :: Word -> Word -> (Word, Word)
acc i s = (s', s')
  where
    s' = s + i
\stopbuffer

\placeexample[here][ex:ExplicitAcc]{Simple accumulator architecture.}
\startcombination[2*1]
{\typebufferhs{ExplicitAcc}}{Haskell description using explicit state arguments.}
% Picture is identical to the one we had just now.
{\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
\stopcombination

This approach makes a function's state very explicit: which state
variables are used by a function can be completely determined from its
type signature (as opposed to the stream approach, where a function
looks the same from the outside, regardless of what state variables it
uses or whether it's stateful at all).

This approach is the one chosen for Cλash and will be examined more
closely below.
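A minimal simulation sketch of this approach in plain Haskell (the \hs{run} helper and the body of \hs{acc} are illustrative assumptions, not final Cλash syntax; \hs{Word} is approximated by \hs{Int}):

```haskell
-- Explicit-state accumulator: input -> current state -> (new state, output).
acc :: Int -> Int -> (Int, Int)
acc input current = (new, new)
  where new = current + input

-- Cycle-by-cycle simulation: feed each new state back in as the
-- current state for the next cycle (this threading models the register).
run :: (i -> s -> (s, o)) -> s -> [i] -> [o]
run _ _ [] = []
run f s (i : is) = o : run f s' is
  where (s', o) = f i s

main :: IO ()
main = print (run acc 0 [1, 2, 3, 4])
```

The function itself stays pure; only the \hs{run} wrapper (or, in hardware, a register) carries the state from one cycle to the next.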

\subsection{Explicit state specification}
We've seen the concept of explicit state in a simple example above, but
what are the implications of this approach?

\subsubsection{Substates}
Since a function's state is reflected directly in its type signature,
if a function calls other stateful functions (\eg, has subcircuits), it
has to somehow know the current state for these called functions. The
only way to do this is to put these \emph{substates} inside the
caller's state. This means that a function's state is the sum of the
states of all functions it calls, and its own state. This sum
can be obtained using something simple like a tuple, or possibly
custom algebraic types for clarity.
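As a hypothetical sketch (all names here are invented for illustration), a caller's state could be a tuple that holds its own state alongside the substate of a stateful function it calls:

```haskell
-- A stateful subcircuit: a counter whose state is the current count.
-- Same shape as before: input -> current state -> (new state, output).
counter :: Int -> Int -> (Int, Int)
counter step c = (c + step, c)

-- The caller's state is a tuple: its own accumulator plus the
-- counter's substate, which it passes down and stores back.
type CallerState = (Int, Int)

caller :: Int -> CallerState -> (CallerState, Int)
caller input (accu, sub) = ((accu', sub'), out)
  where
    (sub', cnt) = counter 1 sub  -- thread the substate through the call
    accu' = accu + input
    out = accu' + cnt

main :: IO ()
main = print (caller 5 (0, 0))
```

Note how \hs{CallerState} mentions the counter's state type: the caller's type depends on the implementation of the functions it calls, which is exactly the limited interface/implementation separation discussed below.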

This also means that the type of a function (at least the \quote{state}
part) is dependent on its own implementation and on that of the functions it
calls.

This is the major downside of this approach: the separation between
interface and implementation is limited. However, since Cλash is not
very suitable for separate compilation (see
\in{section}[sec:prototype:separate]) this is not a big problem in
practice.

Additionally, when using a type synonym for the state type
of each function, we can still provide explicit type signatures
while keeping the state specification for a function near its
definition.

\subsubsection{Which arguments and results are stateful?}
\fxnote{This section should get some examples}
We need some way to know which arguments should become input ports and
which argument(s) should become the current state (\eg, be bound to
the register outputs). This does not hold just for the top
level function, but also for any subfunction. Or could we perhaps
deduce the statefulness of subfunctions by analyzing the flow of data
in the calling functions?

To explore this matter, the following observation is interesting: we
get completely correct behaviour when we put all state registers in
the top level entity (or even outside of it). All of the state
arguments and results on subfunctions are then treated as normal input and
output ports. Effectively, a stateful function results in a stateless
hardware component that has one of its input ports connected to the
output of a register and one of its output ports connected to the
input of the same register.

Of course, even though the hardware described like this has the
correct behaviour, unless the layout tool does smart optimizations,
there will be a lot of extra wiring in the design (since registers will
not be close to the components that use them). Also, when working with
the generated \small{VHDL} code, there will be a lot of extra ports
just to pass on state values, which can get quite confusing.

To fix this, we can simply \quote{push} the registers down into the
subcircuits. When we see a register that is connected directly to a
subcircuit, we remove the corresponding input and output port and put
the register inside the subcircuit instead. This is slightly less
trivial when looking at the Haskell code instead of the resulting
circuit, but the idea is still the same.

However, when applying this technique, we might push registers down
too far. When you intend to store a result of a stateless subfunction
in the caller's state and pass the current value of that state
variable to that same function, the register might get pushed down too
far. It is impossible to distinguish this case from similar code where
the called function is in fact stateful. From this we can conclude
that we have to either:

\todo{Example of wrong downpushing}

\startitemize
\item accept that the generated hardware might not be exactly what we
intended, in some specific cases (in most cases, the hardware will be
correct), or
\item explicitly annotate state arguments and results in the input
description.
\stopitemize

The first option causes (non-obvious) exceptions in the language
interpretation. Also, determining where registers should
end up is easier to implement correctly with explicit annotations, so
for these reasons we will look at how these annotations could work.

\todo{Sidenote: One or more state arguments?}

\subsection[sec:description:stateann]{Explicit state annotation}
To make our stateful descriptions unambiguous and easier to translate,
we need some way for the developer to describe which arguments and
results are intended to become stateful.

Roughly, we have two ways to achieve this:

\startitemize
\item Use some kind of annotation method or syntactic construction in
the language to indicate exactly which argument and (part of the)
result is stateful. This means that the annotation lives
\quote{outside} of the function; it is completely invisible when
looking at the function body.
\item Use some kind of annotation on the type level, \ie, give stateful
arguments and stateful (parts of) results a different type. This has the
potential to make this annotation visible inside the function as well,
such that when looking at a value inside the function body you can
tell if it's stateful by looking at its type. This could possibly make
the translation process a lot easier, since less analysis of the
program flow might be required.
\stopitemize

Of these approaches, the type level \quote{annotations} have been
implemented in Cλash. \in{Section}[sec:prototype:statetype] expands on
the possible ways this could have been implemented.

\todo{Note about conditions on state variables and checking them}

\section[sec:recursion]{Recursion}
An important concept in functional languages is recursion. In its most basic
form, recursion is a definition that is defined in terms of itself. A
recursive function is thus a function that uses itself in its body. This
usually requires multiple evaluations of this function, with changing
arguments, until eventually an evaluation of the function no longer requires
itself.

Recursion in a hardware description is a bit of a funny thing. Usually,
recursion is associated with nondeterminism and stack overflows, but
also with flexibility and expressive power.

Given the notion that each function application will translate to a
component instantiation, we are presented with a problem. A recursive
function would translate to a component that contains itself. Or, more
precisely, that contains an instance of itself. This instance would again
contain an instance of itself, and so on into infinity. This is obviously a
problem for generating hardware.

This is expected for functions that describe infinite recursion. In that
case, we can't generate hardware that shows correct behaviour in a single
cycle (at best, we could generate hardware that needs an infinite number of
cycles to compute a result).

However, most recursive hardware descriptions will describe finite
recursion. This is because the recursive call is done conditionally. There
is usually a \hs{case} expression where at least one alternative does not contain
the recursive call, which we call the \quote{base case}. If, for each call to the
recursive function, we would be able to detect at compile time which
alternative applies, we would be able to remove the \hs{case} expression and
leave only the base case when it applies. This will ensure that expanding
the recursive functions will terminate after a bounded number of expansions.

This does imply the extra requirement that the base case is detectable at
compile time. In particular, this means that the decision between the base
case and the recursive case must not depend on runtime data.

\subsection{List recursion}
The most common deciding factor in recursion is the length of a list that is
passed in as an argument. Since we represent lists as vectors that encode
the length in the vector type, it seems easy to determine the base case. We
can simply look at the argument type for this. However, it turns out that
this is rather non-trivial to write down in Haskell in the first place, even
before considering translation. As an example, we would like to write down
something like this:

\starthaskell
sum :: Vector n Word -> Word
sum xs = case null xs of
  True -> 0
  False -> head xs + sum (tail xs)
\stophaskell

However, the Haskell typechecker will now use the following reasoning (the
element type of the vector is left out). Below, we write conditions on type
variables before the \hs{=>} operator. This is not completely valid Haskell
syntax, but serves to illustrate the typechecker's reasoning. Also note that a
vector can never have a negative length, so \hs{Vector n} implicitly means
\hs{(n >= 0) => Vector n}.

\todo{This typechecker disregards the type signature}

\startitemize
\item \hs{tail} has the type \hs{(n > 0) => Vector n -> Vector (n - 1)}
\item This means that \hs{xs} must have the type \hs{(n > 0) => Vector n}
\item This means that \hs{sum} must have the type \hs{(n > 0) => Vector n -> a}
\item \hs{sum} is called with the result of \hs{tail} as an argument, which has the
type \hs{Vector n} (since \hs{(n > 0) => Vector (n - 1)} is the same as \hs{(n >= 0)
=> Vector n}, which is the same as just \hs{Vector n}).
\item This means that \hs{sum} must have the type \hs{Vector n -> a}
\item This is a contradiction between the type deduced from the body of \hs{sum}
(the input vector must be non-empty) and the use of \hs{sum} (the input vector
could have any length).
\stopitemize

As you can see, using a simple \hs{case} expression at value level causes
the type checker to always typecheck both alternatives, which can't be done.
This is a fundamental problem. Considering that we need to switch between two
implementations of the sum function based on the type of the argument, this
sounds like the perfect problem to solve with a type class. However, this
approach has its own problems (not the least of which is that you need to
define a new type class for every recursive function you want to define).

Another approach tried involved using \small{GADT}s to be able to do pattern
matching on empty / non-empty lists. While this worked partially, it also
created problems with more complex expressions.

\note{This should reference Christiaan}

Evaluating all possible (and impossible) ways to add recursion to our
descriptions, it seems better to leave out list recursion altogether. This
allows us to focus on other interesting areas instead. By including
(built-in) support for a number of higher order functions like \hs{map} and
\hs{fold}, we can still express most of the things we would use list
recursion for.

\todo{Expand on this decision a bit}
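For instance, a sum over a fixed-length vector needs no explicit recursion once a built-in fold is available. A sketch in plain Haskell (with ordinary lists standing in for fixed-length vectors; the names \hs{sumVec} and \hs{incAll} are invented for illustration):

```haskell
-- Summing without explicit recursion: the recursion is hidden inside
-- the (built-in) fold, whose unrolling is fixed by the vector length.
sumVec :: [Int] -> Int
sumVec = foldl (+) 0

-- Similarly, map covers element-wise transformations.
incAll :: [Int] -> [Int]
incAll = map (+ 1)

main :: IO ()
main = print (sumVec [1, 2, 3, 4])
```

Since the length is known from the type, the compiler can fully unroll such a fold into a fixed chain of adders, sidestepping the typechecking problem shown above.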

\subsection{General recursion}
Of course there are other forms of recursion that do not depend on the
length (and thus type) of a list. For example, simple recursion using a
counter could be expressed, but only translated to hardware for a fixed
number of iterations. Also, this would require extensive support for compile
time simplification (constant propagation) and compile time evaluation
(evaluation of constant comparisons) to ensure termination. Even then, it
is hard to really guarantee termination, since the user (or \small{GHC}
desugarer) might use some obscure notation that results in a corner case of
the simplifier that is not caught, and thus in non-termination.

Due to these complications and the limited time available, we leave other
forms of recursion as future work as well.

% vim: set sw=2 sts=2 expandtab: