1 \chapter[chap:description]{Hardware description}
2 In this chapter an overview will be provided of the hardware
3 description language that was created and the issues that have arisen
4 in the process. The focus will be on the issues of the language, not
5 the implementation. The prototype implementation will be discussed in
6 \in{chapter}[chap:prototype].
8 To translate Haskell to hardware, every Haskell construct needs a
translation to \VHDL. There are often multiple valid translations.
When faced with such choices, the most obvious one has been chosen
wherever possible. In a lot of cases, when a programmer looks at a
functional hardware description it is completely clear what hardware
is being described. We want our translator to generate exactly that
hardware whenever possible, to make working with Cλash as intuitive as
possible.
18 \defref{reading examples}
19 \startframedtext[width=9cm,background=box,frame=no]
20 \startalignment[center]
21 {\tfa Reading the examples}
24 In this thesis, a lot of functional code will be presented. Part of this
25 will be valid Cλash code, but others will just be small Haskell or Core
26 snippets to illustrate a concept.
28 In these examples, some functions and types will be used, without
29 properly defining every one of them. These functions (like \hs{and},
30 \hs{not}, \hs{add}, \hs{+}, etc.) and types (like \hs{Bit}, \hs{Word},
31 \hs{Bool}, etc.) are usually pretty self-explanatory.
33 The special type \hs{[t]} means \quote{list of \hs{t}'s}, where \hs{t}
34 can be any other type.
36 Of particular note is the use of the \hs{::} operator. It is used in
Haskell to explicitly declare the type of a function or let binding. In
38 these examples and the text, we will occasionally use this operator to
39 show the type of arbitrary expressions, even where this would not
40 normally be valid. Just reading the \hs{::} operator as \quote{and also
41 note that \emph{this} expression has \emph{this} type} should work out.
45 In this chapter we describe how to interpret a Haskell program from a
46 hardware perspective. We provide a description of each Haskell language
element that needs translation, to provide a clear picture of what is
supported and how it is translated.
51 \section[sec:description:application]{Function application}
52 The basic syntactic elements of a functional program are functions and
53 function application. These have a single obvious \small{VHDL}
54 translation: each top level function becomes a hardware component, where each
55 argument is an input port and the result value is the (single) output
56 port. This output port can have a complex type (such as a tuple), so
57 having just a single output port does not pose a limitation.
Each function application in turn becomes a component instantiation. Here, the
60 result of each argument expression is assigned to a signal, which is mapped
61 to the corresponding input port. The output port of the function is also
62 mapped to a signal, which is used as the result of the application.
64 Since every top level function generates its own component, the
hierarchy of function calls is reflected in the final \VHDL\ output
66 as well, creating a hierarchical \VHDL\ description of the hardware.
67 This separation in different components makes the resulting \VHDL\
68 output easier to read and debug.
70 \in{Example}[ex:And3] shows a simple program using only function
71 application and the corresponding architecture.
-- A simple function that returns the
-- conjunction of three bits
76 and3 :: Bit -> Bit -> Bit -> Bit
77 and3 a b c = and (and a b) c
80 \startuseMPgraphic{And3}
81 save a, b, c, anda, andb, out;
84 newCircle.a(btex $a$ etex) "framed(false)";
85 newCircle.b(btex $b$ etex) "framed(false)";
86 newCircle.c(btex $c$ etex) "framed(false)";
87 newCircle.out(btex $out$ etex) "framed(false)";
90 newCircle.anda(btex $and$ etex);
91 newCircle.andb(btex $and$ etex);
94 b.c = a.c + (0cm, 1cm);
95 c.c = b.c + (0cm, 1cm);
96 anda.c = midpoint(a.c, b.c) + (2cm, 0cm);
97 andb.c = midpoint(b.c, c.c) + (4cm, 0cm);
99 out.c = andb.c + (2cm, 0cm);
101 % Draw objects and lines
102 drawObj(a, b, c, anda, andb, out);
104 ncarc(a)(anda) "arcangle(-10)";
111 \startbuffer[And3VHDL]
112 entity and3Component_0 is
113 port (\azMyG2\ : in std_logic;
114 \bzMyI2\ : in std_logic;
115 \czMyK2\ : in std_logic;
116 \foozMySzMyS2\ : out std_logic;
117 clock : in std_logic;
118 resetn : in std_logic);
119 end entity and3Component_0;
122 architecture structural of and3Component_0 is
123 signal \argzMyMzMyM2\ : std_logic;
begin
  \argzMyMzMyM2\ <= \azMyG2\ and \bzMyI2\;
127 \foozMySzMyS2\ <= \argzMyMzMyM2\ and \czMyK2\;
128 end architecture structural;
\placeexample[][ex:And3]{Simple three-input and gate.}
132 \startcombination[2*1]
133 {\typebufferhs{And3}}{Haskell description using function applications.}
134 {\boxedgraphic{And3}}{The architecture described by the Haskell description.}
137 \placeexample[][ex:And3VHDL]{\VHDL\ generated for \hs{and3} from \in{example}[ex:And3]}
138 {\typebuffervhdl{And3VHDL}}
141 \defref{top level binding}
142 \defref{top level binder}
143 \defref{top level function}
144 \startframedtext[width=8cm,background=box,frame=no]
145 \startalignment[center]
146 {\tfa Top level binders and functions}
149 A \emph{top level binder} is any binder (variable) that is
150 declared in the \quote{global} scope of a Haskell program (as
opposed to a binder that is bound inside a function). The binder
together with its body is referred to as a \emph{top level
binding}.
155 In Haskell, there is no sharp distinction between a variable and a
156 function: a function is just a variable (binder) with a function
157 type. This means that a \emph{top level function} is just any top
158 level binder with a function type. This also means that sometimes
top level function will be used when top level binder is really meant.
As an example, consider a Haskell snippet along the following lines (the exact body of \hs{inc} is not important here):
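\starttyping
foo :: Int -> Int
foo = let
    inc = \x -> x + 1  -- inc is bound to a lambda abstraction
  in
    inc
\stoptyping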
171 Here, \hs{foo} is a top level binder, whereas \hs{inc} is a
function (since it is bound to a lambda abstraction, indicated by
173 the backslash) but is not a top level binder or function. Since
174 the type of \hs{foo} is a function type, namely \hs{Int -> Int},
175 it is also a top level function.
179 Although describing components and connections allows us to describe a lot of
180 hardware designs already, there is an obvious thing missing: choice. We
need some way to choose between values based on another value.
182 In Haskell, choice is achieved by \hs{case} expressions, \hs{if}
183 expressions, pattern matching and guards.
185 An obvious way to add choice to our language without having to recognize
any of Haskell's syntax would be to add a primitive \quote{\hs{if}}
187 function. This function would take three arguments: the condition, the
188 value to return when the condition is true and the value to return when
189 the condition is false.
191 This \hs{if} function would then essentially describe a multiplexer and
would allow us to describe any architecture that uses multiplexers.
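A sketch of the type and behavior such a primitive could have is shown
below. The name \hs{hwif} is only used for illustration here and the
definition merely specifies its behavior; the actual primitive would be
built into the compiler rather than defined in Haskell.

\starttyping
-- Hypothetical primitive: the condition selects one of the two
-- alternatives, which is exactly the behavior of a two-input
-- multiplexer.
hwif :: Bool -> a -> a -> a
hwif True  t _ = t
hwif False _ f = f
\stoptyping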
194 However, to be able to describe our hardware in a more convenient way, we
195 also want to translate Haskell's choice mechanisms. The easiest of these
196 are of course case expressions (and \hs{if} expressions, which can be very
197 directly translated to \hs{case} expressions). A \hs{case} expression can in turn
198 simply be translated to a conditional assignment, where the conditions use
199 equality comparisons against the constructors in the \hs{case} expressions.
201 In \in{example}[ex:Inv] two versions of an inverter are shown. The first
202 uses a simple \hs{case} expression, scrutinizing a Boolean value. The
203 corresponding architecture has a comparator to determine which of the
204 constructors is on the \hs{in} input. There is a multiplexer to select the
205 output signal (which is just a conditional assignment in the generated
206 \VHDL). The two options for the output signals are just constants,
but these could have been more complex expressions (in which case
both of them would be computed in parallel, regardless of which output
is eventually chosen). The \VHDL\ generated for (both versions of)
210 this inverter is shown in \in{example}[ex:InvVHDL].
If we were to translate a Boolean to a bit value, we could of course remove
the comparator and feed \hs{in} directly into the multiplexer (or even use an
214 inverter instead of a multiplexer). However, we will try to make a
215 general translation, which works for all possible \hs{case} expressions.
216 Optimizations such as these are left for the \VHDL\ synthesizer, which
217 handles them very well.
220 \startframedtext[width=8cm,background=box,frame=no]
221 \startalignment[center]
222 {\tfa Arguments / results vs. inputs / outputs}
225 Due to the translation chosen for function application, there is a
226 very strong relation between arguments, results, inputs and outputs.
227 For clarity, the former two will always refer to the arguments and
228 results in the functional description (either Haskell or Core). The
latter two will refer to input and output ports in the generated
hardware.
232 Even though these concepts seem to be nearly identical, when stateful
functions are introduced we will see arguments and results that will
234 not get translated into input and output ports, making this
235 distinction more important.
239 A slightly more complex (but very powerful) form of choice is pattern
240 matching. A function can be defined in multiple clauses, where each clause
241 specifies a pattern. When the arguments match the pattern, the
242 corresponding clause will be used.
244 \in{Example}[ex:Inv] also shows an inverter that uses pattern matching.
245 The architecture it describes is of course the
246 same one as the description with a case expression. The general interpretation
247 of pattern matching is also similar to that of \hs{case} expressions: generate
248 hardware for each of the clauses (like each of the clauses of a \hs{case}
249 expression) and connect them to the function output through (a number of
250 nested) multiplexers. These multiplexers are driven by comparators and
other logic that checks each pattern in turn.
253 In these examples we have seen only binary case expressions and pattern
254 matches (\ie, with two alternatives). In practice, case expressions can
choose between more than two values, resulting in a number of nested
multiplexers.
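As an illustration, a \hs{case} expression with three alternatives could
look as follows (the \hs{Opcode} type and the \hs{alu} function are made
up for this example). Its output would be selected by such a nested
multiplexer, driven by comparators against the three constructors.

\starttyping
data Opcode = Add | Sub | Pass

alu :: Opcode -> Word -> Word -> Word
alu op a b = case op of
  Add  -> a + b
  Sub  -> a - b
  Pass -> a
\stoptyping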
\startbuffer[CaseInv]
inv :: Bool -> Bool
inv x = case x of
  True -> False
  False -> True
\stopbuffer
\startbuffer[PatternInv]
inv :: Bool -> Bool
inv True = False
inv False = True
\stopbuffer
271 \startuseMPgraphic{Inv}
272 save in, truecmp, falseout, trueout, out, cmp, mux;
275 newCircle.in(btex $in$ etex) "framed(false)";
276 newCircle.out(btex $out$ etex) "framed(false)";
278 newBox.truecmp(btex $True$ etex) "framed(false)";
279 newBox.trueout(btex $True$ etex) "framed(false)";
280 newBox.falseout(btex $False$ etex) "framed(false)";
283 newCircle.cmp(btex $==$ etex);
287 cmp.c = in.c + (3cm, 0cm);
288 truecmp.c = cmp.c + (-1cm, 1cm);
289 mux.sel = cmp.e + (1cm, -1cm);
290 falseout.c = mux.inpa - (2cm, 0cm);
291 trueout.c = mux.inpb - (2cm, 0cm);
292 out.c = mux.out + (2cm, 0cm);
294 % Draw objects and lines
295 drawObj(in, out, truecmp, trueout, falseout, cmp, mux);
299 nccurve(cmp.e)(mux.sel) "angleA(0)", "angleB(-90)";
300 ncline(falseout)(mux) "posB(inpa)";
301 ncline(trueout)(mux) "posB(inpb)";
302 ncline(mux)(out) "posA(out)";
305 \startbuffer[InvVHDL]
306 entity invComponent_0 is
307 port (\xzAMo2\ : in boolean;
308 \reszAMuzAMu2\ : out boolean;
309 clock : in std_logic;
310 resetn : in std_logic);
311 end entity invComponent_0;
314 architecture structural of invComponent_0 is
begin
  \reszAMuzAMu2\ <= false when \xzAMo2\ = true else
                    true;
318 end architecture structural;
321 \placeexample[][ex:Inv]{Simple inverter.}{
322 % Use placesidebyside, since nesting combinations doesn't seem to work
323 % here. This does break centering, but well...
325 % Use 2*2 instead of 1*2 to insert some extra space (\placesidebyside
326 % places stuff very close together)
327 {\startcombination[2*2]
{\typebufferhs{CaseInv}}{Haskell description using a \hs{case} expression.}
{\typebufferhs{PatternInv}}{Haskell description using pattern matching.}
333 % Use a 1*1 combination to add a caption
334 {\startcombination[1*1]
335 {\boxedgraphic{Inv}}{The architecture described by the Haskell descriptions.}
347 \placeexample[][ex:InvVHDL]{\VHDL\ generated for (both versions of) \hs{inv} from \in{example}[ex:Inv]}
348 {\typebuffervhdl{InvVHDL}}
The translation of the two most basic functional concepts has been
352 discussed: function application and choice. Before looking further
353 into less obvious concepts like higher-order expressions and
354 polymorphism, the possible types that can be used in hardware
355 descriptions will be discussed.
Some way is needed to translate every value used to its hardware
equivalent. In particular, this means that a hardware equivalent for
every \emph{type} used in a hardware description is needed.
361 Since most functional languages have a lot of standard types that
362 are hard to translate (integers without a fixed size, lists without
363 a static length, etc.), a number of \quote{built-in} types will be
364 defined first. These types are built-in in the sense that our
compiler will have a fixed \VHDL\ type for these. User-defined types,
on the other hand, will have their hardware type derived directly
from their Haskell declaration automatically, according to the rules
described below.
370 \todo{Introduce Haskell type syntax (type constructors, type application,
373 \subsection{Built-in types}
374 The language currently supports the following built-in types. Of these,
375 only the \hs{Bool} type is supported by Haskell out of the box (the
376 others are defined by the Cλash package, so they are user-defined types
377 from Haskell's point of view).
\startdesc{\hs{Bit}}
This is the most basic type available. It is mapped directly onto
381 the \type{std_logic} \small{VHDL} type. Mapping this to the
382 \type{bit} type might make more sense (since the Haskell version
383 only has two values), but using \type{std_logic} is more standard
(and allowed for some experimentation with don't care values).
386 \todo{Sidenote bit vs stdlogic}
388 \startdesc{\hs{Bool}}
389 This is the only built-in Haskell type supported and is translated
390 exactly like the Bit type (where a value of \hs{True} corresponds to a
391 value of \hs{High}). Supporting the Bool type is particularly
392 useful to support \hs{if ... then ... else ...} expressions, which
393 always have a \hs{Bool} value for the condition.
395 A \hs{Bool} is translated to a \type{std_logic}, just like \hs{Bit}.
397 \startdesc{\hs{SizedWord}, \hs{SizedInt}}
398 These are types to represent integers. A \hs{SizedWord} is unsigned,
399 while a \hs{SizedInt} is signed. These types are parametrized by a
length type, so you can define an unsigned word that is 32 bits wide as follows:
404 type Word32 = SizedWord D32
407 Here, a type synonym \hs{Word32} is defined that is equal to the
408 \hs{SizedWord} type constructor applied to the type \hs{D32}. \hs{D32}
409 is the \emph{type level representation} of the decimal number 32,
410 making the \hs{Word32} type a 32-bit unsigned word.
These types are translated to the \small{VHDL} types \type{unsigned}
and \type{signed}, respectively.
414 \todo{Sidenote on dependent typing?}
416 \startdesc{\hs{Vector}}
This is a vector type that can contain elements of any other type and
has a fixed length. The \hs{Vector} type constructor takes two type
arguments: the length of the vector and the type of the elements
contained in it. By putting the length parameter in the type, the
length of a vector can be determined at compile time, instead of only
at run-time as for conventional lists.

The state type of an 8 element register bank would then for example be:
428 type RegisterState = Vector D8 Word32
431 Here, a type synonym \hs{RegisterState} is defined that is equal to
432 the \hs{Vector} type constructor applied to the types \hs{D8} (The type
433 level representation of the decimal number 8) and \hs{Word32} (The 32
434 bit word type as defined above). In other words, the
435 \hs{RegisterState} type is a vector of 8 32-bit words.
437 A fixed size vector is translated to a \small{VHDL} array type.
439 \startdesc{\hs{RangedWord}}
440 This is another type to describe integers, but unlike the previous
441 two it has no specific bit-width, but an upper bound. This means that
442 its range is not limited to powers of two, but can be any number.
A \hs{RangedWord} only has an upper bound; its lower bound is
implicitly zero. Supporting a lower bound as well would add a lot of
implementation complexity, and having just an upper bound was enough
for the primary purpose of this type: type-safely indexing vectors.
448 To define an index for the 8 element vector above, we would do:
451 type RegisterIndex = RangedWord D7
454 Here, a type synonym \hs{RegisterIndex} is defined that is equal to
455 the \hs{RangedWord} type constructor applied to the type \hs{D7}. In
456 other words, this defines an unsigned word with values from
{\definedfont[Serif*normalnum]0 to 7} (inclusive). This word can be used to index the
458 8 element vector \hs{RegisterState} above.
460 This type is translated to the \type{unsigned} \small{VHDL} type.
The integer and vector built-in types are discussed in more detail
in \in{chapter}[chap:prototype].
466 \subsection{User-defined types}
467 There are three ways to define new types in Haskell: algebraic
468 data-types with the \hs{data} keyword, type synonyms with the \hs{type}
469 keyword and type renamings with the \hs{newtype} keyword. \GHC\
470 offers a few more advanced ways to introduce types (type families,
471 existential typing, \small{GADT}s, etc.) which are not standard
472 Haskell. These will be left outside the scope of this research.
474 Only an algebraic datatype declaration actually introduces a
475 completely new type, for which we provide the \VHDL\ translation
476 below. Type synonyms and renamings only define new names for
477 existing types (where synonyms are completely interchangeable and
478 renamings need explicit conversion). Therefore, these do not need
any particular \VHDL\ translation; a synonym or renamed type will
just use the same representation as the original type. The
distinction between a renaming and a synonym no longer matters
482 in hardware and can be disregarded in the generated \VHDL.
484 For algebraic types, we can make the following distinction:
486 \startdesc{Product types}
487 A product type is an algebraic datatype with a single constructor with
488 two or more fields, denoted in practice like (a,b), (a,b,c), etc. This
489 is essentially a way to pack a few values together in a record-like
490 structure. In fact, the built-in tuple types are just algebraic product
491 types (and are thus supported in exactly the same way).
493 The \quote{product} in its name refers to the collection of values belonging
494 to this type. The collection for a product type is the Cartesian
495 product of the collections for the types of its fields.
497 These types are translated to \VHDL\ record types, with one field for
498 every field in the constructor. This translation applies to all single
499 constructor algebraic data-types, including those with just one
field (which are technically not a product, but generate a \VHDL\
501 record for implementation simplicity).
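As a sketch, a hypothetical two-field product type and the equivalent
tuple type (both would become a \VHDL\ record with two fields) could be
declared as:

\starttyping
data Coord = Coord Word Word

type CoordTuple = (Word, Word)
\stoptyping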
503 \startdesc{Enumerated types}
504 \defref{enumerated types}
505 An enumerated type is an algebraic datatype with multiple constructors, but
506 none of them have fields. This is essentially a way to get an
507 enumeration-like type containing alternatives.
509 Note that Haskell's \hs{Bool} type is also defined as an
510 enumeration type, but we have a fixed translation for that.
512 These types are translated to \VHDL\ enumerations, with one value for
513 each constructor. This allows references to these constructors to be
514 translated to the corresponding enumeration value.
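For example, a hypothetical type like the following would be translated
to a \VHDL\ enumeration with three values:

\starttyping
data TrafficLight = Red | Orange | Green
\stoptyping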
516 \startdesc{Sum types}
517 A sum type is an algebraic datatype with multiple constructors, where
518 the constructors have one or more fields. Technically, a type with
519 more than one field per constructor is a sum of products type, but
for our purposes this distinction does not really make a
difference, so it is not made here.
523 The \quote{sum} in its name refers again to the collection of values
524 belonging to this type. The collection for a sum type is the
union of the collections for each of the constructors.
527 Sum types are currently not supported by the prototype, since there is
528 no obvious \VHDL\ alternative. They can easily be emulated, however, as
529 we will see from an example:
532 data Sum = A Bit Word | B Word
535 An obvious way to translate this would be to create an enumeration to
536 distinguish the constructors and then create a big record that
537 contains all the fields of all the constructors. This is the same
538 translation that would result from the following enumeration and
539 product type (using a tuple for clarity):
data SumC = A | B

type Sum = (SumC, Bit, Word, Word)
546 Here, the \hs{SumC} type effectively signals which of the latter three
547 fields of the \hs{Sum} type are valid (the first two if \hs{A}, the
548 last one if \hs{B}), all the other ones have no useful value.
550 An obvious problem with this naive approach is the space usage: the
551 example above generates a fairly big \VHDL\ type. Since we can be
552 sure that the two \hs{Word}s in the \hs{Sum} type will never be valid
553 at the same time, this is a waste of space.
555 Obviously, duplication detection could be used to reuse a
556 particular field for another constructor, but this would only
partially solve the problem. If two fields were, for
example, an array of 8 bits and an 8 bit unsigned word, these would be
559 different types and could not be shared. However, in the final
560 hardware, both of these types would simply be 8 bit connections,
561 so we have a 100\% size increase by not sharing these.
564 Another interesting case is that of recursive types. In Haskell, an
565 algebraic datatype can be recursive: any of its field types can be (or
566 contain) the type being defined. The most well-known recursive type is
probably the list type, which is defined as:
570 data List t = Empty | Cons t (List t)
573 Note that \hs{Empty} is usually written as \hs{[]} and \hs{Cons} as
574 \hs{:}, but this would make the definition harder to read. This
immediately shows the problem with recursive types: what hardware type
should the \hs{List} type be translated to?
If the naive approach for sum types described above were used,
579 a record would be created where the first field is an enumeration
580 to distinguish \hs{Empty} from \hs{Cons}. Furthermore, two more
581 fields would be added: one with the (\VHDL\ equivalent of) type
582 \hs{t} (assuming this type is actually known at compile time, this
583 should not be a problem) and a second one with type \hs{List t}.
584 The latter one is of course a problem: this is exactly the type
585 that was to be translated in the first place.
587 The resulting \VHDL\ type will thus become infinitely deep. In
588 other words, there is no way to statically determine how long
589 (deep) the list will be (it could even be infinite).
591 In general, recursive types can never be properly translated: all
592 recursive types have a potentially infinite value (even though in
593 practice they will have a bounded value, there is no way for the
594 compiler to automatically determine an upper bound on its size).
596 \subsection{Partial application}
Now that the translation of application, choice and types has been
598 discussed, a more complex concept can be considered: partial
599 applications. A \emph{partial application} is any application whose
600 (return) type is (again) a function type.
602 From this, it should be clear that the translation rules for full
application do not apply to a partial application: there are not
604 enough values for all the input ports in the resulting \VHDL.
605 \in{Example}[ex:Quadruple] shows an example use of partial application
606 and the corresponding architecture.
608 \startbuffer[Quadruple]
609 -- Multiply the input word by four.
610 quadruple :: Word -> Word
quadruple n = mul (mul n)
  where
    mul = (*) 2
616 \startuseMPgraphic{Quadruple}
617 save in, two, mula, mulb, out;
620 newCircle.in(btex $n$ etex) "framed(false)";
621 newCircle.two(btex $2$ etex) "framed(false)";
622 newCircle.out(btex $out$ etex) "framed(false)";
625 newCircle.mula(btex $\times$ etex);
626 newCircle.mulb(btex $\times$ etex);
629 in.c = two.c + (0cm, 1cm);
630 mula.c = in.c + (2cm, 0cm);
631 mulb.c = mula.c + (2cm, 0cm);
632 out.c = mulb.c + (2cm, 0cm);
634 % Draw objects and lines
635 drawObj(in, two, mula, mulb, out);
637 nccurve(two)(mula) "angleA(0)", "angleB(45)";
638 nccurve(two)(mulb) "angleA(0)", "angleB(45)";
\placeexample[][ex:Quadruple]{Simple circuit that quadruples its input.}
645 \startcombination[2*1]
{\typebufferhs{Quadruple}}{Haskell description using partial application.}
647 {\boxedgraphic{Quadruple}}{The architecture described by the Haskell description.}
Here, the definition of \hs{mul} is a partial function application: it applies
651 the function \hs{(*) :: Word -> Word -> Word} to the value \hs{2 :: Word},
652 resulting in the expression \hs{(*) 2 :: Word -> Word}. Since this resulting
653 expression is again a function, hardware cannot be generated for it
654 directly. This is because the hardware to generate for \hs{mul}
depends completely on where and how it is used. In this example, it is
used twice.
658 However, it is clear that the above hardware description actually
describes valid hardware. In general, any partially applied function
660 must eventually become completely applied, at which point hardware for
661 it can be generated using the rules for function application given in
662 \in{section}[sec:description:application]. It might mean that a
663 partial application is passed around quite a bit (even beyond function
664 boundaries), but eventually, the partial application will become
665 completely applied. An example of this principle is given in
666 \in{section}[sec:normalization:defunctionalization].
668 \section{Costless specialization}
669 Each (complete) function application in our description generates a
670 component instantiation, or a specific piece of hardware in the final
671 design. It is interesting to note that each application of a function
672 generates a \emph{separate} piece of hardware. In the final design, none
673 of the hardware is shared between applications, even when the applied
674 function is the same (of course, if a particular value, such as the result
675 of a function application, is used twice, it is not calculated twice).
677 This is distinctly different from normal program compilation: two separate
678 calls to the same function share the same machine code. Having more
679 machine code has implications for speed (due to less efficient caching)
680 and memory usage. For normal compilation, it is therefore important to
keep the number of functions limited and maximize the code sharing
682 (though there is a trade-off between speed and memory usage here).
684 When generating hardware, this is hardly an issue. Having more \quote{code
sharing} does reduce the amount of \small{VHDL} output (since different
686 component instantiations still share the same component), but after
687 synthesis, the amount of hardware generated is not affected. This
means there is no trade-off between speed and memory (or rather,
chip area).
In particular, if we were to duplicate all functions so that there is a
separate function for every application in the program (\ie, each function
is then applied exactly once), there would be no increase in hardware
size.
696 Because of this, a common optimization technique called
697 \emph{specialization} can be applied to hardware generation without any
698 performance or area cost (unlike for software).
700 \fxnote{Perhaps these next three sections are a bit too
701 implementation-oriented?}
703 \subsection{Specialization}
704 \defref{specialization}
Given some function that has a \emph{domain} $D$ (\ie, the set of
706 all possible arguments that the function could be applied to), we
707 create a specialized function with exactly the same behavior, but
708 with a domain $D' \subset D$. This subset can be chosen in all
709 sorts of ways. Any subset is valid for the general definition of
710 specialization, but in practice only some of them provide useful
711 optimization opportunities.
713 Common subsets include limiting a polymorphic argument to a single type
714 (\ie, removing polymorphism) or limiting an argument to just a single
value (\ie, cross-function constant propagation, effectively removing
the argument).
718 Since we limit the argument domain of the specialized function, its
719 definition can often be optimized further (since now more types or even
720 values of arguments are already known). By replacing any application of
721 the function that falls within the reduced domain by an application of
722 the specialized version, the code gets faster (but the code also gets
723 bigger, since we now have two versions instead of one). If we apply
724 this technique often enough, we can often replace all applications of a
725 function by specialized versions, allowing the original function to be
726 removed (in some cases, this can even give a net reduction of the code
727 compared to the non-specialized version).
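As a sketch (the names are made up for this example), specializing a
general addition function for the subset of applications where the
second argument is always \hs{4} could look like this:

\starttyping
add :: Word -> Word -> Word
add a b = a + b

-- Specialized version, valid only for applications where the second
-- argument is the constant 4; the constant has been propagated into
-- the definition.
add4 :: Word -> Word
add4 a = a + 4
\stoptyping

Every application \hs{add x 4} can now be replaced by \hs{add4 x}, and if
all applications of \hs{add} fall within this subset, \hs{add} itself can
be removed.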
729 Specialization is useful for our hardware descriptions for functions
730 that contain arguments that cannot be translated to hardware directly
731 (polymorphic or higher-order arguments, for example). If we can create
732 specialized functions that remove the argument, or make it translatable,
we can use specialization to make the original, untranslatable, function
obsolete.
736 \section{Higher order values}
What holds for partial application can be easily generalized to any
higher-order expression. This includes partial applications, plain
variables (\eg, a binder referring to a top level function), lambda
expressions and more complex expressions with a function type (a \hs{case}
expression returning lambdas, for example).
None of these values can be directly represented in hardware (just like
partial applications). To make them representable, they need to be
745 applied: function variables and partial applications will then eventually
746 become complete applications, applied lambda expressions disappear by
747 applying β-reduction, etc.
749 So any higher-order value will be \quote{pushed down} towards its
750 application just like partial applications. Whenever a function boundary
751 needs to be crossed, the called function can be specialized.
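As a sketch of this idea (both functions are made up for this example),
consider a function with a higher-order argument:

\starttyping
-- The higher-order argument f cannot become an input port directly.
applyTwice :: (Word -> Word) -> Word -> Word
applyTwice f x = f (f x)

-- The partial application (add 2) crosses the function boundary; by
-- specializing applyTwice for this particular argument, a first-order
-- function is obtained for which hardware can be generated.
addFour :: Word -> Word
addFour n = applyTwice (add 2) n
\stoptyping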
753 \fxnote{This section needs improvement and an example}
755 \section{Polymorphism}
756 In Haskell, values can be \emph{polymorphic}: they can have multiple types. For
757 example, the function \hs{fst :: (a, b) -> a} is an example of a
758 polymorphic function: it works for tuples with any two element types. Haskell
759 type classes allow a function to work on a specific set of types, but the
760 general idea is the same. The opposite of this is a \emph{monomorphic}
761 value, which has a single, fixed, type.
763 % A type class is a collection of types for which some operations are
764 % defined. It is thus possible for a value to be polymorphic while having
765 % any number of \emph{class constraints}: the value is not defined for
766 % every type, but only for types in the type class. An example of this is
767 % the \hs{even :: (Integral a) => a -> Bool} function, which can map any
768 % value of a type that is member of the \hs{Integral} type class
770 When generating hardware, polymorphism cannot be easily translated. How
771 many wires will you lay down for a value that could have any type? When
772 type classes are involved, what hardware components will you lay down for
773 a class method (whose behavior depends on the type of its arguments)?
774 Note that Cλash currently does not allow user-defined type classes,
775 but does partly support some of the built-in type classes (like \hs{Num}).
777 Fortunately, we can again use the principle of specialization: since every
778 function application generates a separate piece of hardware, we can know
779 the types of all arguments exactly. Provided that existential typing
(which is a \GHC\ extension) is not used, all of the
polymorphic types in a function must depend on the types of the
arguments (in other words, the only way to introduce a type variable
783 is in a lambda abstraction).
785 If a function is monomorphic, all values inside it are monomorphic as
786 well, so any function that is applied within the function can only be
787 applied to monomorphic values. The applied functions can then be
788 specialized to work just for these specific types, removing the
789 polymorphism from the applied functions as well.
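As a sketch, if the polymorphic \hs{fst} function mentioned above is
applied to a tuple whose element types are known to be \hs{Word} and
\hs{Bit}, it can be specialized to a monomorphic version for exactly that
argument type (the name \hs{fstWordBit} is made up for this example):

\starttyping
-- A monomorphic specialization of fst :: (a, b) -> a for one
-- particular argument type; this version can be translated.
fstWordBit :: (Word, Bit) -> Word
fstWordBit (x, _) = x
\stoptyping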
791 \defref{entry function}The entry function must not have a
792 polymorphic type (otherwise no hardware interface could be generated
793 for the entry function).
795 By induction, this means that all functions that are (indirectly) called
796 by our top level function (meaning all functions that are translated in
797 the final hardware) become monomorphic.
800 A very important concept in hardware designs is \emph{state}. In a
801 stateless (or, \emph{combinational}) design, every output is directly and solely dependent on the
802 inputs. In a stateful design, the outputs can depend on the history of
803 inputs, or the \emph{state}. State is usually stored in \emph{registers},
804 which retain their value during a clock cycle, and are typically updated at
805 the start of every clock cycle. Since the updating of the state is tightly
806 coupled (synchronized) to the clock signal, these state updates are often
807 called \emph{synchronous} behavior.
809 \todo{Sidenote? Registers can contain any (complex) type}
811 To make Cλash useful to describe more than simple combinational
812 designs, it needs to be able to describe state in some way.
814 \subsection{Approaches to state}
815 In Haskell, functions are always pure (except when using unsafe
816 functions with the \hs{IO} monad, which is not supported by Cλash). This
817 means that the output of a function solely depends on its inputs. If you
evaluate a given function with given inputs, it will always provide the
same result.
823 \startframedtext[width=8cm,background=box,frame=no]
824 \startalignment[center]
829 A function is said to be pure if it satisfies two conditions:
832 \item When a pure function is called with the same arguments twice, it should
833 return the same value in both cases.
\item When a pure function is called, it should have no
observable side-effects.
838 Purity is an important property in functional languages, since
839 it enables all kinds of mathematical reasoning and
optimizations with pure functions that are not guaranteed to
841 be correct for impure functions.
843 An example of a pure function is the square root function
844 \hs{sqrt}. An example of an impure function is the \hs{today}
845 function that returns the current date (which of course cannot
846 exist like this in Haskell).
850 This is a perfect match for a combinational circuit, where the output
851 also solely depends on the inputs. However, when state is involved, this
852 no longer holds. Of course this purity constraint cannot just be
853 removed from Haskell. But even when designing a completely new (hardware
854 description) language, it does not seem to be a good idea to
remove this purity. This would mean that all kinds of interesting properties of
the functional language get lost, and all kinds of transformations
and optimizations would no longer be meaning preserving.
859 So our functions must remain pure, meaning the current state has
860 to be present in the function's arguments in some way. There seem
861 to be two obvious ways to do this: adding the current state as an
862 argument, or including the full history of each argument.
864 \subsubsection{Stream arguments and results}
865 Including the entire history of each input (\eg, the value of that
866 input for each previous clock cycle) is an obvious way to make outputs
867 depend on all previous input. This is easily done by making every
868 input a list instead of a single value, containing all previous values
869 as well as the current value.
871 An obvious downside of this solution is that on each cycle, all the
872 previous cycles must be resimulated to obtain the current state. To do
this, a recursive helper function might be needed as well,
which might be hard for the compiler to analyze properly.
876 A slight variation on this approach is one taken by some of the other
877 functional \small{HDL}s in the field: \todo{References to Lava,
ForSyDe, ...} make functions operate on complete streams. This means
that a function is no longer called on every cycle, but just once. It
takes streams as inputs instead of values, where each stream contains
881 all the values for every clock cycle since system start. This is easily
882 modeled using an (infinite) list, with one element for each clock
883 cycle. Since the function is only evaluated once, its output must also
884 be a stream. Note that, since we are working with infinite lists and
885 still want to be able to simulate the system cycle-by-cycle, this
886 relies heavily on the lazy semantics of Haskell.
888 Since our inputs and outputs are streams, all other (intermediate)
889 values must be streams. All of our primitive operators (\eg, addition,
890 subtraction, bit-wise operations, etc.) must operate on streams as
well (note that changing a single-element operation to a stream
operation can be done with \hs{map}, \hs{zipWith}, etc.).
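For example, assuming \hs{Stream} is modeled as an (infinite) list as
described above, a single-element addition could be lifted to streams as
follows (the name \hs{addStream} is just for illustration):

\starttyping
-- Element-wise addition of two streams: one output value per cycle.
addStream :: Stream Word -> Stream Word -> Stream Word
addStream xs ys = zipWith (+) xs ys
\stoptyping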
894 This also means that primitive operations from an existing
895 language such as Haskell cannot be used directly (including some
896 syntax constructs, like \hs{case} expressions and \hs{if}
897 expressions). This makes this approach well suited for use in
898 \small{EDSL}s, since those already impose these same
899 limitations. \refdef{EDSL}
901 Note that the concept of \emph{state} is no more than having some way
902 to communicate a value from one cycle to the next. By introducing a
903 \hs{delay} function, we can do exactly that: delay (each value in) a
stream so that we can \quote{look into} the past. This \hs{delay} function
905 simply outputs a stream where each value is the same as the input
906 value, but shifted one cycle. This causes a \quote{gap} at the
907 beginning of the stream: what is the value of the delay output in the
908 first cycle? For this, the \hs{delay} function has a second input, of
909 which only a single value is used.
\in{Example}[ex:DelayAcc] shows a simple accumulator expressed in this
way.
914 \startbuffer[DelayAcc]
acc :: Stream Word -> Stream Word
acc in = out
  where
    out = (delay out 0) + in
\stopbuffer
921 \startuseMPgraphic{DelayAcc}
922 save in, out, add, reg;
925 newCircle.in(btex $in$ etex) "framed(false)";
926 newCircle.out(btex $out$ etex) "framed(false)";
929 newReg.reg("") "dx(4mm)", "dy(6mm)", "reflect(true)";
930 newCircle.add(btex + etex);
933 add.c = in.c + (2cm, 0cm);
934 out.c = add.c + (2cm, 0cm);
935 reg.c = add.c + (0cm, 2cm);
937 % Draw objects and lines
938 drawObj(in, out, add, reg);
940 nccurve(add)(reg) "angleA(0)", "angleB(180)", "posB(d)";
941 nccurve(reg)(add) "angleA(180)", "angleB(-45)", "posA(out)";
947 \placeexample[][ex:DelayAcc]{Simple accumulator architecture.}
948 \startcombination[2*1]
949 {\typebufferhs{DelayAcc}}{Haskell description using streams.}
950 {\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
954 This notation can be confusing (especially due to the loop in the
definition of \hs{out}), but is essentially easy to interpret. There is a
single call to \hs{delay}, resulting in a circuit with a single register
whose input is connected to \hs{out} (which is the output of the
adder), and whose output is the expression \hs{delay out 0} (which is
959 connected to one of the adder inputs).
961 \subsubsection{Explicit state arguments and results}
A more explicit way to model state is to simply add an extra argument
963 containing the current state value. This allows an output to depend on
964 both the inputs as well as the current state while keeping the
965 function pure (letting the result depend only on the arguments), since
966 the current state is now an argument.
968 In Haskell, this would look like
969 \in{example}[ex:ExplicitAcc]\footnote[notfinalsyntax]{This
970 example is not in the final Cλash syntax}. \todo{Referencing
971 notfinalsyntax from Introduction.tex doesn't work}
973 \startbuffer[ExplicitAcc]
974 -- input -> current state -> (new state, output)
acc :: Word -> Word -> (Word, Word)
acc in s = (s', out)
  where
    out = s + in
    s'  = out
\stopbuffer
982 \placeexample[][ex:ExplicitAcc]{Simple accumulator architecture.}
983 \startcombination[2*1]
984 {\typebufferhs{ExplicitAcc}}{Haskell description using explicit state arguments.}
985 % Picture is identical to the one we had just now.
986 {\boxedgraphic{DelayAcc}}{The architecture described by the Haskell description.}
This approach makes a function's state very explicit: which state
990 variables are used by a function can be completely determined from its
991 type signature (as opposed to the stream approach, where a function
992 looks the same from the outside, regardless of what state variables it
993 uses or whether it is stateful at all).
This approach to state has been one of the initial driving forces behind
this research. Unlike a stream based approach, it is well suited
to reuse existing code and language features (like
\hs{if} and \hs{case} expressions) because it operates on normal
999 values. Because of these reasons, this is the approach chosen
1000 for Cλash. It will be examined more closely below.
1002 \subsection{Explicit state specification}
1003 The concept of explicit state has been introduced with some
1004 examples above, but what are the implications of this approach?
1006 \subsubsection{Substates}
1007 Since a function's state is reflected directly in its type signature,
1008 if a function calls other stateful functions (\eg, has sub-circuits), it
1009 has to somehow know the current state for these called functions. The
1010 only way to do this, is to put these \emph{substates} inside the
1011 caller's state. This means that a function's state is the sum of the
1012 states of all functions it calls, and its own state. This sum
1013 can be obtained using something simple like a tuple, or possibly
1014 custom algebraic types for clarity.
This also means that the type of a function (at least the \quote{state}
part) depends on its own implementation and on that of the functions it
calls.
1020 This is the major downside of this approach: the separation between
1021 interface and implementation is limited. However, since Cλash is not
very suitable for separate compilation, this is not a big problem.
1025 Additionally, when using a type synonym for the state type
1026 of each function, we can still provide explicit type signatures
while keeping the state specification for a function near its
definition.
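As a sketch, using type synonyms, the state of a function that keeps a
counter of its own and also calls the accumulator from
\in{example}[ex:ExplicitAcc] could be described as:

\starttyping
type AccState    = Word              -- state of the called accumulator
type CallerState = (Word, AccState)  -- own counter plus the substate
\stoptyping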
1031 \subsubsection{Which arguments and results are stateful?}
1032 \fxnote{This section should get some examples}
1033 We need some way to know which arguments should become input ports and
which argument(s) should become the current state (\ie, be bound to
1035 the register outputs). This does not hold just for the top
1036 level function, but also for any sub-function. Or could we perhaps
1037 deduce the statefulness of sub-functions by analyzing the flow of data
1038 in the calling functions?
1040 To explore this matter, the following observation is interesting: we
1041 get completely correct behavior when we put all state registers in
1042 the top level entity (or even outside of it). All of the state
1043 arguments and results on sub-functions are treated as normal input and
1044 output ports. Effectively, a stateful function results in a stateless
1045 hardware component that has one of its input ports connected to the
1046 output of a register and one of its output ports connected to the
1047 input of the same register.
1051 Of course, even though the hardware described like this has the
1052 correct behavior, unless the layout tool does smart optimizations,
1053 there will be a lot of extra wire in the design (since registers will
1054 not be close to the component that uses them). Also, when working with
1055 the generated \small{VHDL} code, there will be a lot of extra ports
1056 just to pass on state values, which can get quite confusing.
1058 To fix this, we can simply \quote{push} the registers down into the
1059 sub-circuits. When we see a register that is connected directly to a
1060 sub-circuit, we remove the corresponding input and output port and put
1061 the register inside the sub-circuit instead. This is slightly less
1062 trivial when looking at the Haskell code instead of the resulting
1063 circuit, but the idea is still the same.
1067 However, when applying this technique, we might push registers down
1068 too far. When you intend to store a result of a stateless sub-function
1069 in the caller's state and pass the current value of that state
1070 variable to that same function, the register might get pushed down too
1071 far. It is impossible to distinguish this case from similar code where
1072 the called function is in fact stateful. From this we can conclude
1073 that we have to either:
1075 \todo{Example of wrong downpushing}
1078 \item accept that the generated hardware might not be exactly what we
intended, in some specific cases. In most cases, the hardware will be
what we intended.
\item explicitly annotate state arguments and results in the input
description.
1085 The first option causes (non-obvious) exceptions in the language
interpretation. Also, determining where registers should
end up is easier to implement correctly with explicit annotations, so
for these reasons we will look at how such annotations could work.
1090 \todo{Sidenote: one or more state arguments?}
1092 \subsection[sec:description:stateann]{Explicit state annotation}
1093 To make our stateful descriptions unambiguous and easier to translate,
1094 we need some way for the developer to describe which arguments and
1095 results are intended to become stateful.
1097 Roughly, we have two ways to achieve this:
1099 \item Use some kind of annotation method or syntactic construction in
1100 the language to indicate exactly which argument and (part of the)
1101 result is stateful. This means that the annotation lives
\quote{outside} of the function; it is completely invisible when
1103 looking at the function body.
1104 \item Use some kind of annotation on the type level, \ie\ give stateful
1105 arguments and stateful (parts of) results a different type. This has the
1106 potential to make this annotation visible inside the function as well,
1107 such that when looking at a value inside the function body you can
1108 tell if it is stateful by looking at its type. This could possibly make
1109 the translation process a lot easier, since less analysis of the
1110 program flow might be required.
Of these approaches, the type level \quote{annotations} have been
1114 implemented in Cλash. \in{Section}[sec:prototype:statetype] expands on
1115 the possible ways this could have been implemented.
1117 \todo{Note about conditions on state variables and checking them}
1119 \section[sec:recursion]{Recursion}
1120 An important concept in functional languages is recursion. In its most basic
1121 form, recursion is a definition that is described in terms of itself. A
1122 recursive function is thus a function that uses itself in its body. This
1123 usually requires multiple evaluations of this function, with changing
arguments, until eventually an evaluation of the function no longer requires
a recursive call.
1127 Given the notion that each function application will translate to a
1128 component instantiation, we are presented with a problem. A recursive
1129 function would translate to a component that contains itself. Or, more
1130 precisely, that contains an instance of itself. This instance would again
1131 contain an instance of itself, and again, into infinity. This is obviously a
1132 problem for generating hardware.
1134 This is expected for functions that describe infinite recursion. In that
1135 case, we cannot generate hardware that shows correct behavior in a single
1136 cycle (at best, we could generate hardware that needs an infinite number of
1137 cycles to complete).
1140 \startframedtext[width=8cm,background=box,frame=no]
1141 \startalignment[center]
1142 {\tfa \hs{null}, \hs{head} and \hs{tail}}
1145 The functions \hs{null}, \hs{head} and \hs{tail} are common list
1146 functions in Haskell. The \hs{null} function simply checks if a list is
1147 empty. The \hs{head} function returns the first element of a list. The
\hs{tail} function returns the list containing everything \emph{except} the first
element.
In Cλash, there are vector versions of these functions, which do exactly
the same, but operate on vectors.
1156 However, most recursive definitions will describe finite
1157 recursion. This is because the recursive call is done conditionally. There
1158 is usually a \hs{case} expression where at least one alternative does not contain
the recursive call, which we call the \quote{base case}. If, for each call to the
1160 recursive function, we would be able to detect at compile time which
1161 alternative applies, we would be able to remove the \hs{case} expression and
1162 leave only the base case when it applies. This will ensure that expanding
1163 the recursive functions will terminate after a bounded number of expansions.
1165 This does imply the extra requirement that the base case is detectable at
1166 compile time. In particular, this means that the decision between the base
1167 case and the recursive case must not depend on run-time data.
1169 \subsection{List recursion}
1170 The most common deciding factor in recursion is the length of a list that is
1171 passed in as an argument. Since we represent lists as vectors that encode
1172 the length in the vector type, it seems easy to determine the base case. We
1173 can simply look at the argument type for this. However, it turns out that
this is already rather non-trivial to write down in Haskell, even before
looking at translation. As an example, we would like to write down something
like the following:
1179 sum :: Vector n Word -> Word
sum xs = case null xs of
  True -> 0
  False -> head xs + sum (tail xs)
1185 However, the Haskell type-checker will now use the following reasoning.
For simplicity, the element type of a vector is left out; all vectors
1187 are assumed to have the same element type. Below, we write conditions
1188 on type variables before the \hs{=>} operator. This is not completely
1189 valid Haskell syntax, but serves to illustrate the type-checker
1190 reasoning. Also note that a vector can never have a negative length,
1191 so \hs{Vector n} implicitly means \hs{(n >= 0) => Vector n}.
1193 \todo{This type-checker disregards the type signature}
1195 \item tail has the type \hs{(n > 0) => Vector n -> Vector (n - 1)}
1196 \item This means that xs must have the type \hs{(n > 0) => Vector n}
1197 \item This means that sum must have the type \hs{(n > 0) => Vector n -> a}
(The type \hs{a} can be anything at this stage; we will not try to find
its actual type in this example).
1200 \item sum is called with the result of tail as an argument, which has the
1201 type \hs{Vector n} (since \hs{(n > 0) => Vector (n - 1)} is the same as \hs{(n >= 0)
1202 => Vector n}, which is the same as just \hs{Vector n}).
1203 \item This means that sum must have the type \hs{Vector n -> a}
1204 \item This is a contradiction between the type deduced from the body of sum
1205 (the input vector must be non-empty) and the use of sum (the input vector
1206 could have any length).
1209 As you can see, using a simple \hs{case} expression at value level causes
1210 the type checker to always type-check both alternatives, which cannot be
1211 done. The type-checker is unable to distinguish the two case
1212 alternatives (this is partly possible using \small{GADT}s, but that
1213 approach faced other problems \todo{ref christiaan?}).
This is a fundamental problem that would seem perfectly suited for a
type class. Considering that we need to switch between two
1217 implementations of the sum function, based on the type of the
1218 argument, this sounds like the perfect problem to solve with a type
1219 class. However, this approach has its own problems (not the least of
1220 them that you need to define a new type class for every recursive
1221 function you want to define).
1223 \todo{This should reference Christiaan}
1225 \subsection{General recursion}
1226 Of course there are other forms of recursion, that do not depend on the
1227 length (and thus type) of a list. For example, simple recursion using a
1228 counter could be expressed, but only translated to hardware for a fixed
1229 number of iterations. Also, this would require extensive support for compile
1230 time simplification (constant propagation) and compile time evaluation
(evaluation of constant comparisons), to ensure termination.
1232 Supporting general recursion will probably require strict conditions
1233 on the input descriptions. Even then, it will be hard (if not
1234 impossible) to really guarantee termination, since the user (or \GHC\
desugarer) might use some obscure notation that results in a corner
case of the simplifier that is not caught, and thus in non-termination.
Evaluating all possible (and impossible) ways to add recursion to
1239 our descriptions, it seems better to limit the scope of this research
1240 to exclude recursion. This allows for focusing on other interesting
1241 areas instead. By including (built-in) support for a number of
1242 higher-order functions like \hs{map} and \hs{fold}, we can still
1243 express most of the things we would use (list) recursion for.
1246 % vim: set sw=2 sts=2 expandtab: