From: Matthijs Kooijman
Date: Wed, 28 Oct 2009 16:42:25 +0000 (+0100)
Subject: Add more content to the Hardware Description chapter.
X-Git-Tag: final-thesis~188
X-Git-Url: https://git.stderr.nl/gitweb?a=commitdiff_plain;h=951d8f0c4551cbc1adc75c80ae21c86bee6d1e80;p=matthijs%2Fmaster-project%2Freport.git

Add more content to the Hardware Description chapter.
---

diff --git a/Chapters/HardwareDescription.tex b/Chapters/HardwareDescription.tex
index 2f4545d..508c11b 100644
--- a/Chapters/HardwareDescription.tex
+++ b/Chapters/HardwareDescription.tex
@@ -144,6 +144,120 @@ quadruple n = mul (mul n)
   function boundaries), but eventually, the partial application will become
   completely applied.
 
+  \section{Costless specialization}
+  Each (complete) function application in our description generates a
+  component instantiation, or a specific piece of hardware in the final
+  design. It is interesting to note that each application of a function
+  generates a \emph{separate} piece of hardware. In the final design, none
+  of the hardware is shared between applications, even when the applied
+  function is the same.
+
+  This is distinctly different from normal program compilation: two separate
+  calls to the same function share the same machine code. Having more
+  machine code has implications for speed (due to less efficient caching)
+  and memory usage. For normal compilation, it is therefore important to
+  keep the number of functions limited and to maximize code sharing.
+
+  When generating hardware, this is hardly an issue. Having more \quote{code
+  sharing} does reduce the amount of \small{VHDL} output (since different
+  component instantiations still share the same component), but after
+  synthesis, the amount of hardware generated is not affected.
+
+  In particular, if we were to duplicate all functions so that there is a
+  duplicate for every application in the program (\ie, each function is
+  applied exactly once), there would be no increase in hardware size
+  whatsoever.
+
+  TODO: Perhaps these next two sections are a bit too
+  implementation-oriented?
+
+  \subsection{Specialization}
+  Because of this, a common optimization technique called
+  \emph{specialization} is as good as free for hardware generation. Given
+  some function that has a \emph{domain} $D$ (\ie, the set of all
+  possible arguments it could be applied to), we create a specialized
+  function with exactly the same behaviour, but with a domain $D'
+  \subset D$. This subset can be derived in all sorts of ways, but commonly
+  this is done by limiting a polymorphic argument to a single type (\ie,
+  removing polymorphism) or by limiting an argument to just a single value
+  (\ie, cross-function constant propagation, effectively removing the
+  argument).
+
+  Since we limit the argument domain of the specialized function, its
+  definition can often be optimized further (since more types or even
+  values of arguments are now known). By replacing any application of
+  the function that falls within the reduced domain by an application of
+  the specialized version, the code gets faster (but also bigger, since we
+  now have two versions instead of one!). If we apply this technique often
+  enough, we can often replace all applications of a function by
+  specialized versions, allowing the original function to be removed (in
+  some cases, this can even give a net reduction of code compared to the
+  non-specialized version).
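+
+  As an illustration (a sketch only; the function names are made up for
+  this example), consider a general function that is applied to a constant
+  argument everywhere in the description:
+
+  \starthaskell
+  -- A general function: its second argument ranges over all of Int.
+  scale :: Int -> Int -> Int
+  scale x factor = x * factor
+
+  -- If every application in the description uses factor = 4, we can
+  -- create a specialized version with the smaller domain D' = {4} and
+  -- replace all those applications by it, making scale itself obsolete:
+  scaleBy4 :: Int -> Int
+  scaleBy4 x = x * 4
+  \stophaskell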
+
+  Specialization is useful for our hardware descriptions for functions
+  that have arguments that cannot be translated to hardware directly
+  (polymorphic or higher order arguments, for example). If we can create
+  specialized functions that remove the argument, or make it translatable,
+  we can use specialization to make the original, untranslatable, function
+  obsolete.
+
+  \section{Higher order values}
+  What holds for partial application can easily be generalized to any
+  higher order expression. This includes partial applications, plain
+  variables (\eg, a binder referring to a top level function), lambda
+  expressions and more complex expressions with a function type (a case
+  expression returning lambdas, for example).
+
+  None of these values can be directly represented in hardware (just like
+  partial applications). To make them representable, they need to be
+  applied: function variables and partial applications will then eventually
+  become complete applications, applied lambda expressions disappear by
+  applying β-reduction, etc.
+
+  So any higher order value will be \quote{pushed down} towards its
+  application just like partial applications. Whenever a function boundary
+  needs to be crossed, the called function can be specialized.
+
+  TODO: This section should be improved
+
+  \section{Polymorphism}
+  In Haskell, values can be polymorphic: they can have multiple types. For
+  example, the function \hs{fst :: (a, b) -> a} is an example of a
+  polymorphic function: it works for tuples with any element types. Haskell
+  type classes allow a function to work on a specific set of types, but the
+  general idea is the same.
+
+% A type class is a collection of types for which some operations are
+% defined. It is thus possible for a value to be polymorphic while having
+% any number of \emph{class constraints}: the value is not defined for
+% every type, but only for types in the type class. An example of this is
+% the \hs{even :: (Integral a) => a -> Bool} function, which can map any
+% value of a type that is a member of the \hs{Integral} type class.
+
+  When generating hardware, polymorphism cannot easily be translated. How
+  many wires will you lay down for a value that could have any type? When
+  type classes are involved, what hardware components will you lay down for
+  a class method (whose behaviour depends on the type of its arguments)?
+
+  Fortunately, we can again use the principle of specialization: since every
+  function application generates a separate piece of hardware, we can know
+  the types of all arguments exactly. Provided that we don't use existential
+  typing, all of the polymorphic types in a function must depend on the
+  types of the arguments (in other words, the only way to introduce a type
+  variable is in a lambda abstraction). Our top level function must not have
+  a polymorphic type (otherwise we wouldn't know the hardware interface to
+  our top level function).
+
+  If a function is monomorphic, all values inside it are monomorphic as
+  well, so any function that is applied within it can only be applied to
+  monomorphic values. The applied functions can then be specialized to work
+  just for these specific types, removing the polymorphism from the applied
+  functions as well.
+
+  By induction, this means that all functions that are (indirectly) called
+  by our top level function (meaning all functions that are translated into
+  the final hardware) become monomorphic.
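+
+  To make this concrete, here is a sketch (the specialized name is made up
+  for this example) of how the polymorphic \hs{fst} from above could be
+  specialized for an application site where both element types are known:
+
+  \starthaskell
+  -- Polymorphic: not directly translatable, since the number of wires
+  -- needed for values of types a and b is unknown.
+  fst :: (a, b) -> a
+  fst (a, b) = a
+
+  -- Specialized at the one type it is applied to; this version is
+  -- monomorphic and thus has a fixed hardware interface.
+  fstIntBool :: (Int, Bool) -> Int
+  fstIntBool (a, b) = a
+  \stophaskell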
+
   \section{State}
   A very important concept in hardware designs is \emph{state}. In a
   stateless (or \emph{combinatoric}) design, every output is directly and
   solely dependent on the
@@ -416,6 +530,8 @@ acc in (State s) = (State s', out)
   implemented in Cλash. \in{Section}[sec:prototype:statetype] expands on the
   possible ways this could have been implemented.
 
+  TODO: Say something about dependent types and fixed size vectors
+
   \section[sec:recursion]{Recursion}
   An important concept in functional languages is recursion. In its most
   basic form, recursion is a function that is defined in terms of itself. This
@@ -517,3 +633,6 @@ acc in (State s) = (State s', out)
 
   Due to these complications, we leave other forms of recursion as future
   work as well.
+
+  \section{Supported types}
+  TODO
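+
+  Until this section is written, the following sketch illustrates the
+  intended flavour: only types that correspond to a fixed number of wires
+  can be supported. (The definitions below are illustrative assumptions,
+  not the prototype's definitive set of types.)
+
+  \starthaskell
+  -- A single wire carries one bit.
+  data Bit = High | Low
+
+  -- A fixed number of wires, known at compile time. A plain tuple is used
+  -- here for simplicity; fixed size vector types would serve the same
+  -- purpose.
+  type Word4 = (Bit, Bit, Bit, Bit)
+
+  -- A monomorphic function between such types has a fixed hardware
+  -- interface and is thus translatable.
+  invert :: Bit -> Bit
+  invert High = Low
+  invert Low  = High
+  \stophaskell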