From: Matthijs Kooijman
Date: Tue, 8 Dec 2009 21:29:10 +0000 (+0100)
Subject: Fix spelling as suggested by aspell.
X-Git-Tag: final-thesis~17
X-Git-Url: https://git.stderr.nl/gitweb?a=commitdiff_plain;h=a8cc8a42f676a1ab1ddca1c84b017d059610783d;p=matthijs%2Fmaster-project%2Freport.git

Fix spelling as suggested by aspell.
---
diff --git a/Chapters/Abstract.tex b/Chapters/Abstract.tex index 18a2b7c..798b951 100644 --- a/Chapters/Abstract.tex +++ b/Chapters/Abstract.tex @@ -3,7 +3,7 @@ Functional hardware description languages have been around for a while, but never saw adoption on a large scale. Even though advanced features like higher order functions and polymorphism could enable very natural -parameterization of hardware descriptions, the conventional hardware +parametrization of hardware descriptions, the conventional hardware description languages \VHDL\ and Verilog are still most widely used. Cλash is a new functional hardware description language using Haskell's @@ -21,8 +21,8 @@ limiting Cλash to synchronous systems with a single clock domain. A prototype compiler for Cλash has been implemented that can generate an equivalent \VHDL\ description (using mostly structural \VHDL). The -prototype uses the frontend (parser, typechecker, desugarer) of the -existing \GHC\ Haskell compiler. This frontend generates a \emph{Core} +prototype uses the front-end (parser, type-checker, desugarer) of the +existing \GHC\ Haskell compiler. This front-end generates a \emph{Core} version of the description, which is a very small typed functional language. A normalizing system of transformations brings this Core version into a normal form that has any complex parts (higher order diff --git a/Chapters/Conclusions.tex b/Chapters/Conclusions.tex index 7e76abc..66a9aa3 100644 --- a/Chapters/Conclusions.tex +++ b/Chapters/Conclusions.tex @@ -7,7 +7,7 @@ In this research, we have seen that a functional language is well suited for hardware descriptions. Function applications provide elegant notation for component instantiation and the various choice mechanisms (pattern matching, case expressions, if expressions) are well suited to describe conditional -assigment in the hardware. +assignment in the hardware. Useful features from the functional perspective, like polymorphism and higher-order functions and expressions also prove suitable to describe @@ -36,7 +36,7 @@ support for more advanced dependent types (and even type level operations) as a fundamental part of the language. The need for dependent typing is particularly present in Cλash to be able to fix some properties (list length, recursion depth, etc.) at compile time. Having better support for dependent -typing might allow the use of typesafe recursion in Cλash, though this is +typing might allow the use of type-safe recursion in Cλash, though this is fundamentally still a hard problem. The choice of describing state very explicitly as extra arguments and @@ -48,7 +48,7 @@ state can be handled just like other arguments and results. On the other hand, the explicitness of the states and in particular substates, mean that more complex descriptions can become cumbersome very quickly. One finds that dealing with unpacking, passing, receiving -and repacking becomes tedious and even errorprone. Removing some of this +and repacking becomes tedious and even error-prone. Removing some of this boilerplate would make the language even easier to use.
On the whole, the usefulness of Cλash for describing hardware is not @@ -63,10 +63,10 @@ It is expected that Cλash will be used as a tool in education at the University of Twente soon. Hopefully this will provide a better insight in how the system performs. -The prototype compiler has a clear design. Its frontend is taken from the \GHC\ +The prototype compiler has a clear design. Its front-end is taken from the \GHC\ compiler and desugars Haskell into a small, but functional and typed language, called \emph{Core}. Cλash adds a transformation system that reduces -this small language to a normal form and a simple backend that performs a +this small language to a normal form and a simple back-end that performs a direct translation to \VHDL. This approach has worked well and should probably be preserved. Especially the transformation based normalization system is suitable. It is easy to implement a transformation in the prototype, though it diff --git a/Chapters/Context.tex b/Chapters/Context.tex index 7c256da..7323b5b 100644 --- a/Chapters/Context.tex +++ b/Chapters/Context.tex @@ -9,11 +9,10 @@ \blank[medium] \startcitedquotation[deursen00] - A domain-specific language (\small{DSL}) is a program- - ming language or executable specification language - that offers, through appropriate notations and ab- - stractions, expressive power focused on, and usu- - ally restricted to, a particular problem domain. + A domain-specific language (\small{DSL}) is a programming language + or executable specification language that offers, through + appropriate notations and abstractions, expressive power focused + on, and usually restricted to, a particular problem domain. \stopcitedquotation An embedded \small{DSL} is a \small{DSL} that is embedded in @@ -29,7 +28,7 @@ polymorphism, higher-order values or type classes can be used in the embedded language. \cite[hudak96] - For an \small{EDSL}, the definitions of compiletime and runtime + For an \small{EDSL}, the definitions of compile-time and run-time are typically fuzzy (and thus hard to define here), since the \small{EDSL} \quote{compiler} is usually compiled by the same Haskell compiler as the \small{EDSL} program itself. @@ -70,7 +69,7 @@ define limitations on values, making it easier to find violations and thus bugs. \item Easy to process. Functional languages have nice properties like - purity \refdef{purity} and single binding behaviour, which make it easy + purity \refdef{purity} and single binding behavior, which make it easy to apply program transformations and optimizations and could potentially simplify program verification. \stopitemize @@ -102,7 +101,7 @@ generate parts of this \small{AST}. Template Haskell is a very powerful, but also complex mechanism. - It was inteded to simplify the generation of some repetive pieces + It was intended to simplify the generation of some repetitive pieces of code, but its introspection features have inspired all sorts of applications, such as hardware description compilers. \stopframedtext @@ -113,7 +112,7 @@ specific languages. For example, an \hs{if} or \hs{case} expression is typically executed once when the Haskell description is processed and only the result is reflected in the generated - datastructure (and thus in the final program). In Lava, for + data-structure (and thus in the final program). 
In Lava, for example, conditional assignment can only be described by using explicit multiplexer components, not using any of Haskell's compact mechanisms (such as \hs{case} expressions or pattern diff --git a/Chapters/Future.tex b/Chapters/Future.tex index 9c7bb51..523fc0a 100644 --- a/Chapters/Future.tex +++ b/Chapters/Future.tex @@ -71,7 +71,7 @@ no other ways that would work. One particular notation in Haskell that seems promising, is the \hs{do} notation. This is meant to simplify Monad notation by hiding away some -details. It allows one to write a list of expressions, which are composited +details. It allows one to write a list of expressions, which are composed using the monadic \emph{bind} operator, written in Haskell as \hs{>>}. For example, the snippet: @@ -89,7 +89,7 @@ will be desugared into: The main reason to have the monadic notation, is to be able to wrap results of functions in a datatype (the \emph{monad}) that can contain -extra information, and hide extra behaviour in the binding operators. +extra information, and hide extra behavior in the binding operators. The \hs{>>=} operator allows extracting the actual result of a function and passing it to another function. The following snippet should @@ -161,7 +161,7 @@ means we can define our own \hs{>>} and \hs{>>=} operators, outside of the all of the Prelude) by passing \type{-XNoImplicitPrelude} to \type{ghc}. This is slightly inconvenient, but since we hardly using anything from the prelude, this is not a big problem. We might even be able to replace some of the -Prelude with hardware-translateable versions by doing this. +Prelude with hardware-translatable versions by doing this. The binding operators can now be defined exactly as they are needed. For completeness, the \hs{return} function is also defined. It is not called @@ -185,7 +185,7 @@ return r = \s -> (s, r) These definitions closely resemble the boilerplate of unpacking state, passing it to two functions and repacking the new state. With these -definitions, we could have writting \in{example}[ex:NestedState] a lot +definitions, we could have written \in{example}[ex:NestedState] a lot shorter, see \in{example}[ex:DoState]. In this example the type signature of foo is the same (though it is now written using the \hs{Stateful} type synonym, it is still completely equivalent to the original: \hs{foo :: Word -> @@ -217,7 +217,7 @@ function applications affects the state type. Fortunately, this problem can be localized by consistently using type synonyms for state types, which should prevent changes in other function's source when a function changes. -A less obvous implications of this approach is that the scope of variables +A less obvious implications of this approach is that the scope of variables produced by each of these expressions (using the \hs{<-} syntax) is limited to the expressions that come after it. This prevents values from flowing between two functions (components) in two directions. For most Monad instances, this @@ -234,7 +234,7 @@ feasible. \subsection{Arrows} Another abstraction mechanism offered by Haskell are arrows. Arrows are -a generalisation of monads \cite[hughes98], for which \GHC\ also supports +a generalization of monads \cite[hughes98], for which \GHC\ also supports some syntax sugar \cite[paterson01]. Their use for hiding away state boilerplate is not directly evident, but since arrows are a complex concept further investigation is appropriate. 
@@ -308,7 +308,7 @@ For this, there are two main problems to solve: \startitemize \item For list recursion, how to make a description type check? It is quite -possible that improvements in the \small{GHC} typechecker will make this +possible that improvements in the \small{GHC} type-checker will make this possible, though it will still stay a challenge. Further advances in dependent typing support for Haskell will probably help here as well. @@ -332,15 +332,15 @@ In a lot of real-world systems, both of these limitations pose problems. There might be asynchronous events to which a system wants to respond. The most obvious asynchronous event is of course a reset signal. Though a reset signal can be synchronous, that is less flexible (and a hassle to describe in -Cλash, currently). Since every function in Cλash describes the behaviour on -each cycle boundary, we really cannot fit in asynchronous behaviour easily. +Cλash, currently). Since every function in Cλash describes the behavior on +each cycle boundary, we really cannot fit in asynchronous behavior easily. Due to the same reason, multiple clock domains cannot be easily supported. There is currently no way for the compiler to know in which clock domain a function should operate and since the clock signal is never explicit, there is also no way to express circuits that synchronize various clock domains. -A possible way to express more complex timing behaviour would be to make +A possible way to express more complex timing behavior would be to make functions more generic event handlers, where the system generates a stream of events (Like \quote{clock up}, \quote{clock down}, \quote{input A changed}, \quote{reset}, etc.). When working with multiple clock domains, each domain @@ -471,10 +471,10 @@ increasing the readability. As you might also have noted, the select\_clock function takes any stateful function from the current Cλash prototype and turns it into an event-aware function! -Going along this route for more complex timing behaviour seems promising, -espicially since it seems possible to express very advanced timing behaviours, +Going along this route for more complex timing behavior seems promising, +especially since it seems possible to express very advanced timing behaviors, while still allowing simple functions without any extra overhead when complex -behaviour is not needed. +behavior is not needed. The main cost of this approach will probably be extra complexity in the compiler: the paths (state) data can take become very non-trivial, and it @@ -483,7 +483,7 @@ intended \VHDL\ description. \section{Multiple cycle descriptions} In the current Cλash prototype, every description is a single-cycle -description. In other words, every function describes behaviour for each +description. In other words, every function describes behavior for each separate cycle and any recursion (or folds, maps, etc.) is expanded in space. Sometimes, you might want to have a complex description that could possibly @@ -492,21 +492,21 @@ take multiple cycles. Some examples include: \startitemize \item Some kind of map or fold over a list that could be expanded in time instead of space. This would result in a function that describes n cycles - instead of just one, where n is the lenght of the list. + instead of just one, where n is the length of the list. \item A large combinational expressions that would introduce a very long combinational path and thus limit clock frequency. 
Such an expression could be broken up into multiple stages, which effectively results in a pipelined system (see also \in{section}[sec:future:pipelining]) with a known delay. There should probably be some way for the developer to specify the cycle division of the expression, since automatically deciding on such a division - is too complex and contains too many tradeoffs, at least initially. + is too complex and contains too many trade-offs, at least initially. \item Unbounded recursion. It is not possible to expand unbounded (or even recursion with a depth that is not known at compile time) in space, since there is no way to tell how much hardware to create (it might even be infinite). When expanding infinite recursion over time, each step of the recursion can - simply become a single clockcycle. When expanding bounded but unknown + simply become a single clock cycle. When expanding bounded but unknown recursion, we probably need to add an extra data valid output bit or something similar. \stopitemize @@ -530,9 +530,9 @@ function to know the current cycle without breaking all kinds of safety and purity properties. Making this function have some state input and output could help, though this state is not present in the actual hardware (or perhaps there is some state, if there are value passed from one recursion step to the -next, but how can the type of that state be determined by the typechecker?). +next, but how can the type of that state be determined by the type-checker?). -It seems that this is the most pressing problem for multicycle descriptions: +It seems that this is the most pressing problem for multi-cycle descriptions: How to interface with the rest of the system, without sacrificing safety and simulation capabilities? @@ -587,12 +587,12 @@ higher-order value at the spot where it is applied, and thus the higher-order value disappears. This approach is commonly known as the \quote{Reynolds approach to -defuntionalization}, first described by J.C. Reynolds \cite[reynolds98]\ and +defunctionalization}, first described by J.C. Reynolds \cite[reynolds98]\ and seems to apply well to this situation. One note here is that Reynolds' approach puts all the higher-order values in a single datatype. For a typed language, we will at least have to have a single datatype for each function type, since we cannot mix them. It would be even better to split these -datatypes a bit further, so that such a datatype will never hold a constructor +data-types a bit further, so that such a datatype will never hold a constructor that is never used for a particular state variable. This separation is probably a non-trivial problem, though. @@ -660,7 +660,7 @@ probably a non-trivial problem, though. value should always return a Low value. Its value does not depend on the value chosen for the don't care value. However, though when applying the above and function to \hs{Low} and \hs{undefined} results in exactly that - behviour, the result is \hs{undefined} when the arguments are swapped. + behavior, the result is \hs{undefined} when the arguments are swapped. This is because the first pattern forces the first argument to be evaluated. If it is \hs{undefined}, evaluation is halted and an exception is show, which is not what is intended. diff --git a/Chapters/HardwareDescription.tex b/Chapters/HardwareDescription.tex index f1bf5af..407559f 100644 --- a/Chapters/HardwareDescription.tex +++ b/Chapters/HardwareDescription.tex @@ -183,7 +183,7 @@ and3 a b c = and (and a b) c expressions, pattern matching and guards. 
An obvious way to add choice to our language without having to recognize - any of Haskell's syntax, would be to add a primivite \quote{\hs{if}} + any of Haskell's syntax, would be to add a primitive \quote{\hs{if}} function. This function would take three arguments: the condition, the value to return when the condition is true and the value to return when the condition is false. @@ -199,7 +199,7 @@ and3 a b c = and (and a b) c equality comparisons against the constructors in the \hs{case} expressions. In \in{example}[ex:Inv] two versions of an inverter are shown. The first - uses a simple \hs{case} expression, scrutinizing a boolean value. The + uses a simple \hs{case} expression, scrutinizing a Boolean value. The corresponding architecture has a comparator to determine which of the constructors is on the \hs{in} input. There is a multiplexer to select the output signal (which is just a conditional assignment in the generated @@ -396,7 +396,7 @@ and3 a b c = and (and a b) c \stopdesc \startdesc{\hs{SizedWord}, \hs{SizedInt}} These are types to represent integers. A \hs{SizedWord} is unsigned, - while a \hs{SizedInt} is signed. These types are parameterized by a + while a \hs{SizedInt} is signed. These types are parametrized by a length type, so you can define an unsigned word of 32 bits wide as follows: @@ -418,7 +418,7 @@ and3 a b c = and (and a b) c has a fixed length. It has two type parameters: its length and the type of the elements contained in it. By putting the length parameter in the type, the length of a vector can be determined - at compile time, instead of only at runtime for conventional lists. + at compile time, instead of only at run-time for conventional lists. The \hs{Vector} type constructor takes two type arguments: the length of the vector and the type of the elements contained in it. The state @@ -438,12 +438,12 @@ and3 a b c = and (and a b) c \stopdesc \startdesc{\hs{RangedWord}} This is another type to describe integers, but unlike the previous - two it has no specific bitwidth, but an upper bound. This means that + two it has no specific bit-width, but an upper bound. This means that its range is not limited to powers of two, but can be any number. A \hs{RangedWord} only has an upper bound, its lower bound is implicitly zero. There is a lot of added implementation complexity when adding a lower bound and having just an upper bound was enough - for the primary purpose of this type: typesafely indexing vectors. + for the primary purpose of this type: type-safely indexing vectors. To define an index for the 8 element vector above, we would do: @@ -465,7 +465,7 @@ and3 a b c = and (and a b) c \subsection{User-defined types} There are three ways to define new types in Haskell: algebraic - datatypes with the \hs{data} keyword, type synonyms with the \hs{type} + data-types with the \hs{data} keyword, type synonyms with the \hs{type} keyword and type renamings with the \hs{newtype} keyword. \GHC\ offers a few more advanced ways to introduce types (type families, existential typing, \small{GADT}s, etc.) which are not standard @@ -496,7 +496,7 @@ and3 a b c = and (and a b) c These types are translated to \VHDL\ record types, with one field for every field in the constructor. This translation applies to all single - constructor algebraic datatypes, including those with just one + constructor algebraic data-types, including those with just one field (which are technically not a product, but generate a VHDL record for implementation simplicity). 
\stopdesc @@ -504,7 +504,7 @@ and3 a b c = and (and a b) c \defref{enumerated types} An enumerated type is an algebraic datatype with multiple constructors, but none of them have fields. This is essentially a way to get an - enum-like type containing alternatives. + enumeration-like type containing alternatives. Note that Haskell's \hs{Bool} type is also defined as an enumeration type, but we have a fixed translation for that. @@ -555,7 +555,7 @@ and3 a b c = and (and a b) c Obviously, duplication detection could be used to reuse a particular field for another constructor, but this would only partially solve the problem. If two fields would be, for - example, an array of 8 bits and an 8 bit unsiged word, these are + example, an array of 8 bits and an 8 bit unsigned word, these are different types and could not be shared. However, in the final hardware, both of these types would simply be 8 bit connections, so we have a 100\% size increase by not sharing these. @@ -662,7 +662,7 @@ quadruple n = mul (mul n) \in{section}[sec:description:application]. It might mean that a partial application is passed around quite a bit (even beyond function boundaries), but eventually, the partial application will become - completely applied. An example of this principe is given in + completely applied. An example of this principle is given in \in{section}[sec:normalization:defunctionalization]. \section{Costless specialization} @@ -679,13 +679,13 @@ quadruple n = mul (mul n) machine code has implications for speed (due to less efficient caching) and memory usage. For normal compilation, it is therefore important to keep the amount of functions limited and maximize the code sharing - (though there is a tradeoff between speed and memory usage here). + (though there is a trade-off between speed and memory usage here). When generating hardware, this is hardly an issue. Having more \quote{code sharing} does reduce the amount of \small{VHDL} output (Since different component instantiations still share the same component), but after synthesis, the amount of hardware generated is not affected. This - means there is no tradeoff between speed and memory (or rather, + means there is no trade-off between speed and memory (or rather, chip area) usage. In particular, if we would duplicate all functions so that there is a @@ -704,7 +704,7 @@ quadruple n = mul (mul n) \defref{specialization} Given some function that has a \emph{domain} $D$ (\eg, the set of all possible arguments that the function could be applied to), we - create a specialized function with exactly the same behaviour, but + create a specialized function with exactly the same behavior, but with a domain $D' \subset D$. This subset can be chosen in all sorts of ways. Any subset is valid for the general definition of specialization, but in practice only some of them provide useful @@ -770,7 +770,7 @@ quadruple n = mul (mul n) When generating hardware, polymorphism cannot be easily translated. How many wires will you lay down for a value that could have any type? When type classes are involved, what hardware components will you lay down for - a class method (whose behaviour depends on the type of its arguments)? + a class method (whose behavior depends on the type of its arguments)? Note that Cλash currently does not allow user-defined type classes, but does partly support some of the built-in type classes (like \hs{Num}). 
@@ -801,10 +801,10 @@ quadruple n = mul (mul n) stateless (or, \emph{combinational}) design, every output is directly and solely dependent on the inputs. In a stateful design, the outputs can depend on the history of inputs, or the \emph{state}. State is usually stored in \emph{registers}, - which retain their value during a clockcycle, and are typically updated at - the start of every clockcycle. Since the updating of the state is tightly + which retain their value during a clock cycle, and are typically updated at + the start of every clock cycle. Since the updating of the state is tightly coupled (synchronized) to the clock signal, these state updates are often - called \emph{synchronous} behaviour. + called \emph{synchronous} behavior. \todo{Sidenote? Registers can contain any (complex) type} @@ -837,7 +837,7 @@ quadruple n = mul (mul n) Purity is an important property in functional languages, since it enables all kinds of mathematical reasoning and - optimizattions with pure functions, that are not guaranteed to + optimizations with pure functions, that are not guaranteed to be correct for impure functions. An example of a pure function is the square root function @@ -863,7 +863,7 @@ quadruple n = mul (mul n) \subsubsection{Stream arguments and results} Including the entire history of each input (\eg, the value of that - input for each previous clockcycle) is an obvious way to make outputs + input for each previous clock cycle) is an obvious way to make outputs depend on all previous input. This is easily done by making every input a list instead of a single value, containing all previous values as well as the current value. @@ -878,7 +878,7 @@ quadruple n = mul (mul n) ForSyDe, ...} Make functions operate on complete streams. This means that a function is no longer called on every cycle, but just once. It takes stream as inputs instead of values, where each stream contains - all the values for every clockcycle since system start. This is easily + all the values for every clock cycle since system start. This is easily modeled using an (infinite) list, with one element for each clock cycle. Since the function is only evaluated once, its output must also be a stream. Note that, since we are working with infinite lists and @@ -887,14 +887,14 @@ quadruple n = mul (mul n) Since our inputs and outputs are streams, all other (intermediate) values must be streams. All of our primitive operators (\eg, addition, - substraction, bitwise operations, etc.) must operate on streams as + subtraction, bit-wise operations, etc.) must operate on streams as well (note that changing a single-element operation to a stream operation can done with \hs{map}, \hs{zipwith}, etc.). This also means that primitive operations from an existing language such as Haskell cannot be used directly (including some syntax constructs, like \hs{case} expressions and \hs{if} - expressions). This mkes this approach well suited for use in + expressions). This makes this approach well suited for use in \small{EDSL}s, since those already impose these same limitations. \refdef{EDSL} @@ -1005,7 +1005,7 @@ acc in s = (s', out) \subsubsection{Substates} Since a function's state is reflected directly in its type signature, - if a function calls other stateful functions (\eg, has subcircuits), it + if a function calls other stateful functions (\eg, has sub-circuits), it has to somehow know the current state for these called functions. The only way to do this, is to put these \emph{substates} inside the caller's state. 
This means that a function's state is the sum of the @@ -1033,14 +1033,14 @@ acc in s = (s', out) We need some way to know which arguments should become input ports and which argument(s?) should become the current state (\eg, be bound to the register outputs). This does not hold just for the top - level function, but also for any subfunction. Or could we perhaps - deduce the statefulness of subfunctions by analyzing the flow of data + level function, but also for any sub-function. Or could we perhaps + deduce the statefulness of sub-functions by analyzing the flow of data in the calling functions? - To explore this matter, the following observeration is interesting: we - get completely correct behaviour when we put all state registers in + To explore this matter, the following observation is interesting: we + get completely correct behavior when we put all state registers in the top level entity (or even outside of it). All of the state - arguments and results on subfunctions are treated as normal input and + arguments and results on sub-functions are treated as normal input and output ports. Effectively, a stateful function results in a stateless hardware component that has one of its input ports connected to the output of a register and one of its output ports connected to the @@ -1049,23 +1049,23 @@ acc in s = (s', out) \todo{Example} Of course, even though the hardware described like this has the - correct behaviour, unless the layout tool does smart optimizations, + correct behavior, unless the layout tool does smart optimizations, there will be a lot of extra wire in the design (since registers will not be close to the component that uses them). Also, when working with the generated \small{VHDL} code, there will be a lot of extra ports just to pass on state values, which can get quite confusing. To fix this, we can simply \quote{push} the registers down into the - subcircuits. When we see a register that is connected directly to a - subcircuit, we remove the corresponding input and output port and put - the register inside the subcircuit instead. This is slightly less + sub-circuits. When we see a register that is connected directly to a + sub-circuit, we remove the corresponding input and output port and put + the register inside the sub-circuit instead. This is slightly less trivial when looking at the Haskell code instead of the resulting circuit, but the idea is still the same. \todo{Example} However, when applying this technique, we might push registers down - too far. When you intend to store a result of a stateless subfunction + too far. When you intend to store a result of a stateless sub-function in the caller's state and pass the current value of that state variable to that same function, the register might get pushed down too far. It is impossible to distinguish this case from similar code where @@ -1083,14 +1083,14 @@ acc in s = (s', out) \stopitemize The first option causes (non-obvious) exceptions in the language - intepretation. Also, automatically determining where registers should + interpretation. Also, automatically determining where registers should end up is easier to implement correctly with explicit annotations, so for these reasons we will look at how this annotations could work. 
\todo{Sidenote: one or more state arguments?} \subsection[sec:description:stateann]{Explicit state annotation} - To make our stateful descriptions unambigious and easier to translate, + To make our stateful descriptions unambiguous and easier to translate, we need some way for the developer to describe which arguments and results are intended to become stateful. @@ -1132,7 +1132,7 @@ acc in s = (s', out) problem for generating hardware. This is expected for functions that describe infinite recursion. In that - case, we cannot generate hardware that shows correct behaviour in a single + case, we cannot generate hardware that shows correct behavior in a single cycle (at best, we could generate hardware that needs an infinite number of cycles to complete). @@ -1164,7 +1164,7 @@ acc in s = (s', out) This does imply the extra requirement that the base case is detectable at compile time. In particular, this means that the decision between the base - case and the recursive case must not depend on runtime data. + case and the recursive case must not depend on run-time data. \subsection{List recursion} The most common deciding factor in recursion is the length of a list that is @@ -1182,15 +1182,15 @@ acc in s = (s', out) False -> head xs + sum (tail xs) \stophaskell - However, the Haskell typechecker will now use the following reasoning. + However, the Haskell type-checker will now use the following reasoning. For simplicity, the element type of a vector is left out, all vectors are assumed to have the same element type. Below, we write conditions on type variables before the \hs{=>} operator. This is not completely - valid Haskell syntax, but serves to illustrate the typechecker + valid Haskell syntax, but serves to illustrate the type-checker reasoning. Also note that a vector can never have a negative length, so \hs{Vector n} implicitly means \hs{(n >= 0) => Vector n}. - \todo{This typechecker disregards the type signature} + \todo{This type-checker disregards the type signature} \startitemize \item tail has the type \hs{(n > 0) => Vector n -> Vector (n - 1)} \item This means that xs must have the type \hs{(n > 0) => Vector n} @@ -1207,8 +1207,8 @@ acc in s = (s', out) \stopitemize As you can see, using a simple \hs{case} expression at value level causes - the type checker to always typecheck both alternatives, which cannot be - done. The typechecker is unable to distinguish the two case + the type checker to always type-check both alternatives, which cannot be + done. The type-checker is unable to distinguish the two case alternatives (this is partly possible using \small{GADT}s, but that approach faced other problems \todo{ref christiaan?}). diff --git a/Chapters/Introduction.tex b/Chapters/Introduction.tex index 1895080..f6e65f9 100644 --- a/Chapters/Introduction.tex +++ b/Chapters/Introduction.tex @@ -24,7 +24,7 @@ advanced types and provides a case study. As a motivating example, consider the simple functional program shown in \in{example}[ex:AndWord]\footnote[notfinalsyntax]{This example is not in the final Cλash syntax}. This is a very natural way to describe a lot of parallel not - ports, that perform a bitwise not on a bitvector. The example also shows an + ports, that perform a bit-wise not on a bit-vector. The example also shows an image of the architecture described. \startbuffer[AndWord] @@ -76,7 +76,7 @@ advanced types and provides a case study. 
In this example we see a recursive function \hs{sum'} that recurses over a list and takes an accumulator argument that stores the sum so far. On each - step of the recusion, another number from the input vector is added to the + step of the recursion, another number from the input vector is added to the accumulator and each intermediate step returns its result. This is a nice description of a series of sequential adders that produce @@ -204,13 +204,13 @@ advanced types and provides a case study. \stopquotation \setupquotation[style=normal,spacebefore=] - We can further split this into subquestions from a hardware perspective: + We can further split this into sub-questions from a hardware perspective: \startitemize \item How can we describe a stateful design? \item How can we describe (hierarchical) structure in a design? \stopitemize - And subquestions from a functional perspective: + And sub-questions from a functional perspective: \startitemize \item How to interpret recursion in descriptions? \item How to interpret polymorphism? @@ -264,7 +264,7 @@ hardware descriptions, Haskell. The possibilities and limits of this prototype are further explored. During the creation of the prototype, it became apparent that some structured -way of doing program transformations was required. Doing ad-hoc interpretation +way of doing program transformations was required. Doing ad hoc interpretation of the hardware description proved non-scalable. These transformations and their application are the subject of the fourth chapter. diff --git a/Chapters/Normalization.tex b/Chapters/Normalization.tex index 77dd977..8230547 100644 --- a/Chapters/Normalization.tex +++ b/Chapters/Normalization.tex @@ -224,7 +224,7 @@ -> State (Word, Word) -> (State (Word, Word), Word) - -- All arguments are an inital lambda + -- All arguments are an initial lambda -- (address, data, packed state) regbank = λa.λd.λsp. -- There are nested let expressions at top level @@ -374,7 +374,7 @@ | \italic{builtinapp} \stopbuffer - \placedefinition[][def:IntendedNormal]{Definition of the intended nnormal form using an \small{EBNF}-like syntax.} + \placedefinition[][def:IntendedNormal]{Definition of the intended normal form using an \small{EBNF}-like syntax.} {\defref{intended normal form definition} \typebufferlam{IntendedNormal}} @@ -387,7 +387,7 @@ ports are mapped to local signals (\italic{userarg}). Some of the others use a built-in construction (\eg\ the \lam{case} expression) or call a built-in function (\italic{builtinapp}) such as \lam{+} or \lam{map}. - For these, a hardcoded \small{VHDL} translation is available. + For these, a hard-coded \small{VHDL} translation is available. \section[sec:normalization:transformation]{Transformation notation} To be able to concisely present transformations, we use a specific format @@ -412,7 +412,7 @@ Nevertheless, we will more precisely specify their meaning below: \startdesc{} The expression pattern that will be matched - against (subexpressions of) the expression to be transformed. We call this a + against (sub-expressions of) the expression to be transformed. We call this a pattern, because it can contain \emph{placeholders} (variables), which match any expression or binder. Any such placeholder is said to be \emph{bound} to the expression it matches. It is convention to use an uppercase letter (\eg\ @@ -476,7 +476,7 @@ To understand this notation better, the step by step application of the η-expansion transformation to a simple \small{ALU} will be shown. 
Consider η-expansion, which is a common transformation from - labmda calculus, described using above notation as follows: + lambda calculus, described using above notation as follows: \starttrans E \lam{E :: a -> b} @@ -506,7 +506,7 @@ High -> (-) \stoplambda - There are a few subexpressions in this function to which we could possibly + There are a few sub-expressions in this function to which we could possibly apply the transformation. Since the pattern of the transformation is only the placeholder \lam{E}, any expression will match that. Whether the transformation applies to an expression is thus solely decided by the @@ -610,7 +610,7 @@ We look at the \lam{} pattern, which is \lam{E}. This means we bind \lam{E} to \lam{(+)}. We then replace the expression - with the \lam{}, replacing all occurences of + with the \lam{}, replacing all occurrences of \lam{E} with \lam{(+)}. In the \lam{}, the This gives us the replacement expression: \lam{λx.(+) x} (A lambda expression binding \lam{x}, with a body that applies the addition operator to \lam{x}). @@ -674,7 +674,7 @@ A \emph{hardware representable} (or just \emph{representable}) type or value is (a value of) a type that we can generate a signal for in hardware. For example, a bit, a vector of bits, a 32 bit unsigned word, etc. Values that are - not runtime representable notably include (but are not limited to): types, + not run-time representable notably include (but are not limited to): types, dictionaries, functions. \defref{representable} @@ -728,8 +728,8 @@ \stoplambda Now, we have replaced the \lam{a} binder with a reference to the \lam{x} - binder. No harm done here. But note that we see multiple occurences of the - \lam{c} binder. The first is a binding occurence, to which the second refers. + binder. No harm done here. But note that we see multiple occurrences of the + \lam{c} binder. The first is a binding occurrence, to which the second refers. The last, however refers to \emph{another} instance of \lam{c}, which is bound somewhere outside of this expression. Now, if we would apply beta reduction without taking heed of binder uniqueness, we would get: @@ -752,7 +752,7 @@ can be accessed. There are a number of ways to solve this. \small{GHC} has isolated this - problem to their binder substitution code, which performs \emph{deshadowing} + problem to their binder substitution code, which performs \emph{de-shadowing} during its expression traversal. This means that any binding that shadows another binding on a higher level is replaced by a new binder that does not shadow any other binding. This non-shadowing invariant is enough to prevent @@ -760,13 +760,13 @@ In our transformation system, maintaining this non-shadowing invariant is a bit harder to do (mostly due to implementation issues, the prototype - does not use \small{GHC}'s subsitution code). Also, the following points + does not use \small{GHC}'s substitution code). Also, the following points can be observed. \startitemize - \item Deshadowing does not guarantee overall uniqueness. For example, the + \item De-shadowing does not guarantee overall uniqueness. For example, the following (slightly contrived) expression shows the identifier \lam{x} bound in - two seperate places (and to different values), even though no shadowing + two separate places (and to different values), even though no shadowing occurs. \startlambda @@ -779,10 +779,10 @@ unique. 
\item When we know that all binders in an expression are unique, moving around - or removing a subexpression will never cause any binder conflicts. If we have - some way to generate fresh binders, introducing new subexpressions will not + or removing a sub-expression will never cause any binder conflicts. If we have + some way to generate fresh binders, introducing new sub-expressions will not cause any problems either. The only way to cause conflicts is thus to - duplicate an existing subexpression. + duplicate an existing sub-expression. \stopitemize Given the above, our prototype maintains a unique binder invariant. This @@ -830,7 +830,7 @@ In some of the transformations in this chapter, we need to perform substitution on an expression. Substitution means replacing every - occurence of some expression (usually a variable reference) with + occurrence of some expression (usually a variable reference) with another expression. There have been a lot of different notations used in literature for @@ -841,7 +841,7 @@ E[A=>B] \stoplambda - This means expression \lam{E} with all occurences of \lam{A} replaced + This means expression \lam{E} with all occurrences of \lam{A} replaced with \lam{B}. \stopframedtext } @@ -864,12 +864,12 @@ In our transformation system, this step helps to remove unwanted lambda abstractions (basically all but the ones at the top level). Other transformations (application propagation, non-representable inlining) make - sure that most lambda abstractions will eventually be reducable by + sure that most lambda abstractions will eventually be reducible by β-reduction. Note that β-reduction also works on type lambda abstractions and type applications as well. This means the substitution below also works on - type variables, in the case that the binder is a type variable and teh + type variables, in the case that the binder is a type variable and the expression applied to is a type. \starttrans @@ -1042,7 +1042,7 @@ This transform takes simple top level bindings generated by the \small{GHC} compiler. \small{GHC} sometimes generates very simple \quote{wrapper} bindings, which are bound to just a variable - reference, or contain just a (partial) function appliation with + reference, or contain just a (partial) function application with the type and dictionary arguments filled in (such as the \lam{(+)} in the example below). @@ -1088,9 +1088,9 @@ allowed in \VHDL\ architecture names\footnote{Technically, it is allowed to use non-alphanumerics when using extended identifiers, but it seems that none of the tooling likes - extended identifiers in filenames, so it effectively does not + extended identifiers in file names, so it effectively does not work.}, so the entity would be called \quote{w7aA7f} or - something similarly meaningless and autogenerated). + something similarly meaningless and auto-generated). \subsection{Program structure} These transformations are aimed at normalizing the overall structure @@ -1384,10 +1384,10 @@ \refdef{global variable} Note that references to \emph{global variables} (like a top level - function without arguments, but also an argumentless dataconstructors + function without arguments, but also an argumentless data-constructors like \lam{True}) are also simplified. Only local variables generate signals in the resulting architecture. 
Even though argumentless - dataconstructors generate constants in generated \VHDL\ code and could be + data-constructors generate constants in generated \VHDL\ code and could be mapped to an input port directly, they are still simplified to make the normal form more regular. @@ -1424,7 +1424,7 @@ reference. \item (A partial application of) a top level function (either built-in on user-defined). The function extraction transformation described in - this section takes care of turning every functiontyped argument into + this section takes care of turning every function-typed argument into (a partial application of) a top level function. \item Any expression that is not representable and does not have a function type. Since these can be any expression, there is no @@ -1433,7 +1433,7 @@ previous two categories. This means that \emph{any} Core expression that is used as an argument to a built-in function will be either transformed into one of the above categories, or end up in this - categorie. In any case, the result is in normal form. + category. In any case, the result is in normal form. \stopitemize As noted, the argument simplification will handle any representable @@ -1452,7 +1452,7 @@ translation rules needed for built-in functions, since they only need to handle (partial applications of) top level functions. - Any free variables occuring in the extracted arguments will become + Any free variables occurring in the extracted arguments will become parameters to the new global function. The original argument is replaced with a reference to the new function, applied to any free variables from the original argument. @@ -1464,7 +1464,7 @@ even more complicated expressions). \starttrans - M N \lam{M} is (a partial aplication of) a built-in function. + M N \lam{M} is (a partial application of) a built-in function. --------------------- \lam{f0 ... fn} are all free local variables of \lam{N} M (x f0 ... fn) \lam{N :: a -> b} ~ \lam{N} is not a (partial application of) a top level function @@ -1486,7 +1486,7 @@ Note that the function \lam{f} will still need normalization after this. - \subsection{Case normalisation} + \subsection{Case normalization} The transformations in this section ensure that case statements end up in normal form. @@ -1516,7 +1516,7 @@ False -> b \stopbuffer - \transexample{letflat}{Case normalisation}{from}{to} + \transexample{letflat}{Case normalization}{from}{to} \placeintermezzo{}{ @@ -1532,7 +1532,7 @@ The Haskell syntax offers the underscore as a wild binder that cannot even be referenced (It can be seen as introducing a new, - anonymous, binder everytime it is used). + anonymous, binder every time it is used). In these transformations, the term wild binder will sometimes be used to indicate that a binder must not be referenced. @@ -1600,7 +1600,7 @@ let bindings for each of the alternatives' value and a single selector case to select the right value out of these. - Technically, the defintion of this transformation would require + Technically, the definition of this transformation would require that the constructor for every alternative has exactly the same amount (\lam{m}) of arguments, but of course this transformation also applies when this is not the case. @@ -1789,7 +1789,7 @@ \startlambda f 1 2 \stoplambda - the subexpression \lam{f 1} has a function type. But this is + the sub-expression \lam{f 1} has a function type. But this is allowed, since it is inside a complete application. 
\stopitemize @@ -1918,7 +1918,7 @@ first argument and applies that function twice to the second argument. Again, we have made the function monomorphic for clarity, even though this function would be a lot more useful if it was polymorphic. The - function \lam{main} uses \lam{twice} to apply a lambda epression twice. + function \lam{main} uses \lam{twice} to apply a lambda expression twice. When faced with a user defined function, a body is available for that function. This means we could create a specialized version of the @@ -1955,7 +1955,7 @@ \subsubsection{Literals} There are a limited number of literals available in Haskell and Core. \refdef{enumerated types} When using (enumerating) algebraic - datatypes, a literal is just a reference to the corresponding data + data-types, a literal is just a reference to the corresponding data constructor, which has a representable type (the algebraic datatype) and can be translated directly. This also holds for literals of the \hs{Bool} Haskell type, which is just an enumerated type. @@ -1965,7 +1965,7 @@ literals for all three integer types supported (\hs{SizedWord}, \hs{SizedInt} and \hs{RangedWord}). This is implemented using Haskell's \hs{Num} type class, which offers a \hs{fromInteger} method - that converts any \hs{Integer} to the Cλash datatypes. + that converts any \hs{Integer} to the Cλash data-types. When \GHC\ sees integer literals, it will automatically insert calls to the \hs{fromInteger} method in the resulting Core expression. For @@ -2059,11 +2059,11 @@ (λb -> add b 1) (add 1 x) \stopbuffer - \transexample{nonrepinline}{Nonrepresentable binding inlining}{from}{to} + \transexample{nonrepinline}{Non-representable binding inlining}{from}{to} \subsubsection[sec:normalization:specialize]{Function specialization} This transform removes arguments to user-defined functions that are - not representable at runtime. This is done by creating a + not representable at run-time. This is done by creating a \emph{specialized} version of the function that only works for one particular value of that argument (in other words, the argument can be removed). @@ -2102,7 +2102,7 @@ exactly one free variable, itself, we would only replace that argument with itself). - This shows that any free local variables that are not runtime + This shows that any free local variables that are not run-time representable cannot be brought into normal form by this transform. We rely on an inlining or β-reduction transformation to replace such a variable with an expression we can propagate again. @@ -2136,7 +2136,7 @@ The argument that we are specializing for, \lam{Y_i}, is put inside the new function body. The old function body is applied to it. Since we use this new function only in place of an application with that - particular argument \lam{Y_i}, behaviour should not change. + particular argument \lam{Y_i}, behavior should not change. Note that the types of the arguments of our new function are taken from the types of the \emph{actual} arguments (\lam{T0 ... Tn}). This @@ -2177,7 +2177,7 @@ A possible solution to this would be to use the following alternative transformation, which is of course no longer normal β-reduction. 
The - followin transformation has not been tested in the prototype, but is + following transformation has not been tested in the prototype, but is given here for future reference: \starttrans @@ -2250,7 +2250,7 @@ For this particular problem, the solutions for duplication of work seem from the previous section seem to fix the determinism of our transformation system as well. However, it is likely that there are - other occurences of this problem. + other occurrences of this problem. \subsection[sec:normalization:castproblems]{Casts} We do not fully understand the use of cast expressions in Core, so @@ -2268,14 +2268,14 @@ possible to write descriptions which are in intended normal form, but cannot be translated into \VHDL\ in a meaningful way (\eg, a function that swaps two substates in its result, or a - function that changes a substate itself instead of passing it to - a subfunction). + function that changes a sub-state itself instead of passing it to + a sub-function). It is now up to the programmer to not do anything funny with these state values, whereas the normalization just tries not to mess up the flow of state values. In practice, there are situations where a Core program that \emph{could} be a valid - stateful description is not translateable by the prototype. This + stateful description is not translatable by the prototype. This most often happens when statefulness is mixed with pattern matching, causing a state input to be unpacked multiple times or be unpacked and repacked only in some of the code paths. @@ -2297,7 +2297,7 @@ When looking at the system of transformations outlined above, there are a number of questions that we can ask ourselves. The main question is of course: \quote{Does our system work as intended?}. We can split this question into a - number of subquestions: + number of sub-questions: \startitemize[KR] \item[q:termination] Does our system \emph{terminate}? Since our system will @@ -2489,7 +2489,7 @@ will of course leave the meaning unchanged and is thus \emph{sound}. - The current prototype has only been verified in an ad-hoc fashion + The current prototype has only been verified in an ad hoc fashion by inspecting (the code for) each transformation. A more formal verification would be more appropriate. @@ -2508,12 +2508,12 @@ assign a meaning to Core expressions that assumes neither. Since each of the transformations can be applied to any - subexpression as well, there is a constraint on our meaning + sub-expression as well, there is a constraint on our meaning definition: the meaning of an expression should depend only on the - meaning of subexpressions, not on the expressions themselves. For + meaning of sub-expressions, not on the expressions themselves. For example, the meaning of the application in \lam{f (let x = 4 in x)} should be the same as the meaning of the application in \lam{f - 4}, since the argument subexpression has the same meaning (though + 4}, since the argument sub-expression has the same meaning (though the actual expression is different). \subsection{Completeness} diff --git a/Chapters/Prototype.tex b/Chapters/Prototype.tex index ff24bb0..317e2dc 100644 --- a/Chapters/Prototype.tex +++ b/Chapters/Prototype.tex @@ -26,7 +26,7 @@ are convenient for describing hardware and can contain special constructs that allows our hardware descriptions to be more powerful or concise. - \item Use an existing language and create a new backend for it. This has + \item Use an existing language and create a new back-end for it. 
This has the advantage that existing tools can be reused, which will speed up development. \stopitemize @@ -49,7 +49,7 @@ } Considering that we required a prototype which should be working quickly, and that implementing parsers, semantic checkers and especially - typcheckers is not exactly the Core of this research (but it is lots and + type-checkers is not exactly the Core of this research (but it is lots and lots of work!), using an existing language is the obvious choice. This also has the advantage that a large set of language features is available to experiment with and it is easy to find which features apply well and @@ -117,7 +117,7 @@ \stopalignment \blank[medium] In this thesis the words \emph{translation}, \emph{compilation} and - sometimes \emph{synthesis} will be used interchangedly to refer to the + sometimes \emph{synthesis} will be used interchangeably to refer to the process of translating the hardware description from the Haskell language to the \VHDL\ language. @@ -131,7 +131,7 @@ Note that we will be using \small{VHDL} as our output language, but will not use its full expressive power. Our output will be limited to using - simple, structural descriptions, without any complex behavioural + simple, structural descriptions, without any complex behavioral descriptions like arbitrary sequential statements (which might not be supported by all tools). This ensures that any tool that works with \VHDL\ will understand our output (most tools do not support @@ -196,7 +196,7 @@ complete Haskell language and is thus a very complex one (in contrast with the Core \small{AST}, later on). All identifiers in this \small{AST} are resolved by the renamer and all types are checked by the - typechecker. + type-checker. \stopdesc \startdesc{Desugaring} This steps takes the full \small{AST} and translates it to the @@ -207,7 +207,7 @@ \stopdesc \startdesc{Simplification} Through a number of simplification steps (such as inlining, common - subexpression elimination, etc.) the Core program is simplified to make + sub-expression elimination, etc.) the Core program is simplified to make it faster or easier to process further. \stopdesc \startdesc{Backend} @@ -217,18 +217,18 @@ \stopdesc In this process, there are a number of places where we can start our work. - Assuming that we do not want to deal with (or modify) parsing, typechecking - and other frontend business and that native code is not really a useful + Assuming that we do not want to deal with (or modify) parsing, type-checking + and other front end business and that native code is not really a useful format anymore, we are left with the choice between the full Haskell \small{AST}, or the smaller (simplified) Core representation. The advantage of taking the full \small{AST} is that the exact structure of the source program is preserved. We can see exactly what the hardware description looks like and which syntax constructs were used. However, - the full \small{AST} is a very complicated datastructure. If we are to + the full \small{AST} is a very complicated data-structure. If we are to handle everything it offers, we will quickly get a big compiler. - Using the Core representation gives us a much more compact datastructure + Using the Core representation gives us a much more compact data-structure (a Core expression only uses 9 constructors). Note that this does not mean that the Core representation itself is smaller, on the contrary. 
@@ -255,7 +255,7 @@
 % Create objects
 save inp, front, norm, vhdl, out;
 newEmptyBox.inp(0,0);
- newBox.front(btex \small{GHC} frontend etex);
+ newBox.front(btex \small{GHC} front-end etex);
 newBox.norm(btex Normalization etex);
 newBox.vhdl(btex \small{VHDL} generation etex);
 newEmptyBox.out(0,0);
@@ -282,7 +282,7 @@
 \placefigure[right]{Cλash compiler pipeline}{\startboxed
 \useMPgraphic{clash-pipeline}\stopboxed}

 \startdesc{Frontend}
- This is exactly the frontend from the \small{GHC} pipeline, that
+ This is exactly the front-end from the \small{GHC} pipeline, that
 translates Haskell sources to a typed Core representation.
 \stopdesc
 \startdesc{Normalization}
@@ -343,7 +343,7 @@
 binder name should of course be bound in a containing scope
 (including top level scope, so a reference to a top level function
 is also a variable reference). Additionally, constructors from
- algebraic datatypes also become variable references.
+ algebraic data-types also become variable references.

 In our examples, binders will commonly consist of a single
 character, but they can have any length.
@@ -365,7 +365,7 @@
 10
 \stoplambda
 This is a literal. Only primitive types are supported, like
- chars, strings, ints and doubles. The types of these literals are the
+ chars, strings, integers and doubles. The types of these literals are the
 \quote{primitive}, unboxed versions, like \lam{Char\#} and \lam{Word\#},
 not the normal Haskell versions (but there are built-in conversion
 functions). Without going into detail about these types, note that
@@ -434,7 +434,7 @@
 \stoplambda
 A let expression allows you to bind a binder to some value, while
 evaluating to some other value (for which that binder is in scope). This
- allows for sharing of subexpressions (you can use a binder twice) and
+ allows for sharing of sub-expressions (you can use a binder twice) and
 explicit \quote{naming} of arbitrary expressions. A binder is not
 in scope in the value it is bound to, so it is not possible
 to make recursive definitions with a non-recursive let expression
@@ -512,7 +512,7 @@
 A case expression is the only way in Core to choose between values. All
 \hs{if} expressions and pattern matchings from the original Haskell
- PRogram have been translated to case expressions by the desugarer.
+ program have been translated to case expressions by the desugarer.

 A case expression evaluates its scrutinee, which should have an
 algebraic datatype, into weak head normal form (\small{WHNF}) and
@@ -525,7 +525,7 @@
 This is best illustrated with an example. Assume
 there is an algebraic datatype declared as follows\footnote{This
- datatype is not suported by the current Cλash implementation, but
+ datatype is not supported by the current Cλash implementation, but
 serves well to illustrate the case expression}:

 \starthaskell
@@ -561,7 +561,7 @@
 To support strictness, the scrutinee is always evaluated into
 \small{WHNF}, even when there is only a \lam{DEFAULT} alternative. This
- allows aplication of the strict function \lam{f} to the argument \lam{a}
+ allows application of the strict function \lam{f} to the argument \lam{a}
 to be written like:

 \startlambda
@@ -581,7 +581,7 @@
 in Haskell. Only the constructor of an expression can be matched;
 complex patterns are implemented using multiple nested case
 expressions.
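As a minimal illustration of that nesting (names assumed, not taken from the thesis): a nested source pattern can only be matched one constructor at a time, so after desugaring it corresponds roughly to nested case expressions.

\starthaskell
-- Illustrative only: a nested pattern in the Haskell source...
headOrZero :: Maybe [Int] -> Int
headOrZero (Just (x:_)) = x
headOrZero _            = 0

-- ...can only match one constructor per case, so after desugaring it
-- corresponds to nested case expressions, roughly like this:
headOrZero' :: Maybe [Int] -> Int
headOrZero' m = case m of
  Just xs -> case xs of
    x:_ -> x
    []  -> 0
  Nothing -> 0
\stophaskell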
- Case expressions are also used for unpacking of algebraic datatypes, even
+ Case expressions are also used for unpacking of algebraic data-types, even
 when there is only a single constructor. For example, to add the elements
 of a tuple, the following Core is generated:
@@ -613,8 +613,8 @@
 different types (so a cast is needed) with the same representation (but
 no work is done by the cast).

- More complex are types that are proven to be equal by the typechecker,
- but look different at first glance. To ensure that, once the typechecker
+ More complex are types that are proven to be equal by the type-checker,
+ but look different at first glance. To ensure that, once the type-checker
 has proven equality, this information sticks around, explicit casts are
 added. In our notation we only write the target type, but in reality a
 cast expression carries around a \emph{coercion}, which can be seen as a
@@ -660,7 +660,7 @@
 The type of \lam{fst} has two universally quantified type variables. When
 \lam{fst} is applied in \lam{fstint}, it is first applied to two types
- (which are substitued for \lam{t1} and \lam{t2} in the type of \lam{fst}, so
+ (which are substituted for \lam{t1} and \lam{t2} in the type of \lam{fst}, so
 the actual type of arguments and result of \lam{fst} can be found:
 \lam{fst @Int @Int :: (Int, Int) -> Int}).
 \stopdesc
@@ -695,7 +695,7 @@
 \stopframedtext
 }
 In Core, every expression is typed. The translation to Core happens
- after the typechecker, so types in Core are always correct as well
+ after the type-checker, so types in Core are always correct as well
 (though you could of course construct invalidly typed expressions
 through the \GHC\ API).
@@ -765,7 +765,7 @@
 When using a value with a forall type, the actual type
 used must be applied first. For example, the Haskell expression \hs{id
- True} (the function \hs{id} appleid to the dataconstructor \hs{True})
+ True} (the function \hs{id} applied to the data-constructor \hs{True})
 translates to the following Core:

 \startlambda
@@ -841,7 +841,7 @@
 Haskell provides type synonyms as a way to declare a new type that is
 equal to an existing type (or rather, a new name for an existing type).
 This allows both the original type and the synonym to be used
- interchangedly in a Haskell program. This means no explicit conversion
+ interchangeably in a Haskell program. This means no explicit conversion
 is needed. For example, a simple accumulator would become:

 \starthaskell
@@ -868,8 +868,8 @@
 % section headings.
 \subsection{Type renaming (\type{newtype})}
 Haskell also supports type renamings as a way to declare a new type that
- has the same (runtime) representation as an existing type (but is in
- fact a different type to the typechecker). With type renaming,
+ has the same (run-time) representation as an existing type (but is in
+ fact a different type to the type-checker). With type renaming,
 explicit conversion between values of the two types is needed. The
 accumulator would then become:
@@ -909,8 +909,8 @@
 We cannot leave all these \hs{State} type constructors out, since that
 would change the type (unlike when using type synonyms). However, when
- using type synonyms to hide away substates (see
- \in{section}[sec:prototype:substatesynonyms] below), this
+ using type synonyms to hide away sub-states (see
+ \in{section}[sec:prototype:sub-statesynonyms] below), this
 disadvantage should be limited.
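The accumulator block that the hunk above talks around lies outside the diff context. As a rough, self-contained sketch of the explicit packing and unpacking that a \type{newtype} renaming forces (the \hs{State} wrapper and the plain \hs{Int} standing in for the Cλash word type are assumptions of this sketch, not quoted from the thesis):

\starthaskell
-- Illustrative sketch only: with a newtype, the State constructor has
-- to be matched on the way in and re-applied on the way out.
newtype State s = State s

acc :: Int -> State Int -> (State Int, Int)
acc i (State s) = (State s', s')  -- unpack with a pattern, repack with State
  where
    s' = s + i
\stophaskell

Forgetting either the match or the re-application is then a type error, which is the extra safety that the surrounding paragraphs trade off against the added noise.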
 \subsubsection{Different input and output types}
@@ -919,7 +919,7 @@
 then become something like:

 \starthaskell
- -- These type renaminges would become part of Cλash, it is shown
+ -- These type renamings would become part of Cλash, it is shown
 -- here just for clarity.
 newtype StateIn s = StateIn s
 newtype StateOut s = StateOut s
@@ -932,12 +932,12 @@
 descriptions less error-prone (you can no longer \quote{forget} to
 unpack and repack a state variable and just return it directly, which
 can be a problem in the current prototype). However, it also means we
- need twice as many type synonyms to hide away substates, making this
+ need twice as many type synonyms to hide away sub-states, making this
 approach a bit cumbersome. It also makes it harder to compare input
 and output state types, possibly reducing the type-safety of the
 descriptions.

- \subsection[sec:prototype:substatesynonyms]{Type synonyms for substates}
+ \subsection[sec:prototype:sub-statesynonyms]{Type synonyms for sub-states}
 As noted above, when using nested (hierarchical) states, the state types
 of the \quote{upper} functions (those that call other functions, which
 call other functions, etc.) quickly become complicated. Also, when the
@@ -966,7 +966,7 @@
 \subsubsection{Example}
 As an example of the used approach, a simple averaging circuit
 is shown in \in{example}[ex:AvgState]. This circuit lets the
- accumulation of the inputs be done by a subcomponent, \hs{acc},
+ accumulation of the inputs be done by a sub-component, \hs{acc},
 but keeps a count of the values accumulated in its own
 state.\footnote{Currently, the prototype is not able to compile
 this example, since there is no built-in function for division.}
@@ -1072,7 +1072,7 @@
 equal to \lam{State Word}).

 We also use a distinction between \emph{input} and \emph{output
- (state) variables} and \emph{substate variables}, which will be
+ (state) variables} and \emph{sub-state variables}, which will be
 defined in the rules themselves.

 These rules describe everything that can be done with state
@@ -1101,17 +1101,17 @@
 If the result of this unpacking does not have a state type and does
 not contain state variables, there are no limitations on its
 use (this is the function's own state). Otherwise if it does
- not have a state type but does contain substates, we refer to it
+ not have a state type but does contain sub-states, we refer to it
 as a \emph{state-containing input variable} and the limitations
 below apply. If it has a state type itself, we refer to it as an
- \emph{input substate variable} and the below limitations apply
+ \emph{input sub-state variable} and the below limitations apply
 as well.

 It may seem strange to consider a variable that still has a state
 type directly after unpacking, but consider the case where a
 function does not have any state of its own, but does call a single
 stateful function. This means it must have a state argument that
- contains just a substate. The function signature of such a function
+ contains just a sub-state. The function signature of such a function
 could look like:

 \starthaskell
@@ -1127,8 +1127,8 @@
 \stoplambda

 A state-containing input variable is typically a tuple containing
- multiple elements (like the current function's state, substates or
- more tuples containing substates). All of these can be extracted
+ multiple elements (like the current function's state, sub-states or
+ more tuples containing sub-states). All of these can be extracted
 from an input variable using an extractor case (or possibly
 multiple, when the input variable is nested).
@@ -1138,25 +1138,25 @@
 type but does contain state variables we refer to it as a
 \emph{state-containing input variable} and this limitation keeps
 applying. If the variable has a state type itself, we refer to
- it as an \emph{input substate variable} and below limitations
+ it as an \emph{input sub-state variable} and the limitations below
 apply.

- \startdesc{Input substate variables can be passed to functions.}
+ \startdesc{Input sub-state variables can be passed to functions.}
 \startlambda
 accres = acc i accs
 accs' = case accres of (e, f) -> e
 \stoplambda

- An input substate variable can (only) be passed to a function.
- Additionally, every input substate variable must be used in exactly
+ An input sub-state variable can (only) be passed to a function.
+ Additionally, every input sub-state variable must be used in exactly
 \emph{one} application, no more and no less.

 The function result should contain exactly one state variable, which
 can be extracted using (multiple) case expressions. The extracted
- state variable is referred to the \emph{output substate}
+ state variable is referred to as the \emph{output sub-state}.

- The type of this output substate must be identical to the type of
- the input substate passed to the function.
+ The type of this output sub-state must be identical to the type of
+ the input sub-state passed to the function.
 \stopdesc
 \startdesc{Variables can be inserted into a state-containing output
 variable.}
@@ -1165,7 +1165,7 @@
 A function's output state is usually a tuple containing its own
- updated state variables and all output substates. This result is
+ updated state variables and all output sub-states. This result is
 built up using any single-constructor algebraic datatype
 (possibly nested).
@@ -1179,7 +1179,7 @@
 spacked' = s' ▶ State (AccState, Word)
 \stoplambda

- As soon as all a functions own update state and output substate
+ As soon as all of a function's own updated state and output sub-state
 variables have been joined together, the resulting
 state-containing output variable can be packed into an output
 state variable. Packing is done by casting to a state type.
@@ -1200,10 +1200,10 @@
 \stopdesc

 There is one final limitation that is hard to express in the above
- itemization. Whenever substates are extracted from the input state
- to be passed to functions, the corresponding output substates
+ itemization. Whenever sub-states are extracted from the input state
+ to be passed to functions, the corresponding output sub-states
 should be inserted into the output state in the same way. In other
- words, each pair of corresponding substates in the input and
+ words, each pair of corresponding sub-states in the input and
 output states should be passed to / returned from the same called
 function.
@@ -1225,21 +1225,21 @@
 \in{section}[sec:normalization:stateproblems].

 This approach seems simple enough, but will this also work for more
- complex stateful functions involving substates? Observe that any
- component of a function's state that is a substate, \ie\ passed on as
+ complex stateful functions involving sub-states? Observe that any
+ component of a function's state that is a sub-state, \ie\ passed on as
 the state of another function, should have no influence on the
 hardware generated for the calling function. Any state-specific
 \small{VHDL} for this component can be generated entirely within the
- called function. So, we can completely ignore substates when
+ called function. So, we can completely ignore sub-states when
 generating \VHDL\ for a function.
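A minimal sketch to make that observation concrete (names assumed, plain \hs{Int}s standing in for the Cλash types, not taken from the thesis): \hs{avg}'s packed state carries \hs{acc}'s sub-state next to \hs{avg}'s own counter, and only the counter concerns the \VHDL\ generated for \hs{avg} itself.

\starthaskell
-- Illustrative only: stand-ins for the Cλash State and Word types.
newtype State s = State s
type AccState = State Int

acc :: Int -> AccState -> (AccState, Int)
acc i (State s) = (State s', s')
  where s' = s + i

avg :: Int -> State (AccState, Int) -> (State (AccState, Int), Int)
avg i (State (accs, count)) = (State (accs', count'), total `div` count')
  where
    (accs', total) = acc i accs  -- sub-state: handled entirely inside acc
    count'         = count + 1   -- avg's own state: this part gets a register
\stophaskell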
 From this observation it might seem logical to remove the
- substates from a function's states altogether and leave only the
+ sub-states from a function's states altogether and leave only the
 state components which are actual states of the current function.
 While doing this would not remove any information needed to
 generate \small{VHDL} from the function, it would cause the
 function definition to become invalid (since we will not have any
- substate to pass to the functions anymore). We could solve the
+ sub-state to pass to the functions anymore). We could solve the
 syntactic problems by passing \type{undefined} for state
 variables, but that would still break the code on the semantic
 level (\ie, the function would no longer be semantically
@@ -1248,24 +1248,24 @@
 To keep the function definition correct until the very end of the
 process, we will not deal with (sub)states until we get to the
 \small{VHDL} generation. Then, we are translating from Core to
- \small{VHDL}, and we can simply generate no \VHDL for substates,
+ \small{VHDL}, and we can simply generate no \VHDL for sub-states,
 effectively removing them altogether.

- But, how will we know what exactly is a substate? Since any state
+ But, how will we know what exactly is a sub-state? Since any state
 argument or return value that represents state must be of the
 \type{State} type, we can look at the type of a value. However, we
- must be careful to ignore only \emph{substates}, and not a
+ must be careful to ignore only \emph{sub-states}, and not a
 function's own state. For \in{example}[ex:AvgStateNormal] above,
 we should generate a register with its output connected to
 \lam{s} and its input connected to \lam{s'}. However, \lam{s'}
 is built up from both \lam{accs'} and \lam{count'}, while only
 \lam{count'} should end up in the register.
- \lam{accs'} is a substate for the \lam{acc} function, for which a
+ \lam{accs'} is a sub-state for the \lam{acc} function, for which a
 register will be created when generating \VHDL\ for the \lam{acc}
 function.

- Fortunately, the \lam{accs'} variable (and any other substate) has a
+ Fortunately, the \lam{accs'} variable (and any other sub-state) has a
 property that we can easily check: it has a \lam{State} type. This
 means that whenever \VHDL\ is generated for a tuple (or other
 algebraic type), we can simply leave out all elements that have a
@@ -1290,11 +1290,11 @@
 new state.
 \item Any values of a State type should not be translated to
 \small{VHDL}. In particular, State elements should be removed from
- tuples (and other datatypes) and arguments with a state type should
+ tuples (and other data-types) and arguments with a state type should
 not generate ports.
 \item To make the state actually work, a simple \small{VHDL}
 (sequential) process should be generated. This process updates
- the state at every clockcycle, by assigning the new state to the
+ the state at every clock cycle, by assigning the new state to the
 current state. This will be recognized by synthesis tools as a
 register specification.
 \stopitemize
@@ -1443,7 +1443,7 @@
 a custom \GHC\ build, however.

 The latest version and all history of the Cλash code can be browsed
- online or retrieved using the \type{git} program.
+ on-line or retrieved using the \type{git} program.

 http://git.stderr.nl/gitweb?p=matthijs/projects/cλash.git
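The register behaviour described in the \small{VHDL} generation items further up (the new state assigned to the current state once per clock cycle) can be mimicked in plain Haskell for simulation purposes; the \hs{run} helper below is an assumed sketch, not taken from the thesis or the Cλash code base.

\starthaskell
-- Illustrative only: the state produced in one cycle is the state
-- consumed in the next, which is what the generated register process
-- is meant to implement.
run :: (i -> s -> (s, o)) -> s -> [i] -> [o]
run _ _ []     = []
run f s (i:is) = o : run f s' is
  where (s', o) = f i s
\stophaskell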