X-Git-Url: https://git.stderr.nl/gitweb?p=matthijs%2Fmaster-project%2Freport.git;a=blobdiff_plain;f=Chapters%2FFuture.tex;h=7331e9cbea84e5ec15e51a330b1a11fa6b8493a6;hp=38d618ff89906090885a9f359ee97e36dfbfc193;hb=124f838008d9e63d36d6626ebeb453d9f83129dc;hpb=7c65fbff14123c57721d6f532c656144031b14ac

diff --git a/Chapters/Future.tex b/Chapters/Future.tex
index 38d618f..7331e9c 100644
--- a/Chapters/Future.tex
+++ b/Chapters/Future.tex
@@ -64,10 +64,10 @@ do
will be desugared into:

\starthaskell
-(somefunc a) >>= (\\x -> otherfunc x)
+(somefunc a) >>= (\x -> otherfunc x)
\stophaskell

-The \hs{\\x -> ...} notation creates a lambda abstraction in Haskell,
+The \hs{\x -> ...} notation creates a lambda abstraction in Haskell
that binds the \hs{x} variable. Here, the \hs{>>=} operator is supposed
to extract whatever result \hs{somefunc} has and pass it to the lambda expression
created. This will probably not make the monadic notation
@@ -82,9 +82,9 @@ just moved into these functions), the compiler should still be able to compile
these descriptions without any special magic (though perhaps it should
always inline the binding operators to reveal the flow of values).

-This is highlights an important aspect of using a functional language for our
+This highlights an important aspect of using a functional language for our
descriptions: We can use the language itself to provide abstractions of common
-patterns, making our code smaller.
+patterns, making our code smaller and easier to read.

\subsection{Breaking out of the Monad}
However, simply using the monad notation is not as easy as it sounds. The main
@@ -105,12 +105,12 @@ type Stateful s r = s -> (s, r)
(>>) :: Stateful s1 r1 -> Stateful s2 r2 -> Stateful (s1, s2) r2
\stophaskell

-What we see here, is that when we compose two stateful functions (whose inputs
-have already been applied, leaving just the state argument to be applied), the
-result is again a stateful function whose state is composed of the two
-\emph{substates}. The return value is simply the return value of the second
-function, discarding the first (to preserve that one, the \hs{>>=} operator
-can be used).
+What we see here is that when we compose two stateful functions (that
+have already been applied to their inputs, leaving just the state
+argument to be applied), the result is again a stateful function whose
+state is composed of the two \emph{substates}. The return value is
+simply the return value of the second function, discarding the first (to
+preserve that result, the \hs{>>=} operator can be used).

There is a trick we can apply to change the signature of the \hs{>>} operator.
\small{GHC} does not require the bind operators to be part of the \hs{Monad}
@@ -123,11 +123,12 @@ is slightly inconvenient, but since we hardly use anything from the prelude,
this is not a big problem. We might even be able to replace some of the
Prelude with hardware-translatable versions by doing this.

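+As a rough sketch (the exact \small{GHC} option involved has changed
+names between compiler versions, and the module name below is only
+illustrative), a description using the rebound operators could start
+out as follows:
+
+\starthaskell
+-- Sketch only: with the implicit Prelude disabled, do-notation is
+-- desugared using whatever (>>), (>>=) and return are in scope, so
+-- they no longer need the types of the Monad typeclass methods.
+{-# LANGUAGE NoImplicitPrelude #-}
+module StatefulSyntax where
+
+type Stateful s r = s -> (s, r)
+
+-- The rebound operators themselves are the ones defined below.
+\stophaskell
+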
-We can now define the following binding operators. For completeness, we also
-supply the return function, which is not called by \small{GHC} implicitely,
-but can be called explicitely by a hardware description.
+The binding operators can now be defined exactly as they are needed. For
+completeness, the \hs{return} function is also defined. It is not called
+by \small{GHC} implicitly, but can be called explicitly by a hardware
+description. \in{Example}[ex:StateMonad] shows these definitions.

-\starthaskell
+\startbuffer[StateMonad]
(>>) :: Stateful s1 r1 -> Stateful s2 r2 -> Stateful (s1, s2) r2
f1 >> f2 = f1 >>= \_ -> f2
@@ -138,9 +139,11 @@ f1 >>= f2 = \(s1, s2) -> let (s1', r1) = f1 s1
return :: r -> Stateful () r
return r = \s -> (s, r)
-\stophaskell
+\stopbuffer
+\placeexample[here][ex:StateMonad]{Binding operators to compose stateful computations}
+  {\typebufferhs{StateMonad}}

-As you can see, this closely resembles the boilerplate of unpacking state,
+These definitions closely resemble the boilerplate of unpacking state,
passing it to two functions and repacking the new state. With these
definitions, we could have written \in{example}[ex:NestedState] a lot
shorter, see \in{example}[ex:DoState]. In this example the type signature of
@@ -165,7 +168,7 @@ happening.
  outb <- funcb outa
  return outb
\stopbuffer
-\placeexample[here][ex:DoState]{Simple function composing two stateful
+\placeexample[][ex:DoState]{Simple function composing two stateful
functions, using do notation.}
  {\typebufferhs{DoState}}

@@ -189,6 +192,13 @@ abstractions, we could hide a lot of the boilerplate code. Extending
\small{GHC} with some new syntax sugar similar to the do notation might be
feasible.

+\subsection{Arrows}
+Another abstraction mechanism offered by Haskell is arrows. Arrows are
+a generalisation of monads \cite[hughes98], for which \GHC also supports
+some syntax sugar \cite[paterson01]. Their use for hiding away state
+boilerplate is not directly evident, but since arrows are a complex
+concept, further investigation is appropriate.
+
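+As a first direction for such an investigation, the sketch below shows
+one way state could be hidden inside an arrow. It is loosely based on
+the well-known automaton arrow; the \hs{Circuit} type and the
+\hs{accum} helper are only meant as an illustration here.
+
+\starthaskell
+import Prelude hiding (id, (.))
+import Control.Category (Category(..))
+import Control.Arrow (Arrow(..))
+
+-- An automaton-style arrow: each step consumes one input, produces one
+-- output and a new version of itself, which closes over the new state.
+newtype Circuit i o = Circuit { step :: i -> (o, Circuit i o) }
+
+instance Category Circuit where
+  id = Circuit (\i -> (i, id))
+  (Circuit g) . (Circuit f) = Circuit (\i ->
+    let (x, f') = f i
+        (o, g') = g x
+    in  (o, g' . f'))
+
+instance Arrow Circuit where
+  arr f = Circuit (\i -> (f i, arr f))
+  first (Circuit f) = Circuit (\(i, extra) ->
+    let (o, f') = f i
+    in  ((o, extra), first f'))
+
+-- A component built this way hides its state completely from its users;
+-- Word is the word type used in the other examples in this report.
+accum :: Word -> Circuit Word Word
+accum s = Circuit (\i -> let s' = s + i in (s', accum s'))
+\stophaskell
+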
\section[sec:future:pipelining]{Improved notation or abstraction for pipelining}
Since pipelining is a very common optimization for hardware systems, it should
be easy to specify a pipelined system. Since it introduces quite some registers
@@ -296,9 +306,7 @@ events (Like \quote{clock up}, \quote{clock down}, \quote{input A changed},
\quote{reset}, etc.). When working with multiple clock domains, each domain
could get its own clock events.

-As an example, we would have something like the following:
-
-\starthaskell
+\startbuffer[AsyncDesc]
data Event = ClockUp | Reset | ...

type MainState = State Word
@@ -321,30 +329,36 @@ main inp event (State acc) = (State acc', acc')
-- func is some combinatoric expression
func :: Word -> Event -> Word
func inp _ = inp * 2 + 3
-\stophaskell
+\stopbuffer
+
+\placeexample[][ex:AsyncDesc]{Hardware description using asynchronous events.}
+  {\typebufferhs{AsyncDesc}}

\todo{Picture}

-In this example, we see that every function takes an input of type
-\hs{Event}. The function \hs{main} that takes the output of
-\hs{func} and accumulates it on every clock cycle. On a reset signal, the
-accumulator is reset. The function \hs{func} is just a combinatoric
-function, with no synchronous elements. We can see this because there is no
-state and the event input is completely ignored. If the compiler is smart
-enough, we could even leave the event input out for functions that don't use
-it, either because they are completely combinatoric (like in this example), or
-because they rely on the the caller to select the clock signal.
+\in{Example}[ex:AsyncDesc] shows a simple example of this event-based
+approach. In this example we see that every function takes an input of
+type \hs{Event}. The function \hs{main} takes the output of \hs{func}
+and accumulates it on every clock cycle. On a reset signal, the
+accumulator is reset. The function \hs{func} is just a combinatoric
+function, with no synchronous elements. We can see this because there
+is no state and the event input is completely ignored. If the compiler
+is smart enough, we could even leave the event input out for functions
+that don't use it, either because they are completely combinatoric (like
+in this example), or because they rely on the caller to select the
+clock signal.

This structure is similar to the event handling structure used to perform I/O
in languages like Amanda. \todo{ref} There is a top level case expression that
decides what to do depending on the current input event.

-A slightly more complex example that shows a system with two clock domains.
-There is no real integration between the clock domains (there is one input and
-one output for each clock domain), but it does show how multiple clocks can be
-distinguished.
+A slightly more complex example is shown in
+\in{example}[ex:MulticlockDesc]. It shows a system with two clock
+domains. There is no real integration between the clock domains in this
+example (there is one input and one output for each clock domain), but
+it does show how multiple clocks can be distinguished.

-\starthaskell
+\startbuffer[MulticlockDesc]
data Event = ClockUpA | ClockUpB | ...

type MainState = State (SubAState, SubBState)
@@ -376,21 +390,25 @@ suba = ...

type SubBState = State ...
subb :: Word -> SubBState -> (SubBState, Word)
subb = ...
-\stophaskell
+\stopbuffer
+
+\placeexample[][ex:MulticlockDesc]{Hardware description with multiple clock domains through events.}
+  {\typebufferhs{MulticlockDesc}}

-Note that in the above example the \hs{suba} and \hs{subb} functions are
-\emph{always} called, to get at their combinatoric outputs. The new state
-is calculated as well, but only saved when the right clock has an up
-transition.
+Note that in \in{example}[ex:MulticlockDesc] the \hs{suba} and \hs{subb}
+functions are \emph{always} called to get at their combinatoric
+outputs. The new state is calculated as well, but only saved when the
+right clock has an up transition.

As you can see, there is some code duplication in the case expression that
selects the right clock. One of the advantages of an explicit approach like
this is that some of this duplication can be extracted away into helper
functions. For example, we could imagine a \hs{select_clock} function, which
takes a stateful function that is not aware of events, and changes it into a
-function that only updates its state on a specific (clock) event.
+function that only updates its state on a specific (clock) event. Such a
+function is shown in \in{example}[ex:SelectClock].

-\starthaskell
+\startbuffer[SelectClock]
select_clock :: Event -> (input -> State s -> (State s, output))
                -> (input -> State s -> Event -> (State s, output))
@@ -404,7 +422,9 @@ main inpa inpb event (State (sa, sb)) = (State (sa', sb'), outa, outb)
  where
    (sa', outa) = (select_clock ClockUpA suba) inpa sa event
    (sb', outb) = (select_clock ClockUpB subb) inpb sb event
-\stophaskell
+\stopbuffer
+\placeexample[][ex:SelectClock]{A function to filter clock events.}
+  {\typebufferhs{SelectClock}}

As you can see, this can greatly reduce the length of the main function, while
increasing the readability. As you might also have noted, the select\_clock
@@ -481,9 +501,7 @@
Another interesting idea is putting a higher order value inside a function's
state value. Since we can use higher order values anywhere, why not in the
state?
-As a (contrived) example, consider the following accumulator:
-
-\starthaskell
+\startbuffer[HigherAccum]
-- The accumulator function that takes a word and returns a new accumulator
-- and the result so far. This is the function we want to put inside the
-- state.
@@ -493,14 +511,18 @@ acc = \a -> (\b -> acc ( a + b ), a )
main :: Word -> State Acc -> (State Acc, Word)
main a (State s) = (State s', out)
  where (s', out) = s a
-\stophaskell
-
-This code uses a function as its state, which implicitely contains the value
-accumulated so far. This is a fairly trivial example, that is more easy to
-write with a simple \hs{Word} state value, but for more complex descriptions
-this style might pay off. Note that in a way we are using the
-\emph{continuation passing style} of writing functions, where we pass a
-sort of \emph{continuation} from one cycle to the next.
+\stopbuffer
+\placeexample[][ex:HigherAccum]{An accumulator using a higher order state.}
+  {\typebufferhs{HigherAccum}}
+
+As a (contrived) example, consider the accumulator in
+\in{example}[ex:HigherAccum]. This code uses a function as its state,
+which implicitly contains the value accumulated so far. This is a
+fairly trivial example that is easier to write with a simple \hs{Word}
+state value, but for more complex descriptions this style might pay
+off. Note that in a way we are using the \emph{continuation passing
+style} of writing functions, where we pass a sort of
+\emph{continuation} from one cycle to the next.

Our normalization process completely removes higher order values inside a
function by moving applications and higher order values around so that every