boilerplate away. This would incur no flexibility cost at all, since there are
no other ways that would work.
notation. This is meant to simplify Monad notation by hiding some of the
details. It allows one to write a list of expressions, which are composed
using the monadic sequencing operator, written in Haskell as \hs{>>}. For
There is also the \hs{>>=} (\emph{bind}) operator, which allows passing values
from one expression to the next. If we could use this notation to compose a
stateful computation from a number of other stateful functions, this could
descriptions: We can use the language itself to provide abstractions of common
patterns, making our code smaller.
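To make the pattern concrete, the following sketch shows the manual state
threading that such an abstraction would hide (all names and types here are
invented for illustration, not taken from the examples in this chapter):

```haskell
-- Manual state threading: every component takes its piece of the
-- state and returns the updated state alongside its output.
funcA :: Int -> Int -> (Int, Int)   -- state -> input -> (state', output)
funcA s x = (s + x, s + x)          -- an accumulator

funcB :: Int -> Int -> (Int, Int)
funcB s x = (s, 2 * x)              -- stateless, but must thread state anyway

-- Composing them means naming every intermediate state explicitly;
-- this is exactly the boilerplate the do notation could hide.
foo :: (Int, Int) -> Int -> ((Int, Int), Int)
foo (sa, sb) x =
  let (sa', y) = funcA sa x
      (sb', z) = funcB sb y
  in  ((sa', sb'), z)

main :: IO ()
main = print (foo (0, 0) 5)  -- → ((5,0),10)
```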
However, simply using the monad notation is not as easy as it sounds. The main
problem is that the Monad type class poses a number of limitations on the
\hs{>>} operator. Most importantly, it has the following type signature:
definitions, we could have written \in{example}[ex:NestedState] much more
concisely, see \in{example}[ex:DoState]. In this example the type signature of
\hs{foo} is the same (though it is now written using the \hs{Stateful} type
FooState -> (FooState, Word)}.
Note that the \hs{FooState} type has changed (so indirectly the type of
type FooState = ( AState, (BState, ()) )
foo :: Word -> Stateful FooState Word
foo inp = do
two functions (components) in two directions. For most Monad instances, this
is a requirement, but here it could have been different.
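As a sketch of what such a Monad instance could look like (using a newtype
wrapper, since a plain type synonym cannot be made an instance; the
\hs{register} component is invented for illustration):

```haskell
newtype Stateful s a = Stateful { runStateful :: s -> (s, a) }

instance Functor (Stateful s) where
  fmap f m = Stateful $ \s ->
    let (s', a) = runStateful m s in (s', f a)

instance Applicative (Stateful s) where
  pure a = Stateful $ \s -> (s, a)
  mf <*> ma = Stateful $ \s ->
    let (s',  f) = runStateful mf s
        (s'', a) = runStateful ma s'
    in  (s'', f a)

instance Monad (Stateful s) where
  -- bind runs the first computation and feeds both its result value
  -- and its updated state into the next one
  m >>= f = Stateful $ \s ->
    let (s', a) = runStateful m s
    in  runStateful (f a) s'

-- A register-like component: outputs the old state, stores the input.
register :: Int -> Stateful Int Int
register x = Stateful $ \s -> (x, s)

main :: IO ()
main = print (runStateful (register 1 >>= register) 0)  -- → (0,1)
```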
the best solution here. However, it does show that using fairly simple
abstractions, we could hide much of the boilerplate code. Extending
\small{GHC} with some new syntactic sugar similar to the do notation might be a
\section[sec:future:pipelining]{Improved notation or abstraction for pipelining}
Since pipelining is a very common optimization for hardware systems, it should
into an otherwise regular combinatoric system, we might look for some way to
abstract away some of the boilerplate for pipelining.
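To illustrate the boilerplate involved, here is a two-stage pipeline modelled
in the same state-passing style (all names are invented; the \hs{simulate}
driver only serves to show the cycle-by-cycle behaviour):

```haskell
-- The state is the pair of pipeline registers, updated every
-- (implicit) clock cycle.
type PipeState = (Int, Int)  -- register after stage 1, after stage 2

pipeline :: PipeState -> Int -> (PipeState, Int)
pipeline (r1, r2) x =
  let s1 = x * x        -- stage 1: combinatoric work on the fresh input
      s2 = r1 + 1       -- stage 2: works on the *previous* stage-1 result
  in  ((s1, s2), r2)    -- output is the value registered after stage 2

-- Feed a stream of inputs through, collecting one output per cycle.
simulate :: (s -> i -> (s, o)) -> s -> [i] -> [o]
simulate _ _ []     = []
simulate f s (i:is) = let (s', o) = f s i in o : simulate f s' is

main :: IO ()
main = print (simulate pipeline (0, 0) [1, 2, 3, 4])
-- → [0,1,2,5]: two cycles of latency before x*x + 1 appears
```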
This problem is slightly more complex than the problem we've seen before. One
significant difference is that each variable that crosses a stage boundary
it must be stored for a longer period and should receive multiple registers.
Since we can't find out from the combinator code where the result of the
combined values is used (at least not without using Template Haskell to
This produces cumbersome code, in which a lot still has to be stated
explicitly (though this could be hidden behind syntactic sugar).
\item Scope each variable over every subsequent pipeline stage and allocate
the maximum number of registers that \emph{could} be needed. This means we
will allocate registers that are never used, but those could be optimized
\section{Recursion}
The main problems of recursion have been described in
\in{section}[sec:recursion]. In the current implementation, recursion is
built-in functions.
Since recursion is a very important and central concept in functional
programming, supporting it would greatly improve the flexibility and elegance of our
possible, though it will remain a challenge. Further advances in
dependent typing support for Haskell will probably help here as well.
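For illustration, this is the kind of list recursion one would like to support
(a sketch; in hardware, each recursive step would have to be unrolled, which
requires the length to be fixed at compile time):

```haskell
-- A recursive dot product: with a fixed-length vector type, each
-- recursive step could be unrolled into a multiplier and an adder,
-- and the base case would end the unrolling.
dotProduct :: [Int] -> [Int] -> Int
dotProduct []     []     = 0
dotProduct (x:xs) (y:ys) = x * y + dotProduct xs ys
dotProduct _      _      = error "length mismatch"

main :: IO ()
main = print (dotProduct [1, 2, 3] [4, 5, 6])  -- → 32
```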
\todo{Reference Christiaan and other type-level work
(http://personal.cis.strath.ac.uk/conor/pub/she/)}
\item For all recursion, there is the obvious challenge of deciding when
recursion is finished. For list recursion, this might be easier (since the
base case of the recursion influences the type signatures). For general
recursion, this requires a complete set of simplification and evaluation
transformations to prevent infinite expansion. The main challenge here is how
to make this set complete, or at least define the constraints on possible
Cλash, currently). Since every function in Cλash describes the behaviour at
each cycle boundary, asynchronous behaviour cannot easily be fitted in.
currently no way for the compiler to know in which clock domain a function
should operate and since the clock signal is never explicit, there is also no
way to express circuits that synchronize various clock domains.
functions more generic event handlers, where the system generates a stream of
events (like \quote{clock up}, \quote{clock down}, \quote{input A changed},
\quote{reset}, etc.). When working with multiple clock domains, each domain
In this example, we see that every function takes an input of type
\hs{Event}. The function \hs{main} that takes the output of
because they rely on the caller to select the clock signal.
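A minimal sketch of such an event-driven component (the \hs{Event} type and
the \hs{counter} component are invented for illustration; \hs{foldl} stands in
for the stream of events the system would generate):

```haskell
-- Hypothetical event type: the surrounding system generates these.
data Event = ClockUp | ClockDown | Reset
  deriving (Show, Eq)

-- A simple counter that only acts on the rising clock edge.
counter :: Int -> Event -> Int
counter _ Reset     = 0
counter n ClockUp   = n + 1
counter n ClockDown = n          -- ignore falling edges

main :: IO ()
main = print (foldl counter 0 [Reset, ClockUp, ClockDown, ClockUp])  -- → 2
```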
This structure is similar to the event handling structure used to perform I/O
decides what to do depending on the current input event.
The following, slightly more complex example shows a system with two clock domains.
These options should be explored further to see if they provide feasible
methods for describing don't care conditions. There may also be entirely
different methods that work better.
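As one conceivable sketch (assuming don't care values are modelled with
Haskell's \hs{undefined}, which a synthesis tool would then be free to replace
with any value; the \hs{decode} function is invented for illustration):

```haskell
-- A decoder where some input patterns can never occur: marking the
-- impossible cases as don't care gives the synthesizer freedom to
-- pick whatever output minimizes the circuit.
decode :: Int -> (Bool, Bool)
decode 0 = (False, False)
decode 1 = (False, True)
decode 2 = (True,  False)
decode _ = undefined          -- don't care: this input never occurs

main :: IO ()
main = print (decode 2)  -- → (True,False)
```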