\chapter[chap:future]{Future work}
\section{Improved notation for hierarchical state}
The hierarchical state model requires quite a lot of boilerplate code for
unpacking and distributing the input state and collecting and repacking the
output state. \in{Example}[ex:NestedState] shows a simple composition of two
stateful functions, \hs{funca} and \hs{funcb}. The state annotation using the
\hs{State} newtype has been left out for clarity, and because the proposed
approach below cannot handle it (yet).
\startbuffer[NestedState]
type FooState = ( AState, BState )
foo :: Word -> FooState -> (FooState, Word)
foo inp s = (s', outb)
  where
    (sa, sb) = s
    (sa', outa) = funca inp sa
    (sb', outb) = funcb outa sb
    s' = (sa', sb')
\stopbuffer
\placeexample[here][ex:NestedState]{Simple function composing two stateful
functions.}{\typebufferhs{NestedState}}
Since state handling always follows strict rules, there is really no other way
in which this state handling can be done (you can of course move some things
around from the where clause to the pattern or vice versa, but the result
would effectively be the same). This makes it all the more sensible to hide
this boilerplate away. Doing so would incur no flexibility cost at all, since
there are no other ways that would work.
One particular notation in Haskell that seems promising is the \hs{do}
notation. It is meant to simplify monadic code by hiding away some
details. It allows one to write a list of expressions, which are composed
using the monadic \emph{bind} operator, written in Haskell as \hs{>>}. For
example, the snippet:

\starthaskell
do
  somefunc a
  otherfunc b
\stophaskell

will be desugared into:

\starthaskell
(somefunc a) >> (otherfunc b)
\stophaskell
The main reason to have the monadic notation is to be able to wrap
results of functions in a datatype (the \emph{monad}) that can contain
extra information, and to hide extra behaviour in the binding operators.

The \hs{>>=} operator allows extracting the actual result of a function
and passing it to another function. Let us illustrate this with an
example. The following snippet:

\starthaskell
do
  x <- somefunc a
  otherfunc x
\stophaskell

will be desugared into:

\starthaskell
(somefunc a) >>= (\x -> otherfunc x)
\stophaskell
The \hs{\\x -> ...} notation creates a lambda abstraction in Haskell,
binding the \hs{x} variable. Here, the \hs{>>=} operator is supposed
to extract whatever result \hs{somefunc} has and pass it to the lambda
expression created. This will probably not make the monadic notation
completely clear to a reader without prior experience with Haskell, but
it should suffice to follow the discussion below.
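For readers without Haskell experience, the desugaring can be made concrete
with the \hs{Maybe} monad. The following is a hypothetical, self-contained
sketch; the functions and values are made up purely for illustration:

```haskell
-- A made-up function in the Maybe monad: division that fails on zero.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- Sugared version: the do notation hides the binding operators.
sugared :: Int -> Maybe Int
sugared a = do
  x <- safeDiv 100 a
  safeDiv x 2

-- Desugared version: explicit use of >>= and a lambda abstraction.
desugared :: Int -> Maybe Int
desugared a = safeDiv 100 a >>= (\x -> safeDiv x 2)
```

Both versions behave identically: \hs{sugared 5} and \hs{desugared 5} both
evaluate to \hs{Just 10}, while \hs{sugared 0} evaluates to \hs{Nothing}
because the first bind short-circuits.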
The monadic notation could perhaps be used to compose a number of
stateful functions into another stateful computation, moving all the
boilerplate code into the \hs{>>} and \hs{>>=} operators. Because the
boilerplate is still there (it has not magically disappeared, just moved into
these functions), the compiler should still be able to compile
these descriptions without any special magic (though perhaps it should
always inline the binding operators to reveal the flow of values).
This highlights an important aspect of using a functional language for our
descriptions: we can use the language itself to provide abstractions of
common patterns, making our code smaller.
\subsection{Breaking out of the Monad}
However, simply using the monad notation is not as easy as it sounds. The
main problem is that the \hs{Monad} type class poses a number of limitations
on the bind operator \hs{>>}. Most importantly, it has the following type
signature:

\starthaskell
(>>) :: (Monad m) => m a -> m b -> m b
\stophaskell

This means that any expression in our composition must use the same
\hs{Monad} instance as its type; only the \quote{return} value can differ
between expressions.
Ideally, we would like the \hs{>>} operator to have a type like the
following:

\starthaskell
type Stateful s r = s -> (s, r)
(>>) :: Stateful s1 r1 -> Stateful s2 r2 -> Stateful (s1, s2) r2
\stophaskell
What we see here is that when we compose two stateful functions (whose inputs
have already been applied, leaving just the state argument to be applied), the
result is again a stateful function whose state is composed of the two
\emph{substates}. The return value is simply the return value of the second
function, discarding the first (to preserve that one, the \hs{>>=} operator
can be used).
There is a trick we can apply to change the signature of the \hs{>>} operator.
\small{GHC} does not require the bind operators to be part of the \hs{Monad}
type class, as long as it can use them to translate the do notation. This
means we can define our own \hs{>>} and \hs{>>=} operators, outside of the
\hs{Monad} type class. These do conflict with the existing methods of the
\hs{Monad} type class, so we should prevent \small{GHC} from loading those (and
all of the Prelude) by passing \type{-XNoImplicitPrelude} to \type{ghc}. This
is slightly inconvenient, but since we hardly use anything from the Prelude,
it is not a big problem. We might even be able to replace some of the
Prelude with hardware-translatable versions by doing this.
We can now define the following binding operators. For completeness, we also
supply the \hs{return} function, which is never called by \small{GHC}
implicitly, but can be called explicitly by a hardware description.
\starthaskell
(>>) :: Stateful s1 r1 -> Stateful s2 r2 -> Stateful (s1, s2) r2
f1 >> f2 = f1 >>= \_ -> f2

(>>=) :: Stateful s1 r1 -> (r1 -> Stateful s2 r2) -> Stateful (s1, s2) r2
f1 >>= f2 = \(s1, s2) -> let (s1', r1) = f1 s1
                             (s2', r2) = f2 r1 s2
                         in  ((s1', s2'), r2)

return :: r -> Stateful () r
return r = \s -> (s, r)
\stophaskell
As you can see, this closely resembles the boilerplate of unpacking state,
passing it to two functions and repacking the new state. With these
definitions, we could have written \in{example}[ex:NestedState] a lot
shorter; see \in{example}[ex:DoState]. In this example the type signature of
\hs{foo} is the same (though it is now written using the \hs{Stateful} type
synonym, it is still completely equivalent to the original: \hs{foo :: Word ->
FooState -> (FooState, Word)}).
Note that the \hs{FooState} type has changed (and so, indirectly, the type of
\hs{foo} as well). Since the state composition by \hs{>>} works on two
stateful functions at a time, the final state consists of nested two-tuples.
The final \hs{()} in the state originates from the fact that the \hs{return}
function has no real state, but is part of the composition. We could have left
out the return expression (and the \hs{outb <-} part) to make \hs{foo}'s return
value equal to \hs{funcb}'s, but this approach makes it clearer what is
happening.
\startbuffer[DoState]
type FooState = ( AState, (BState, ()) )
foo :: Word -> Stateful FooState Word
foo inp = do
  outa <- funca inp
  outb <- funcb outa
  return outb
\stopbuffer
\placeexample[here][ex:DoState]{Simple function composing two stateful
functions, using do notation.}
{\typebufferhs{DoState}}
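To check that these combinators behave as intended, they can be exercised in
plain Haskell. The following self-contained sketch hides the Prelude
operators instead of using \type{-XNoImplicitPrelude}, and the bodies of
\hs{funca} (a running sum) and \hs{funcb} (a delay register) are made up for
illustration, using \hs{Int} instead of \hs{Word}:

```haskell
import Prelude hiding ((>>), (>>=), return)

type Stateful s r = s -> (s, r)

-- The rebound binding operators, as proposed above.
(>>=) :: Stateful s1 r1 -> (r1 -> Stateful s2 r2) -> Stateful (s1, s2) r2
f1 >>= f2 = \(s1, s2) -> let (s1', r1) = f1 s1
                             (s2', r2) = f2 r1 s2
                         in  ((s1', s2'), r2)

(>>) :: Stateful s1 r1 -> Stateful s2 r2 -> Stateful (s1, s2) r2
f1 >> f2 = f1 >>= \_ -> f2

return :: r -> Stateful () r
return r = \s -> (s, r)

-- Two hypothetical stateful functions: a running sum and a delay register.
type AState = Int
type BState = Int

funca :: Int -> Stateful AState Int
funca inp s = (s + inp, s + inp)

funcb :: Int -> Stateful BState Int
funcb inp s = (inp, s)

-- The composition from the example, written with the rebound operators
-- (do notation would need -XRebindableSyntax, so we apply them directly).
type FooState = (AState, (BState, ()))

foo :: Int -> Stateful FooState Int
foo inp = funca inp >>= \outa ->
          funcb outa >>= \outb ->
          return outb
```

Starting from the all-zero state, \hs{foo 3} returns the old register
contents (0) and leaves the state at \hs{(3, (3, ()))}, showing the nested
two-tuple state discussed above.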
An important implication of this approach is that the order in which function
applications are written affects the state type. Fortunately, this problem can
be localized by consistently using type synonyms for state types, which should
prevent changes in other functions' source code when a function changes.
A less obvious implication of this approach is that the scope of variables
produced by each of these expressions (using the \hs{<-} syntax) is limited to
the expressions that come after it. This prevents values from flowing between
two functions (components) in both directions. For most \hs{Monad} instances
this is a requirement, but here it could have been different.
\todo{Add examples or reference for state synonyms}
\subsection{Alternative syntax}
Because of these typing issues, misusing Haskell's do notation is probably not
the best solution here. However, it does show that using fairly simple
abstractions, we could hide a lot of the boilerplate code. Extending
\small{GHC} with some new syntactic sugar similar to the do notation might be
a feasible solution.
\section[sec:future:pipelining]{Improved notation or abstraction for pipelining}
Since pipelining is a very common optimization for hardware systems, it should
be easy to specify a pipelined system. Since pipelining introduces quite a few
registers into an otherwise regular combinatoric system, we might look for
some way to abstract away some of the boilerplate for pipelining.
Something similar to the state boilerplate removal above might be appropriate:
abstract away some of the boilerplate code using combinators, then hide the
combinators behind special syntax. The combinators will be slightly different,
since there is a (typing) distinction between a pipeline stage and a pipeline
consisting of multiple stages. Also, it seems necessary to treat either the
first or the last pipeline stage differently, to prevent an off-by-one error
in the number of registers (similar to the extra \hs{()} state type
in \in{example}[ex:DoState], which is harmless there, but would be a problem
if it introduced an extra, useless pipeline stage).
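To make the register bookkeeping concrete, the following self-contained
sketch (not a proposal for the eventual syntax; all names are made up) models
the single register between two combinatoric stages as a piece of state that
is threaded from cycle to cycle:

```haskell
-- A pipelined computation: given the current register contents and this
-- cycle's input, produce the new register contents and this cycle's output.
type Pipe s i o = s -> i -> (s, o)

-- Compose two combinatoric stages with a single register in between.
-- The register holds a value of the intermediate type b.
stage2 :: (a -> b) -> (b -> c) -> Pipe b a c
stage2 f g r inp = (f inp, g r)

-- Simulate a pipeline over a list of per-cycle inputs, threading the
-- register contents from cycle to cycle.
simulate :: Pipe s i o -> s -> [i] -> [o]
simulate _ _ []     = []
simulate p s (i:is) = let (s', o) = p s i in o : simulate p s' is
```

Simulating \hs{stage2 (+1) (*2)} with an initial register of 0 over the
inputs \hs{[1, 2, 3]} yields \hs{[0, 4, 6]}: each output appears one cycle
after the corresponding input, reflecting the single pipeline register. The
hard part described above (a variable crossing \emph{several} stage
boundaries) is exactly what this two-stage combinator cannot express.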
This problem is slightly more complex than the problem we have seen before.
One significant difference is that each variable that crosses a stage boundary
needs a register. However, when a variable crosses multiple stage boundaries,
it must be stored for a longer period and should receive multiple registers.
Since we cannot find out from the combinator code where the result of the
combined values is used (at least not without using Template Haskell to
inspect the \small{AST}), there seems to be no easy way to find out how many
registers are needed.
There seem to be two obvious ways of handling this problem:

\startitemize
\item Limit the scope of each variable produced by a stage to the next
stage only. This means that any variables that are to be used in subsequent
stages should be passed on explicitly, which should allocate the required
number of registers.

This produces cumbersome code, where there is still a lot of explicitness
(though this could be hidden in syntactic sugar).
\todo{The next sentence is unclear}
\item Scope each variable over every subsequent pipeline stage and allocate
the maximum number of registers that \emph{could} be needed. This means we
will allocate registers that are never used, but those could be optimized
away later. This does mean we need some way to introduce a variable number
of variables (depending on the total number of stages), assign the output of
a different register to each (\eg, a different part of the state) and scope
a different one of them over each of the subsequent stages.

This also means that adding a stage to an existing pipeline will change
the state type of each of the subsequent pipeline stages, and the state type
of the added stage depends on the number of subsequent stages.

Properly describing this will probably also require quite explicit syntax,
meaning this is not feasible without some special syntax.
\stopitemize
Some other interesting issues include pipeline stages which are already
stateful, mixing pipelined with normal computation, and so on.
\section{Recursion}
The main problems with recursion have been described in
\in{section}[sec:recursion]. In the current implementation, recursion is
therefore not possible; instead, we rely on a number of implicitly
list-recursive built-in functions.
Since recursion is a very important and central concept in functional
programming, it would very much improve the flexibility and elegance of our
hardware descriptions if we could support (full) recursion.
For this, there are two main problems to solve:

\startitemize
\item For list recursion, how do we make a description type check? It is quite
possible that improvements in the \small{GHC} typechecker will make this
possible, though it will remain a challenge. Further advances in
dependent typing support for Haskell will probably help here as well.

\todo{Reference Christiaan and other type-level work
(http://personal.cis.strath.ac.uk/conor/pub/she/)}
\item For all recursion, there is the obvious challenge of deciding when
recursion is finished. For list recursion, this might be easier, since the
base case of the recursion influences the type signatures. For general
recursion, this requires a complete set of simplification and evaluation
transformations to prevent infinite expansion. The main challenge here is how
to make this set complete, or at least to define the constraints on possible
recursion that guarantee it will work.
\stopitemize
\todo{Reference Christiaan for loop unrolling?}
\section{Multiple clock domains and asynchronicity}
Cλash currently only supports synchronous systems with a single clock domain.
In a lot of real-world systems, both of these limitations pose problems.
There might be asynchronous events to which a system wants to respond. The
most obvious asynchronous event is of course a reset signal. Though a reset
signal can be synchronous, that is less flexible (and currently a hassle to
describe in Cλash). Since every function in Cλash describes the behaviour at
each cycle boundary, we really cannot fit in asynchronous behaviour easily.
For the same reason, multiple clock domains cannot be easily supported. There
is currently no way for the compiler to know in which clock domain a function
should operate and, since the clock signal is never explicit, there is also no
way to express circuits that synchronize various clock domains.
A possible way to express more complex timing behaviour would be to make
functions more generic event handlers, where the system generates a stream of
events (like \quote{clock up}, \quote{clock down}, \quote{input A changed},
\quote{reset}, etc.). When working with multiple clock domains, each domain
could get its own clock events.
As an example, we would have something like the following:
\starthaskell
data Event = ClockUp | Reset | ...

type MainState = State Word

initstate :: MainState
initstate = State 0

main :: Word -> Event -> MainState -> (MainState, Word)
main inp event (State acc) = (State acc', acc')
  where
    acc' = case event of
      -- On a reset signal, reset the accumulator and output
      Reset -> 0
      -- On a normal clock cycle, accumulate the result of func
      ClockUp -> acc + (func inp event)
      -- On any other event, leave state and output unchanged
      _ -> acc

-- func is some combinatoric expression
func :: Word -> Event -> Word
func inp _ = inp * 2 + 3
\stophaskell
In this example, we see that every function takes an input of type
\hs{Event}. The function \hs{main} takes the output of \hs{func} and
accumulates it on every clock cycle. On a reset signal, the
accumulator is reset. The function \hs{func} is just a combinatoric
function, with no synchronous elements. We can see this because there is no
state and the event input is completely ignored. If the compiler is smart
enough, we could even leave the event input out for functions that do not use
it, either because they are completely combinatoric (as in this example), or
because they rely on the caller to select the clock signal.
This structure is similar to the event handling structure used to perform I/O
in languages like Amanda. \todo{ref} There is a top-level case expression that
decides what to do depending on the current input event.
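The event-driven accumulator can be simulated in plain Haskell. The sketch
below unwraps the \hs{State} newtype to a plain \hs{Int} and assumes that a
reset clears the accumulator to zero (both simplifications of this sketch,
not part of the prototype):

```haskell
data Event = ClockUp | ClockDown | Reset deriving Show

-- The combinatoric expression from the example.
func :: Int -> Event -> Int
func inp _ = inp * 2 + 3

-- The accumulator (main from the example), with the State wrapper removed.
main' :: Int -> Event -> Int -> (Int, Int)
main' inp event acc = (acc', acc')
  where
    acc' = case event of
      Reset   -> 0                      -- assumed reset value
      ClockUp -> acc + func inp event   -- accumulate on a clock event
      _       -> acc                    -- ignore all other events

-- Feed a stream of (input, event) pairs through the system.
run :: Int -> [(Int, Event)] -> [Int]
run _ []             = []
run acc ((i, e):res) = let (acc', out) = main' i e acc
                       in  out : run acc' res
```

Feeding \hs{run 0 [(1, ClockUp), (2, ClockUp), (0, Reset), (3, ClockUp)]}
produces \hs{[5, 12, 0, 9]}: the accumulator adds \hs{func}'s output on each
clock event and is cleared by the reset.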
A slightly more complex example shows a system with two clock domains.
There is no real integration between the clock domains (there is one input and
one output for each clock domain), but it does show how multiple clocks can be
handled.
\starthaskell
data Event = ClockUpA | ClockUpB | ...

type MainState = State (SubAState, SubBState)

initstate :: MainState
initstate = State (initsubastate, initsubbstate)

main :: Word -> Word -> Event -> MainState -> (MainState, Word, Word)
main inpa inpb event (State (sa, sb)) = (State (sa', sb'), outa, outb)
  where
    -- Only update the substates if the corresponding clock has an up
    -- transition
    (sa', sb') = case event of
      ClockUpA -> (sa'', sb)
      ClockUpB -> (sa, sb'')
      _ -> (sa, sb)
    -- Always call suba and subb, so we can always have a value for our output
    -- ports
    (sa'', outa) = suba inpa sa
    (sb'', outb) = subb inpb sb

type SubAState = State ...
suba :: Word -> SubAState -> (SubAState, Word)
suba = ...

type SubBState = State ...
subb :: Word -> SubBState -> (SubBState, Word)
subb = ...
\stophaskell
Note that in the above example the \hs{suba} and \hs{subb} functions are
\emph{always} called, to get at their combinatoric outputs. The new state
is calculated as well, but only saved when the right clock has an up
transition.
As you can see, there is some code duplication in the case expression that
selects the right clock. One of the advantages of an explicit approach like
this is that some of this duplication can be extracted away into helper
functions. For example, we could imagine a \hs{select_clock} function, which
takes a stateful function that is not aware of events, and changes it into a
function that only updates its state on a specific (clock) event.
\starthaskell
select_clock :: Event
             -> (input -> State s -> (State s, output))
             -> (input -> State s -> Event -> (State s, output))
select_clock clock func inp state event = (state', out)
  where
    state' = if clock == event then state'' else state
    (state'', out) = func inp state

main :: Word -> Word -> Event -> MainState -> (MainState, Word, Word)
main inpa inpb event (State (sa, sb)) = (State (sa', sb'), outa, outb)
  where
    (sa', outa) = (select_clock ClockUpA suba) inpa sa event
    (sb', outb) = (select_clock ClockUpB subb) inpb sb event
\stophaskell
As you can see, this can greatly reduce the length of the main function, while
increasing the readability. As you might also have noted, the \hs{select_clock}
function takes any stateful function from the current Cλash prototype and
turns it into an event-aware function!
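This claim can be checked with a small self-contained sketch, again with the
\hs{State} wrapper left out for brevity, and with a made-up \hs{counter}
function standing in for an arbitrary stateful function:

```haskell
data Event = ClockUpA | ClockUpB | Reset deriving (Eq, Show)

-- select_clock from the example, with the State wrapper left out.
selectClock :: Eq e
            => e
            -> (i -> s -> (s, o))
            -> (i -> s -> e -> (s, o))
selectClock clock f inp state event = (state', out)
  where
    -- Only commit the new state when the selected clock event occurs.
    state' = if clock == event then state'' else state
    (state'', out) = f inp state

-- A simple stateful running sum, completely unaware of events.
counter :: Int -> Int -> (Int, Int)
counter inp s = (s + inp, s + inp)

-- The counter's state only advances on ClockUpA events.
counterA :: Int -> Int -> Event -> (Int, Int)
counterA = selectClock ClockUpA counter
```

\hs{counterA 5 0 ClockUpA} yields \hs{(5, 5)} (the state advances), while
\hs{counterA 5 5 ClockUpB} yields \hs{(5, 10)}: the combinatoric output is
still computed, but the state is left untouched.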
Going along this route for more complex timing behaviour seems promising,
especially since it seems possible to express very advanced timing behaviour,
while still allowing simple functions without any extra overhead when complex
behaviour is not needed.
The main cost of this approach will probably be extra complexity in the
compiler: the paths that (state) data can take become very non-trivial, and it
is probably hard to properly analyze these paths and produce the
intended \VHDL description.
\section{Multiple cycle descriptions}
In the current Cλash prototype, every description is a single-cycle
description. In other words, every function describes behaviour for each
separate cycle and any recursion (or folds, maps, etc.) is expanded in space.
Sometimes, you might want to have a complex description that could possibly
take multiple cycles. Some examples include:

\startitemize
\item Some kind of map or fold over a list that could be expanded in time
instead of space. This would result in a function that describes n cycles
instead of just one, where n is the length of the list.
\item A large combinatoric expression that would introduce a very long
combinatoric path and thus limit the clock frequency. Such an expression could
be broken up into multiple stages, which effectively results in a pipelined
system (see also \in{section}[sec:future:pipelining]) with a known delay.
There should probably be some way for the developer to specify the cycle
division of the expression, since automatically deciding on such a division
is too complex and contains too many tradeoffs, at least initially.
\item Unbounded recursion. It is not possible to expand unbounded recursion
(or even recursion with a depth that is not known at compile time) in space,
since there is no way to tell how much hardware to create (it might even be
infinite).

When expanding infinite recursion over time, each step of the recursion can
simply become a single clock cycle. When expanding bounded but unknown
recursion, we probably need to add an extra data valid output bit or
something similar.
\stopitemize
Apart from translating each of these multiple-cycle descriptions into a
per-cycle description, we also need to somehow match the type signature of the
multiple-cycle description to the type signature of the single-cycle
description that the rest of the system expects (assuming that the rest of the
system is described in the \quote{normal} per-cycle manner). For example, an
infinitely recursive expression typically has the return type \lam{[a]}, while
the rest of the system would expect just \lam{a} (since the recursive
expression generates just a single element each cycle).
Naively, this matching could be done using a (built-in) function with a
signature like \lam{[a] -> a}, which also serves as an indicator to the
compiler that some expanding over time is required. However, this poses a
problem for simulation: how will our Haskell implementation of this magical
built-in function know which element of the input list to return? This
obviously depends on the current cycle number, but there is no way for this
function to know the current cycle without breaking all kinds of safety and
purity properties. Making this function have some state input and output could
help, though this state is not present in the actual hardware (or perhaps
there is some state, if there are values passed from one recursion step to the
next, but how can the type of that state be determined by the typechecker?).
It seems that this is the most pressing problem for multicycle descriptions:
how to interface with the rest of the system, without sacrificing safety and
simulation capabilities?
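One conceivable (and admittedly unsatisfying) simulation strategy is to make
the hidden state of this built-in function an explicit cycle counter. The
sketch below is purely illustrative and does not solve the typechecking
question raised above:

```haskell
-- A stream consumer with an explicit cycle counter in its state: each
-- cycle it returns the next element of the (possibly infinite) list
-- produced by a recursive description.
streamHead :: [a] -> Int -> (Int, a)
streamHead xs cycleNo = (cycleNo + 1, xs !! cycleNo)

-- A made-up, infinitely recursive description producing one element
-- per recursion step.
counts :: Int -> [Int]
counts n = n : counts (n + 1)

-- Simulate k cycles of consuming the stream, threading the counter.
runCycles :: Int -> [Int]
runCycles k = go 0 k
  where
    go _ 0 = []
    go s n = let (s', x) = streamHead (counts 10) s
             in  x : go s' (n - 1)
```

Running three cycles against the infinite stream \hs{counts 10} yields
\hs{[10, 11, 12]}, one element per cycle; but the counter state has no
hardware counterpart, which is exactly the mismatch described above.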
\section{Higher order values in state}
Another interesting idea is putting a higher order value inside a function's
state value. Since we can use higher order values anywhere, why not in the
state?

As a (contrived) example, consider the following accumulator:
\starthaskell
-- The accumulator function that takes a word and returns a new accumulator
-- and the result so far. This is the function we want to put inside the
-- state. (A recursive type synonym is not valid Haskell, so the
-- continuation is wrapped in a newtype.)
newtype Acc = Acc (Word -> (Acc, Word))

acc :: Word -> (Acc, Word)
acc = \a -> (Acc (\b -> acc ( a + b )), a)

main :: Word -> State Acc -> (State Acc, Word)
main a (State (Acc f)) = (State s', out)
  where (s', out) = f a
\stophaskell
This code uses a function as its state, which implicitly contains the value
accumulated so far. This is a fairly trivial example that would be easier to
write with a simple \hs{Word} state value, but for more complex descriptions
this style might pay off. Note that in a way we are using the
\emph{continuation passing style} of writing functions, where we pass a
sort of \emph{continuation} from one cycle to the next.
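With the continuation wrapped in a \hs{newtype}, this idea can be simulated
over a list of per-cycle inputs. The following self-contained sketch uses
\hs{Int} instead of \hs{Word} and adds a made-up simulation driver:

```haskell
-- The function-valued state: each cycle it consumes an input and yields
-- a new continuation plus the accumulated result so far.
newtype Acc = Acc (Int -> (Acc, Int))

acc :: Int -> (Acc, Int)
acc a = (Acc (\b -> acc (a + b)), a)

-- One clock cycle: apply the continuation stored in the state.
step :: Int -> Acc -> (Acc, Int)
step a (Acc f) = f a

-- Thread the function-valued state through a list of inputs.
runAcc :: [Int] -> [Int]
runAcc = go (Acc acc)
  where
    go _ []     = []
    go s (x:xs) = let (s', out) = step x s in out : go s' xs
```

\hs{runAcc [1, 2, 3]} evaluates to \hs{[1, 3, 6]}: each cycle's continuation
carries the sum accumulated so far.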
Our normalization process completely removes higher order values inside a
function by moving applications and higher order values around so that every
higher order value will eventually be fully applied. However, if we were to
put a higher order value inside our state, the path from the higher order
value's definition to its application would run through the state, which is
external to the function. A higher order value defined in one cycle is not
applied until a later cycle. Our normalization process only works within a
single function, so it cannot remove this use of a higher order value.
However, we cannot leave a higher order value in our state, since it is
impossible to generate a register containing a higher order value: we simply
cannot translate a function type to a hardware type. To solve this, we must
replace the higher order value inside our state with some other value
representing these higher order values.
One obvious approach here is to use an algebraic datatype where each
constructor represents one possible higher order value that can end up in the
state and each constructor has an argument for each free variable of the
replaced higher order value. This allows us to completely reconstruct the
higher order value at the spot where it is applied, so the higher order value
no longer needs to be stored in the state.
This approach is commonly known as the \quote{Reynolds approach to
defunctionalization}, first described by J.C. Reynolds, and it seems to apply
well to this situation. One note here is that Reynolds' approach puts all the
higher order values in a single datatype. For a typed language, we will at
least have to have a single datatype for each function type, since we cannot
mix them. It would be even better to split these datatypes a bit further, so
that such a datatype will never hold a constructor that is never used for a
particular state variable. This separation is probably a non-trivial problem,
however.
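For the accumulator above, Reynolds-style defunctionalization is particularly
simple: only one higher order value, \hs{\\b -> acc (a + b)}, can ever end up
in the state, and its only free variable is \hs{a}. The following
self-contained sketch (names made up, \hs{Int} instead of \hs{Word}) replaces
it with a single-constructor datatype and an apply function:

```haskell
-- Each constructor stands for one higher order value that can end up in
-- the state, with one field per free variable. Here there is only one:
-- AccD a represents \b -> acc (a + b).
data AccD = AccD Int deriving (Eq, Show)

-- Reconstruct (apply) the represented function at its use site.
applyAcc :: AccD -> Int -> (AccD, Int)
applyAcc (AccD a) b = let total = a + b
                      in  (AccD total, total)

-- The same simulation as before, but with a first-order state.
runD :: [Int] -> [Int]
runD = go (AccD 0)
  where
    go _ []     = []
    go s (x:xs) = let (s', out) = applyAcc s x in out : go s' xs
```

\hs{runD [1, 2, 3]} again yields \hs{[1, 3, 6]}, but the state is now the
first-order value \hs{AccD}, whose single \hs{Int} field translates to an
ordinary register.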
\todo{Reference "Definitional interpreters for higher-order programming
languages":
http://portal.acm.org/citation.cfm?id=805852\&dl=GUIDE\&coll=GUIDE\&CFID=58835435\&CFTOKEN=81856623}
\section{New language}
During the development of the prototype, it has become clear that Haskell is
not the perfect language for this work. There are two main problems:

\startitemize
\item Haskell is not expressive enough. Haskell is still quite new in the area
of dependent typing, something we use extensively. Also, Haskell has some
special syntax to write monadic composition and arrow composition, which is
well suited to normal Haskell programs. However, for our hardware
descriptions, we could use similar, but different, syntactic sugar (as we
have seen above).
\item Haskell is too expressive. There are some things that Haskell can
express which we can never properly translate: there are certain types (both
primitive types and certain type constructions like recursive types) we can
never translate, as well as certain forms of recursion.
\stopitemize
It might be good to re-evaluate the choice of language for Cλash; perhaps
there are other languages that are better suited to describe hardware, now
that we have learned a lot about it.
Perhaps Haskell (or some other language) can be extended by using a
preprocessor. There has been some (proof of concept) work on the Strathclyde
Haskell Enhancement (\small{SHE}) preprocessor for type-level programming;
perhaps we can extend that, or create something similar for hardware-specific
notation.
It is not unlikely that the most flexible way forward would be to define a
completely new language with exactly the features that are needed. This is of
course an enormous effort, which should not be taken lightly.
\section{Don't care values}
A powerful value in \VHDL is the \emph{don't care} value, written as
\type{'-'}. This value tells the compiler that you do not really care which
value is assigned to a signal, allowing the compiler to make some
optimizations. Since choice in hardware is often implemented using
a collection of logic gates instead of multiplexers only, synthesizers can
often reduce the amount of hardware needed by smartly choosing values for
these don't care cases.
There is not really anything comparable with don't care values in normal
programming languages. The closest thing is an undefined or uninitialized
value, though those are usually associated with error conditions.
It would be useful if Cλash also had some way to specify don't care values.
When looking at the Haskell type system, there are really two ways to do
this:

\startitemize
\item Add a special don't care value to every datatype. This includes the
obvious \hs{Bit} type, but will also need to include every user-defined
type. An exception can be made for vectors and product types, since those
can simply use the don't care values of the types they contain.

This would also require some kind of \quote{don't careable} type class
that allows each type to specify what its don't care value is. The
compiler must then recognize this constant and replace it with don't care
values in the final \VHDL code.

This is of course a very intrusive solution. Every type must become a member
of this type class, and there is now some member in every type that is a
special don't care value. Guaranteeing the obvious don't care semantics
also becomes harder, since every pattern match or case expression must now
also take care of the don't care value (this might actually be an
advantage, since it forces designers to specify how to handle don't care
values for different operations).

\item Use the special \hs{undefined}, or \emph{bottom} value present in
Haskell. This is a value that is a member of all types automatically, without
any explicit declarations.

Any expression that requires evaluation of an undefined value
automatically becomes undefined itself (or rather, there is some exception
mechanism). Since Haskell is lazy, this means that whenever it tries to
evaluate undefined, it is apparently required for determining the output
of the system. This property is useful, since typically a don't care
output is used when some output is not valid and should not be read. If
it is in fact not read, it should never be evaluated and simulation should
proceed without problems.

In practice, this works out less ideally. In particular, pattern matching is
not always smart enough to deal with undefined. Consider the following
definition of an \hs{and} operator:

\starthaskell
and :: Bit -> Bit -> Bit
and Low _ = Low
and High x = x
\stophaskell

Using this \hs{and} operation on an undefined (don't care) and a \hs{Low}
value should always return a \hs{Low} value: the result does not depend on the
value chosen for the don't care value. However, though applying the above
\hs{and} function to \hs{Low} and \hs{undefined} results in exactly that
behaviour, the result is \hs{undefined} when the arguments are swapped.
This is because the first pattern forces the first argument to be
evaluated. If it is \hs{undefined}, evaluation is halted and an exception
is shown, which is not what is intended.
\stopitemize
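This behaviour is easy to demonstrate in a self-contained sketch (the
operator is named \hs{and'} here to avoid clashing with the Prelude):

```haskell
import Control.Exception (SomeException, evaluate, try)

data Bit = High | Low deriving (Eq, Show)

-- The and operator from the example: matching on the first argument
-- forces its evaluation.
and' :: Bit -> Bit -> Bit
and' Low  _ = Low
and' High x = x

-- and' undefined Low raises an exception, because the first pattern
-- forces the undefined first argument; try/evaluate captures this.
forcedResult :: IO (Either SomeException Bit)
forcedResult = try (evaluate (and' undefined Low))
```

\hs{and' Low undefined} evaluates to \hs{Low}, because the second argument is
never forced, while evaluating \hs{and' undefined Low} raises an exception,
exactly the asymmetry described above.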
These options should be explored further to see if they provide feasible
methods for describing don't care conditions. Possibly there are completely
different methods which work better.
\section{Correctness proofs of the normalization system}
As stated in \in{section}[sec:normalization:properties], there are a
number of properties we would like to see verified about the
normalization system. In particular, the \emph{termination} and
\emph{completeness} of the system would be good candidates for future
research. Specifying formal semantics for the Core language in
order to verify the \emph{soundness} of the system would be an even more
ambitious goal.

% vim: set sw=2 sts=2 expandtab: