+but I will highlight the most important challenges and tradeoffs.
+
+\subsubsection{Tradeoffs}
+In general, the most important tradeoff seems to be between flexibility and
+everything else: code size, performance, complexity and hardware area. This
+flexibility is defined in terms of the possible instructions, the available
+interconnections, register file sizes, etc.
+
+An important reason to be flexible is programmability: if the hardware is
+regular, building a compiler that produces optimal code becomes a lot easier.
+On the other hand, the compiler also limits flexibility: if the hardware
+offers flexibility that the compiler will never use, it is better to save
+area and complexity by making the hardware less flexible. Exactly for this
+reason, it is important to develop the hardware and the supporting software in
+parallel, instead of the hardware-first, software-later approach used with the
+initial Montium. This allows for a much better balanced and more usable design,
+without any unused extras.
+
+\subsubsection{Inner loop pipelining}
+When trying to improve runtime performance, the main focus is on
+optimizing loops, and inner loops (loops that contain no other loops) in
+particular. Since inner loops are executed more often than any other loops or
+code, optimizing them yields the largest gains. Moreover, inner loops can make
+the best use of the parallel processing power of the Montium, because they can
+be software pipelined.
+
+Software pipelining means that the compiler emits code that performs
+operations belonging to different iterations of the original loop during the
+same cycle. Since data dependencies within a loop body usually severely limit
+the number of operations that can be done in parallel, pipelining allows the
+second (and further) iterations to start well before the first iteration is
+done. This is achieved by dividing the loop body into a number of stages that
+would normally be executed sequentially. These stages are then executed in
+parallel, but for different iterations (i.e., stage 2 of iteration $i$ runs
+while stage 1 of iteration $i+1$ runs).
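The transformation described above can be sketched in Python (a purely illustrative model; the stage split, the loop body, and all names are assumptions, not the actual Montium schedule). A three-operation loop body is divided into three stages, and each "cycle" of the pipelined version runs one stage for each of up to three different iterations:

```python
def sequential(xs):
    """Reference version: one full iteration at a time."""
    out = []
    for x in xs:
        a = x + 1      # stage 1
        b = a * 2      # stage 2
        out.append(b)  # stage 3
    return out

def pipelined(xs):
    """Software-pipelined version: per cycle, up to three stages
    run in parallel, each for a different iteration."""
    n = len(xs)
    a = [None] * n
    b = [None] * n
    out = [None] * n
    # n iterations with 3 stages finish in n + 2 cycles.
    for t in range(n + 2):
        if 0 <= t - 2 < n:
            out[t - 2] = b[t - 2]        # stage 3 of iteration t-2
        if 0 <= t - 1 < n:
            b[t - 1] = a[t - 1] * 2      # stage 2 of iteration t-1
        if 0 <= t < n:
            a[t] = xs[t] + 1             # stage 1 of iteration t
    return out
```

For four iterations, cycles 0 and 1 form the prologue (not all stages active yet), cycles 2 and 3 the kernel (all three stages active), and cycles 4 and 5 the epilogue.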
+
+This approach allows a loop to run a lot faster than executing a
+single iteration at a time. However, since the instructions for the
+first and last few iterations (the prologue and epilogue) are distinctly
+different from the loop ``kernel'', the number of instructions needed for
+a pipelined loop can easily increase a lot.
+
+However, all pipelined loops share a very distinct structure: first stage 1
+runs, then stages 1+2, then stages 1+2+3, etc., until all stages run at the
+same time, and similarly (in reverse) for the epilogue. Also, the instructions
+in the prologue and epilogue are a strict subset of the instructions in the
+kernel. By adding some hardware support for exactly this structure, the
+code size increase for the prologue and epilogue can be effectively
+reduced to a fixed number of instructions (which take the number of stages as a
+parameter and use preloaded instructions with explicit stage annotations).
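The stage-enable logic such hardware support boils down to can be sketched as follows (the function and its interface are assumptions for illustration, not the actual Montium design). Since stage $s$ of iteration $i$ runs at cycle $i+s$, a stage is enabled at cycle $t$ exactly when $0 \le t - s < n$ for $n$ iterations, which makes the prologue and epilogue mere special cases of the kernel instruction:

```python
def stage_enables(t, n_stages, n_iters):
    """Which stages of the single kernel instruction are active
    at cycle t? Prologue and epilogue fall out automatically."""
    return [0 <= t - s < n_iters for s in range(n_stages)]
```

For three stages and four iterations, cycle 0 enables only stage 1 (prologue), cycle 2 enables all stages (kernel), and cycle 5 enables only the last stage (epilogue), so no separate prologue or epilogue instructions need to be stored.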
+
+The tradeoff here is that this hardware is only usable for these pipelined
+inner loops; any other code will leave this extra hardware unused. However,
+since pipelined inner loops appear to be very common, this should not be a
+problem at all.
+
+\subsubsection{Code compression}
+Another very important tradeoff concerns code size. On the old hardware, a lot
+of flexibility in the original code was achieved by using inline functions
+(since the hardware has only very limited support for code reuse through
+function calls). This resulted in a lot of code duplication, which was
+compensated for by using two-level configuration registers to reuse (partial)
+instructions nonetheless (this still needs a lot of sequencer instructions, but
+those are a lot smaller than the full instruction).
+
+On the new hardware, however, function calls are more powerful, which should
+lead to a lot less code duplication. For this reason, putting every instruction
+in configuration registers might actually take more space instead of less. It
+should be noted that the configuration registers of the old Montium are
+effectively a compiler controlled cache that is mandatory and static
+(instructions must be in the cache and the cache cannot be modified at runtime).
+By lifting these limitations, we get a cache that is a lot more flexible.
+However, since it is compiler controlled (as opposed to letting the hardware
+decide what ends up in the cache), the performance of the code remains very
+predictable (unlike with traditional caches). Additionally, this has the
+advantage that the cache size is no longer an upper bound on the program size;
+only the instruction memory (which can be a lot bigger) is.
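The difference with the old mandatory, static scheme can be made concrete with a minimal Python model (all names and the interface are hypothetical; this only illustrates the caching discipline, not actual hardware behavior). The compiler decides what is loaded into the cache and when, and instructions can also execute directly from instruction memory, bypassing the cache:

```python
class ConfigCache:
    """Compiler-controlled instruction cache: fills are explicit
    instructions scheduled by the compiler, not hardware decisions."""
    def __init__(self, n_slots):
        self.slots = [None] * n_slots

    def load(self, slot, instr):
        # Compiler-scheduled fill; may happen at any point at runtime,
        # unlike the old static configuration registers.
        self.slots[slot] = instr

    def execute_cached(self, slot):
        # Short sequencer instruction: just an index into the cache.
        return self.slots[slot]

def execute_direct(instr):
    # Long sequencer instruction carrying the full instruction word,
    # so program size is bounded by instruction memory, not cache size.
    return instr
```

Because every load and lookup is decided at compile time, execution timing stays fully predictable, which is the property lost with conventional hardware-managed caches.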
+
+The tradeoff here is that the sequencer instructions get a lot bigger,
+since they need to contain a full instruction word (which would be preloaded
+into the CRs in the old design), which can be up to a few hundred bits long.
+Since not all sequencer instructions need to be this long (those executing out
+of the cache should be a lot shorter), different instruction lengths should be
+supported. Additionally, some smart compression techniques should be applied to
+those long instruction words, though that can only be done once there is more
+application code to gather statistics about.
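One straightforward way to support two instruction lengths is a leading tag bit, sketched below (the field widths and names are invented for illustration; the real encoding would depend on the cache size and instruction word width chosen):

```python
def encode_short(cache_index):
    """Short form: tag bit 0 plus a 7-bit cache index (8 bits total).
    The 7-bit width is an assumption, sized for a 128-entry cache."""
    return "0" + format(cache_index, "07b")

def encode_long(instr_word_bits):
    """Long form: tag bit 1 followed by the full instruction word,
    which may be a few hundred bits long."""
    return "1" + instr_word_bits
```

The decoder only needs the first bit to know whether to fetch a cache entry or to consume a full inline instruction word, which keeps the common cached case cheap.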