Out-of-order execution

In computer engineering, out-of-order execution (or more formally dynamic execution) is a paradigm used in most high-performance central processing units to make use of instruction cycles that would otherwise be wasted. In this paradigm, a processor executes instructions in an order governed by the availability of input data and execution units,[1] rather than by their original order in a program.[2] In doing so, the processor can avoid being idle while waiting for the preceding instruction to complete and can, in the meantime, process the next instructions that are able to run immediately and independently.[3]

History

Out-of-order execution is a restricted form of data flow computation, which was a major research area in computer architecture in the 1970s and early 1980s.

Important academic research in this subject was led by Yale Patt and his HPSm simulator.[4] A paper by James E. Smith and A. R. Pleszkun, published in 1985, completed the scheme by describing how the precise behavior of exceptions could be maintained in out-of-order machines.

Arguably the first machine to use out-of-order execution was the CDC 6600 (1964), which used a scoreboard to resolve conflicts. However, the 6600 lacked WAW conflict handling, choosing instead to stall; Thornton termed this situation a "First Order Conflict".[5] While it had both RAW conflict resolution (termed "Second Order Conflict"[6]) and WAR conflict resolution (termed "Third Order Conflict"[7]), which together enabled substantial out-of-order execution, the 6600 did not have precise exception handling. An early and limited form of branch prediction was possible for branches to locations on what was termed the "Instruction Stack", which was limited to a depth of seven words from the program counter.[8]
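In modern terminology, Thornton's three conflict classes correspond to the standard data-hazard taxonomy of read-after-write (RAW), write-after-read (WAR), and write-after-write (WAW) hazards. The following minimal sketch classifies a pair of instructions (the tuple encoding is invented here for illustration and is not CDC 6600 notation):

```python
# Classify the data hazards between two instructions in program order.
# Each instruction is modelled as (destination register, source registers);
# this encoding is invented for illustration and is not CDC 6600 notation.

def classify_hazards(first, second):
    """Return the hazards 'second' has with 'first' (program order: first, then second)."""
    first_dst, first_srcs = first
    second_dst, second_srcs = second
    hazards = []
    if first_dst in second_srcs:
        hazards.append("RAW")   # read after write: Thornton's "Second Order Conflict"
    if second_dst in first_srcs:
        hazards.append("WAR")   # write after read: "Third Order Conflict"
    if first_dst == second_dst:
        hazards.append("WAW")   # write after write: "First Order Conflict" (the 6600 stalls here)
    return hazards

# r1 = r2 + r3, then r4 = r1 * r5: the second instruction reads r1 -> RAW
print(classify_hazards(("r1", {"r2", "r3"}), ("r4", {"r1", "r5"})))  # ['RAW']
# r1 = r2 + r3, then r1 = r4 * r5: both write r1 -> WAW
print(classify_hazards(("r1", {"r2", "r3"}), ("r1", {"r4", "r5"})))  # ['WAW']
```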

About three years later, the IBM System/360 Model 91 (1966) introduced Tomasulo's algorithm, which makes full out-of-order execution possible. In 1990, IBM introduced the first out-of-order microprocessor, the POWER1, although its out-of-order execution was limited to floating-point instructions (as was also the case on the Model 91[9]).

In the 1990s, out-of-order execution became more common and was featured in the IBM/Motorola PowerPC 601 (1993), Fujitsu/HAL SPARC64 (1995), Intel Pentium Pro (1995), MIPS R10000 (1996), HP PA-8000 (1996), AMD K5 (1996), and DEC Alpha 21264 (1996). Notable exceptions to this trend include the Sun UltraSPARC, HP/Intel Itanium, Intel Atom until the Silvermont architecture,[10] and the IBM POWER6.

The high logical complexity of the out-of-order technique is the reason that it did not reach mainstream machines until the mid-1990s. Many low-end processors meant for cost-sensitive markets still do not use this paradigm due to the large silicon area required for its implementation. Low power usage is another design goal that is harder to achieve with an out-of-order execution (OoOE) design.

Basic concept

To appreciate out-of-order execution, it is useful to first describe in-order execution so that the two can be compared. Instructions cannot be completed instantaneously: they take time (multiple cycles), so results lag behind where they are needed. An in-order processor still has to keep track of dependencies, but its approach is quite unsophisticated: it stalls every time. Out-of-order execution uses much more sophisticated data-tracking techniques, as described below.

In-order processors

In earlier processors, the processing of instructions is performed in an instruction cycle normally consisting of the following steps:

  1. Instruction fetch.
  2. If input operands are available (in processor registers, for instance), the instruction is dispatched to the appropriate functional unit. If one or more operands are unavailable during the current clock cycle (generally because they are being fetched from memory), the processor stalls until they are available.
  3. The instruction is executed by the appropriate functional unit.
  4. The functional unit writes the results back to the register file.

Often, an in-order processor keeps a simple "bit vector" recording which registers the pipeline will (eventually) write to. If any input operand has the corresponding bit set in this vector, the instruction stalls. Essentially, the vector performs a greatly simplified role of protecting against register hazards. Thus, where in-order execution uses a one-dimensional vector for hazard avoidance, out-of-order execution uses two-dimensional matrices.[citation needed]
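A minimal sketch of this bit-vector check, assuming a toy machine in which each in-flight instruction marks its destination register as busy (the machine model and all names are invented for illustration, not drawn from any specific processor):

```python
# Minimal sketch of in-order issue using a busy "bit vector" over the
# registers.  The machine model and all names are invented for illustration.

NUM_REGS = 8
busy = [False] * NUM_REGS   # True if an in-flight instruction will write the register

def try_issue(dst, srcs):
    """Issue only if no needed register has a pending write; otherwise stall."""
    if any(busy[r] for r in srcs) or busy[dst]:
        return False            # stall: an operand (or the destination) is not safe
    busy[dst] = True            # record the pending write to dst
    return True

def writeback(dst):
    busy[dst] = False           # the result has arrived; clear the bit

busy[2] = True                  # a load into r2 is still in flight
print(try_issue(3, [1, 2]))     # False: r3 = r1 + r2 stalls on busy r2
writeback(2)                    # the load completes
print(try_issue(3, [1, 2]))     # True: operands ready, the instruction issues
```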

Out-of-order processors

This new paradigm breaks up the processing of instructions into these steps:

  1. Instruction fetch.
  2. Instruction dispatch to an instruction queue (also called instruction buffer or reservation stations).
  3. The instruction waits in the queue until its input operands are available. The instruction can leave the queue before older instructions.
  4. The instruction is issued to the appropriate functional unit and executed by that unit.
  5. The results are queued.
  6. Only after all older instructions have had their results written back to the register file is this result written back to the register file. This is called the graduation or retire stage; a toy model of these steps follows the list.
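The following toy model illustrates steps 2 through 6 under strong simplifying assumptions (fixed, known latencies; unlimited functional units; no register renaming); all names are invented for illustration:

```python
# Toy model of out-of-order issue with in-order retirement.
# Simplifying assumptions (for illustration only): latencies are fixed and
# known, functional units are unlimited, and registers are not renamed.

from heapq import heappush, heappop

def run(program):
    """program: list of (name, dst, srcs, latency) tuples in program order."""
    ready_regs = {"r0"}                 # r0 holds a value from the start
    waiting = list(enumerate(program))  # step 2: the instruction queue
    in_flight = []                      # (finish_cycle, index) min-heap
    done = set()                        # indices that have executed
    retire_ptr = 0                      # next instruction to retire, in order
    cycle = 0
    while retire_ptr < len(program):
        cycle += 1
        # Step 4: complete instructions whose latency has elapsed.
        while in_flight and in_flight[0][0] <= cycle:
            _, i = heappop(in_flight)
            ready_regs.add(program[i][1])
            done.add(i)
            print(f"cycle {cycle}: complete {program[i][0]}")
        # Step 3: issue any queued instruction whose operands are ready,
        # regardless of its position in program order.
        still_waiting = []
        for i, (name, dst, srcs, lat) in waiting:
            if set(srcs) <= ready_regs:
                print(f"cycle {cycle}: issue    {name}")
                heappush(in_flight, (cycle + lat, i))
            else:
                still_waiting.append((i, (name, dst, srcs, lat)))
        waiting = still_waiting
        # Steps 5-6: results are queued and retired strictly in program order.
        while retire_ptr < len(program) and retire_ptr in done:
            print(f"cycle {cycle}: retire   {program[retire_ptr][0]}")
            retire_ptr += 1

# i1 is a slow load; i2 depends on it, but the independent i3 completes
# long before i1, yet still retires after i1 and i2 (program order).
run([("i1: r1 = load [r0]", "r1", ["r0"], 5),
     ("i2: r2 = r1 + 1",    "r2", ["r1"], 1),
     ("i3: r3 = r0 * 2",    "r3", ["r0"], 1)])
```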

The key concept of OoOE processing is to allow the processor to avoid a class of stalls that occur when the data needed to perform an operation are unavailable. In the outline above, the OoOE processor avoids the stall that occurs in step (2) of the in-order processor when the instruction is not completely ready to be processed due to missing data.

OoOE processors fill these "slots" in time with other instructions that are ready, then re-order the results at the end to make it appear that the instructions were processed as normal. The order in which the instructions appear in the original computer code is known as program order; in the processor, they are handled in data order, the order in which the data (operands) become available in the processor's registers. Fairly complex circuitry is needed to convert from one ordering to the other and maintain a logical ordering of the output; the processor itself runs the instructions in a seemingly random order.

The benefit of OoOE processing grows as the instruction pipeline deepens and the speed difference between main memory (or cache memory) and the processor widens. On modern machines, the processor runs many times faster than the memory, so during the time an in-order processor spends waiting for data to arrive, it could have processed a large number of instructions.
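For a rough, purely illustrative calculation: if a cache miss costs 200 cycles on a processor that can otherwise complete four instructions per cycle, a single in-order stall forfeits on the order of 800 instruction slots, many of which an out-of-order design can fill with independent work.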

Dispatch and issue decoupling allows out-of-order issue

One of the differences created by the new paradigm is the creation of queues that allow the dispatch step to be decoupled from the issue step, and the graduation stage to be decoupled from the execute stage. An early name for the paradigm was decoupled architecture. In the earlier in-order processors, these stages operated in a fairly lock-step, pipelined fashion.

The instructions of the program may not be run in the originally specified order, as long as the end result is correct. This decoupling separates the fetch and decode stages from the execute stage in a pipelined processor by using a buffer.

The buffer's purpose is to partition the memory access and execute functions in a computer program and achieve high-performance by exploiting the fine-grain parallelism between the two.[11] In doing so, it effectively hides all memory latency from the processor's perspective.
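A minimal sketch of this access/execute split, using a bounded queue between a producer "access" stream and a consumer "execute" stream; the threads and all names here are illustrative stand-ins for what a decoupled architecture does in hardware:

```python
# Sketch of a decoupled access/execute pair: an "access" stream fetches
# operands ahead of time and passes them through a queue to an "execute"
# stream, hiding memory latency.  Everything here is invented for illustration.

from queue import Queue
from threading import Thread

memory = {addr: addr * 10 for addr in range(8)}   # toy memory
operand_queue = Queue(maxsize=4)                  # the decoupling buffer

def access(addresses):
    for a in addresses:
        operand_queue.put(memory[a])   # runs ahead, filling the buffer

def execute(n):
    total = 0
    for _ in range(n):
        total += operand_queue.get()   # blocks only if the buffer is empty
    print("sum:", total)

addrs = [1, 2, 3, 4]
Thread(target=access, args=(addrs,)).start()
execute(len(addrs))                    # sum: 100
```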

A larger buffer can, in theory, increase throughput. However, if the processor has a branch misprediction then the entire buffer may need to be flushed, wasting a lot of clock cycles and reducing the effectiveness. Furthermore, larger buffers create more heat and use more die space. For this reason processor designers today favour a multi-threaded design approach.

Decoupled architectures are generally thought of as not useful for general-purpose computing as they do not handle control-intensive code well.[12] Control-intensive code includes such things as nested branches, which occur frequently in operating system kernels. Decoupled architectures play an important role in scheduling in very long instruction word (VLIW) architectures.[13]

To avoid false operand dependencies, which would reduce how often instructions can be issued out of order, a technique called register renaming is used. In this scheme, there are more physical registers than are defined by the architecture. The physical registers are tagged so that multiple versions of the same architectural register can exist at the same time.
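A minimal sketch of the renaming step, assuming a free list of physical registers and omitting the reclamation of old ones (all names invented for illustration):

```python
# Minimal sketch of register renaming: every write to an architectural
# register is assigned a fresh physical register, so WAR and WAW "false"
# dependencies disappear.  Reclaiming old physical registers is omitted,
# and all names are invented for illustration.

class Renamer:
    def __init__(self, num_physical):
        self.free = list(range(num_physical))  # free physical registers
        self.map = {}                          # architectural -> physical

    def rename(self, dst, srcs):
        """Return (physical destination, physical sources) for an instruction."""
        phys_srcs = [self.map[r] for r in srcs]  # read the current versions
        phys_dst = self.free.pop(0)              # allocate a fresh version
        self.map[dst] = phys_dst                 # later readers see this one
        return phys_dst, phys_srcs

r = Renamer(num_physical=8)
r.map = {"r1": 6, "r2": 7}           # suppose r1 and r2 live in p6 and p7
r.free = list(range(6))              # p0-p5 are free
# "r1 = r1 + r2" followed by "r1 = r2 * 2": the two writes to r1 receive
# distinct physical registers, so both versions can be in flight at once.
print(r.rename("r1", ["r1", "r2"]))  # (0, [6, 7])
print(r.rename("r1", ["r2"]))        # (1, [7])
```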

Execute and writeback decoupling allows program restart

The queue for results is necessary to resolve issues such as branch mispredictions and exceptions/traps. The results queue allows programs to be restarted after an exception, which requires the instructions to be completed in program order. The queue allows results to be discarded due to mispredictions on older branch instructions and exceptions taken on older instructions.
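A minimal sketch of such a results queue, in which results become architecturally visible strictly from the head, and results younger than a mispredicted branch are discarded (all names invented for illustration; re-fetching the correct path after the flush is omitted):

```python
# Sketch of a results queue: results commit to the register file strictly
# from the head, and entries younger than a mispredicted branch are
# discarded.  All names are invented for illustration.

from collections import deque

rob = deque()              # entries: (name, dst, value), oldest at the left
register_file = {}

def complete(name, dst, value):
    rob.append((name, dst, value))    # result is queued, not yet visible

def flush_younger_than(branch_name):
    """On a misprediction, discard every result after the branch."""
    while rob and rob[-1][0] != branch_name:
        discarded = rob.pop()
        print("squash:", discarded[0])

def retire():
    """Commit results to the register file in program order."""
    while rob:
        name, dst, value = rob.popleft()
        if dst is not None:
            register_file[dst] = value
        print("retire:", name)

complete("i1: r1 = 4", "r1", 4)
complete("b1: branch",  None, None)
complete("i2: r2 = 9", "r2", 9)       # executed speculatively past b1
flush_younger_than("b1")              # b1 mispredicted: i2 is squashed
retire()
print(register_file)                  # {'r1': 4}; r2 was never written
```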

The ability to issue instructions past branches that are yet to resolve is known as speculative execution.

Micro-architectural choices

  • Are the instructions dispatched to a centralized queue or to multiple distributed queues?
IBM PowerPC processors use queues that are distributed among the different functional units while other out-of-order processors use a centralized queue. IBM uses the term reservation stations for their distributed queues.
  • Is there an actual results queue or are the results written directly into a register file? For the latter, the queueing function is handled by register maps that hold the register renaming information for each instruction in flight.
Early Intel out-of-order processors use a results queue called a re-order buffer, while most later out-of-order processors use register maps.
More precisely: Intel P6 family microprocessors have both a re-order buffer (ROB) and a register alias table (RAT). The ROB was motivated mainly by branch misprediction recovery.
The Intel P6 family was among the earliest OoOE microprocessors but was supplanted by the NetBurst architecture. Years later, NetBurst proved to be a dead end due to its long pipeline, which assumed the possibility of much higher operating frequencies. Materials were not able to match the design's ambitious clock targets due to thermal issues, and later designs based on NetBurst, namely Tejas and Jayhawk, were cancelled. Intel reverted to the P6 design as the basis of the Core and Nehalem microarchitectures. The succeeding Sandy Bridge, Ivy Bridge, and Haswell microarchitectures are a departure from the reordering techniques used in P6 and employ re-ordering techniques from the EV6 and the P4, but with a somewhat shorter pipeline.[14][15]

References

  1. ^ Kukunas, Jim (2015). Power and Performance: Software Analysis and Optimization. Morgan Kaufman. p. 37. ISBN 9780128008140.
  2. ^ "Out-of-order execution" (PDF). cs.washington.edu. 2006. Retrieved 2014-01-17. don't wait for previous instructions to execute if this instruction does not depend on them
  3. ^ "Out-of-order Execution". pcguide.com. Retrieved 2014-01-17. This flexibility improves performance since it allows execution with less 'waiting' time.
  4. ^ Hwu, W.; Patt, Yale N. (1986). HPSm, a high performance restricted data flow architecture having minimal functionality. ISCA '86 Proceedings of the 13th annual international symposium on Computer architecture. ACM. pp. 297–306. ISBN 978-0-8186-0719-6. Retrieved 2013-12-06.
  5. ^ Thornton (1970), p. 125.
  6. ^ Thornton (1970), p. 126.
  7. ^ Thornton (1970), p. 127.
  8. ^ Thornton (1970), pp. 112, 123.
  9. ^ Tomasulo, Robert Marco (1967), "An Efficient Algorithm for Exploiting Multiple Arithmetic Units" (PDF), IBM Journal of Research and Development, 11 (1): 25–33, CiteSeerX 10.1.1.639.7540, doi:10.1147/rd.111.0025, S2CID 8445049, archived from the original (PDF) on 2018-06-12
  10. ^ Anand Lal Shimpi (2013-05-06). "Intel's Silvermont Architecture Revealed: Getting Serious About Mobile". AnandTech.
  11. ^ Smith, J. E. (1984). "Decoupled access/execute computer architectures". ACM Transactions on Computer Systems. 2 (4): 289–308. CiteSeerX 10.1.1.127.4475. doi:10.1145/357401.357403. S2CID 13903321.
  12. ^ Kurian, L.; Hulina, P. T.; Coraor, L. D. (1994). "Memory latency effects in decoupled architectures" (PDF). IEEE Transactions on Computers. 43 (10): 1129–1139. doi:10.1109/12.324539. S2CID 6913858. Archived from the original (PDF) on 2018-06-12.
  13. ^ Dorojevets, M. N.; Oklobdzija, V. (1995). "Multithreaded decoupled architecture". International Journal of High Speed Computing. 7 (3): 465–480. doi:10.1142/S0129053395000257.
  14. ^ Kanter, David (2010-09-25). "Intel's Sandy Bridge Microarchitecture".
  15. ^ "The Haswell Front End - Intel's Haswell Architecture Analyzed: Building a New PC and a New Intel".

Further reading

  • Smith, James E.; Pleszkun, A. R. (June 1985). "Implementation of precise interrupts in pipelined processors". ACM SIGARCH Computer Architecture News. 13 (3): 36–44. doi:10.1145/327070.327125.
  • Thornton, J. E. (1970). Design of a Computer: The Control Data 6600. Glenview, Illinois: Scott, Foresman and Company.