Speculative execution


Speculative execution is an optimization technique in which a computer system performs some task that may turn out not to be needed. The work is done before it is known whether it is actually required, so as to avoid the delay that would be incurred by doing it only after the need is established. If the work turns out to have been unnecessary, most changes it made are reverted and its results are ignored.

The objective is to provide more concurrency if extra resources are available. This approach is employed in a variety of areas, including branch prediction in pipelined processors, value prediction for exploiting value locality, prefetching memory and files, and optimistic concurrency control in database systems.[1][2][3]

Speculative multithreading is a special case of speculative execution.

Overview

Modern pipelined microprocessors use speculative execution to reduce the cost of conditional branch instructions, using schemes that predict the execution path of a program based on the history of branch executions.[2] To improve performance and the utilization of computer resources, instructions can be scheduled before it has been determined that they will need to be executed, that is, ahead of a branch.[4]
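
As a concrete illustration (a minimal sketch, not taken from the cited sources), the following simulates a two-bit saturating-counter predictor, a common history-based scheme of the kind described above; the branch pattern and the initial counter state are illustrative assumptions.

    /* Two-bit saturating counter: values 0-1 predict "not taken",
       2-3 predict "taken"; each resolved branch nudges the counter
       toward the observed outcome. */
    #include <stdio.h>

    static unsigned counter = 2;                 /* start in "weakly taken" */

    static int predict_taken(void) { return counter >= 2; }

    static void train(int taken) {               /* update on the real outcome */
        if (taken  && counter < 3) counter++;
        if (!taken && counter > 0) counter--;
    }

    int main(void) {
        /* A loop-closing branch: taken seven times, then falls through once. */
        int outcomes[8] = {1, 1, 1, 1, 1, 1, 1, 0};
        int correct = 0;
        for (int i = 0; i < 8; i++) {
            if (predict_taken() == outcomes[i]) correct++;
            train(outcomes[i]);
        }
        printf("correct predictions: %d of 8\n", correct);  /* 7 of 8 here */
        return 0;
    }

For this strongly biased branch the predictor mispredicts only the final iteration, which is why history-based schemes work well on loop branches.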

Variants

Speculative computation was a related earlier concept.[5]

Eager execution

Eager execution is a form of speculative execution in which both sides of a conditional branch are executed; only the results of the side actually selected by the predicate are committed, and the rest are discarded. With unlimited resources, eager execution (also known as oracle execution) would in theory provide the same performance as perfect branch prediction. With limited resources, eager execution should be employed carefully, since the number of resources needed grows exponentially with each level of branch executed eagerly.[6]
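
As a rough software analogy (an illustrative sketch, not a description of any particular hardware), eager execution can be pictured as computing both arms of a conditional and committing only the one the predicate selects; nesting this over k branch levels would keep 2^k paths in flight, which is the exponential resource growth noted above.

    #include <stdio.h>

    static int then_arm(int x) { return x * 2; }    /* work on the "true" side  */
    static int else_arm(int x) { return x + 100; }  /* work on the "false" side */

    /* Both arms are executed regardless of the predicate; only the selected
       result is committed, so nothing waits for the predicate to resolve. */
    static int eager_select(int predicate, int x) {
        int t = then_arm(x);
        int e = else_arm(x);
        return predicate ? t : e;
    }

    int main(void) {
        printf("%d\n", eager_select(1, 21));  /* 42  */
        printf("%d\n", eager_select(0, 21));  /* 121 */
        return 0;
    }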

Predictive execution

Predictive execution is a form of speculative execution where some outcome is predicted and execution proceeds along the predicted path until the actual result is known. If the prediction turns out to be correct, the speculatively executed work is allowed to commit; if it was a misprediction, the speculative work has to be rolled back and re-executed along the correct path. Common forms of this include branch predictors and memory dependence prediction. A generalized form is sometimes referred to as value prediction.[7]
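
The predict / verify / commit-or-squash cycle can be sketched as follows (a minimal illustration in the style of value prediction; the function names and state are assumptions, not any real microarchitecture):

    #include <stdio.h>

    static int arch_reg = 0;                       /* committed, architectural state */

    static int do_work(int input) { return input * input; }

    static void run(int predicted_input, int actual_input) {
        int spec = do_work(predicted_input);       /* execute before the input is known */
        if (predicted_input == actual_input)
            arch_reg = spec;                       /* prediction correct: commit result */
        else
            arch_reg = do_work(actual_input);      /* misprediction: discard and redo   */
    }

    int main(void) {
        run(3, 3); printf("%d\n", arch_reg);       /* 9: speculative result committed   */
        run(4, 5); printf("%d\n", arch_reg);       /* 25: speculative result squashed   */
        return 0;
    }

When the prediction is right, the result is ready the moment the real input arrives; when it is wrong, the cost is the wasted speculative work plus the re-execution.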

Related concepts

Lazy execution

Lazy execution is the opposite of eager execution and does not involve speculation. The incorporation of speculative execution into implementations of the Haskell programming language, a lazy language, has been a research topic. Eager Haskell, a variant of the language, is designed around the idea of speculative execution. A 2003 PhD thesis added a kind of speculative execution, called optimistic execution, to GHC, with an abort mechanism to back out in case of a bad choice.[8] It was deemed too complicated.[9]
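
For contrast, the demand-driven idea behind lazy execution can be sketched with a thunk that defers its computation until the result is actually required (an illustrative sketch in C, not how GHC represents thunks):

    #include <stdio.h>

    typedef struct {
        int (*compute)(int);   /* the deferred computation  */
        int arg;               /* its argument              */
        int evaluated;         /* has it been forced yet?   */
        int value;             /* cached result once forced */
    } thunk;

    static int expensive(int x) { puts("computing..."); return x * x; }

    static int force(thunk *t) {
        if (!t->evaluated) {                 /* work happens only on first demand */
            t->value = t->compute(t->arg);
            t->evaluated = 1;
        }
        return t->value;
    }

    int main(void) {
        thunk t = { expensive, 7, 0, 0 };
        /* No work has been done yet; it runs only because we force the thunk. */
        printf("%d\n", force(&t));           /* prints "computing..." then 49 */
        printf("%d\n", force(&t));           /* cached: prints 49 only        */
        return 0;
    }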

Security vulnerabilities

Starting in 2017, a series of security vulnerabilities was found in the implementations of speculative execution on common processor architectures. These effectively enable an elevation of privileges, for example by allowing unprivileged code to infer the contents of privileged memory.

These include:

  • Spectre
  • Meltdown
  • SPOILER
  • Foreshadow
  • Microarchitectural Data Sampling

References

  1. ^ Lampson, Butler (12 December 2006). Lazy and Speculative Execution. Microsoft Research. OPODIS, Bordeaux, France.
  2. ^ a b Raghavan, Prabhakar; Schachnai, Hadas; Yaniv, Mira (1998). Dynamic schemes for speculative execution of code. IBM Research Division. Retrieved 18 January 2011.
  3. ^ Kung, H. T.; Robinson, John T. (June 1981). "On Optimistic Methods for Concurrency Control" (PDF). ACM Transactions on Database Systems. 6.
  4. ^ Krieg-Brückner, Bernd (1992). ESOP '92: 4th European Symposium on Programming, Rennes, France, February 26–28, 1992: Proceedings. Springer. pp. 56–57. ISBN 978-3-540-55253-6. Retrieved 18 January 2011.
  5. ^ Osborne, Randy B. (1990-03-21). "Speculative Computation in Multilisp". Parallel Lisp: Languages and Systems (PS). Lecture Notes in Computer Science. 441. Digital Equipment Corporation Research Lab. pp. 103–137. doi:10.1007/BFb0024152. ISBN 3-540-52782-6. Retrieved 2018-01-26.
  6. ^ Šilc, Jurij; Robič, Borut; Ungerer, Theo (1999). Processor Architecture: From Dataflow to Superscalar and Beyond. Springer. pp. 148–150. ISBN 978-3-540-64798-0. Retrieved 21 January 2011.
  7. ^ Hill, Mark D.; Jouppi, Norman P.; Sohi, Gurindar S. (2000). Readings in Computer Architecture. Morgan Kaufmann. ISBN 9781558605398. Retrieved 5 January 2018.
  8. ^ Peyton Jones, Simon; Ennals, Robert (1 August 2003). "Optimistic Evaluation: a fast evaluation strategy for non-strict programs". Retrieved 15 May 2019 – via www.microsoft.com.
  9. ^ Haskell mailing list post, August 2006: https://mail.haskell.org/pipermail/haskell/2006-August/018424.html