Dimitri Bertsekas

From Wikipedia, the free encyclopedia
Dimitri P. Bertsekas[1]
Born: 1942
Nationality: Greek
Citizenship: American, Greek
Alma mater: National Technical University of Athens (1968)[2]
Known for: Nonlinear programming; convex optimization; dynamic programming; approximate dynamic programming; stochastic systems and optimal control; data communication network optimization
Awards: 1997 INFORMS Computing Society (ICS) Prize; 1999 Greek National Award for Operations Research; 2001 ACC John R. Ragazzini Education Award; 2001 Member of the United States National Academy of Engineering; 2009 INFORMS Expository Writing Award; 2014 AACC Richard E. Bellman Control Heritage Award; 2014 INFORMS Khachiyan Prize; 2015 SIAM/MOS Dantzig Prize; 2018 INFORMS John von Neumann Theory Prize; 2022 IEEE Control Systems Award
Scientific career
Fields: Optimization, mathematics, control theory, and data communication networks
Institutions: The George Washington University; Stanford University; University of Illinois at Urbana-Champaign; Massachusetts Institute of Technology
Thesis: Control of Uncertain Systems with a Set-Membership Description of the Uncertainty (1971)
Doctoral advisor: Ian Burton Rhodes[3]
Other academic advisors: Michael Athans
Doctoral students: Steven E. Shreve; Paul Tseng

Dimitri Panteli Bertsekas (Greek: Δημήτρης Παντελής Μπερτσεκάς; born 1942, Athens) is an applied mathematician, electrical engineer, and computer scientist. He is a McAfee Professor in the Department of Electrical Engineering and Computer Science in the School of Engineering at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, and also a Fulton Professor of Computational Decision Making at Arizona State University, Tempe.

Biography

Bertsekas was born in Greece and spent his childhood there. He studied for five years at the National Technical University of Athens, Greece; for about a year and a half at The George Washington University, Washington, D.C., where he obtained his M.S. in electrical engineering in 1969; and for about two years at MIT, where he obtained his doctorate in system science in 1971. Prior to joining the MIT faculty in 1979, he taught for three years in the Engineering-Economic Systems Department of Stanford University, and for five years in the Electrical and Computer Engineering Department of the University of Illinois at Urbana-Champaign. In 2019, he was appointed a full-time professor at the School of Computing, Informatics, and Decision Systems Engineering at Arizona State University, Tempe, while maintaining a research position at MIT.[4][5]

He is known for his research and for his eighteen textbooks and monographs on theoretical and algorithmic optimization, control, and applied probability. His work ranges from theoretical and foundational contributions, to algorithmic analysis and design for optimization problems, to applications such as data communication and transportation networks and electric power generation. He is listed among the 100 most cited computer science authors[6] in the CiteSeer academic database[7] and digital library.[8] In 1995, he co-founded the publishing company Athena Scientific, which, among other titles, publishes most of his books.

In the late 1990s Bertsekas developed a strong interest in digital photography. His photographs have been exhibited on several occasions at MIT.[9]

Awards and honors

Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science[10] for his book "Neuro-Dynamic Programming" (co-authored with John N. Tsitsiklis); the 2000 Greek National Award for Operations Research; and the 2001 ACC John R. Ragazzini Education Award for outstanding contributions to education.[11] In 2001, he was elected to the US National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks".[12] In 2009, he was awarded the 2009 INFORMS Expository Writing Award for his ability to "communicate difficult mathematical concepts with unusual clarity, thereby reaching a broad audience across many disciplines. " [13] In 2014 he received the Richard E. Bellman Control Heritage Award from the American Automatic Control Council,[14][15] the Khachiyan Prize for life-time achievements in the area of optimization from the INFORMS Optimization Society.[16] Also he received the 2015 Dantzig prize from SIAM and the Mathematical Optimization Society,[17] the 2018 INFORMS John von Neumann Theory Prize (jointly with Tsitsiklis) for the books "Neuro-Dynamic Programming" and "Parallel and Distributed Algorithms",[13] and the 2022 IEEE Control Systems Award for “fundamental contributions to the methodology of optimization and control”, and “outstanding monographs and textbooks”.[18]

Textbooks and research monographs

Bertsekas' textbooks include

  • Dynamic Programming and Optimal Control (1996)
  • Data Networks (1989, co-authored with Robert G. Gallager)
  • Nonlinear Programming (1996)
  • Introduction to Probability (2003, co-authored with John N. Tsitsiklis)
  • Convex Optimization Algorithms (2015)

all of which are used for classroom instruction at MIT.[19][20] Some of these books have appeared in multiple editions and have been translated into several languages.

He has also written several research monographs,[21] which collectively contain most of his research. These include:

  • Stochastic Optimal Control: The Discrete-Time Case (1978, co-authored with S. E. Shreve), a mathematically advanced work that establishes the measure-theoretic foundations of dynamic programming and stochastic control.
  • Constrained Optimization and Lagrange Multiplier Methods (1982), the first monograph to address comprehensively the algorithmic convergence issues of augmented Lagrangian and sequential quadratic programming methods.
  • Parallel and Distributed Computation: Numerical Methods (1989, co-authored with John N. Tsitsiklis), which, among other contributions, established the fundamental theoretical structures for the analysis of distributed asynchronous algorithms.
  • Linear Network Optimization (1991) and Network Optimization: Continuous and Discrete Models (1998), which, among other topics, comprehensively discuss the class of auction algorithms for assignment and network flow optimization, developed by Bertsekas over a period of 20 years starting in 1979.
  • Neuro-Dynamic Programming (1996, co-authored with Tsitsiklis), which laid the theoretical foundations for suboptimal approximations of highly complex sequential decision-making problems.
  • Convex Analysis and Optimization (2003, co-authored with A. Nedic and A. Ozdaglar) and Convex Optimization Theory (2009), which provide a new line of development for optimization duality theory, a new connection between the theory of Lagrange multipliers and nonsmooth analysis, and a comprehensive development of incremental subgradient methods.
  • Abstract Dynamic Programming (2013), which aims at a unified development of the core theory and algorithms of total-cost sequential decision problems, based on the strong connections of the subject with fixed-point theory. A second edition, which includes most of his research on dynamic programming in the period 2013-2017, appeared in 2018.
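
The auction algorithm mentioned above can be illustrated with a minimal sketch for the assignment problem. This is the naive version without the ε-scaling refinement of Bertsekas's full method: each unassigned person "bids" for its best object, raising that object's price by the difference between the best and second-best net values plus a small increment `eps` (all names here are illustrative). With integer benefits and `eps` below 1/n, the resulting assignment is optimal.

```python
def auction_assignment(benefit, eps=0.01):
    """Naive auction algorithm for the n-by-n assignment problem.

    benefit[i][j] is the value of assigning person i to object j.
    Returns a dict mapping each person to an object; the assignment
    is optimal when benefits are integers and eps < 1/n.
    """
    n = len(benefit)
    prices = [0.0] * n          # current price of each object
    owner = [None] * n          # owner[j] = person currently holding object j
    assigned = {}               # person -> object
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        # Net value of each object for person i at current prices
        values = [benefit[i][j] - prices[j] for j in range(n)]
        best_j = max(range(n), key=values.__getitem__)
        best_v = values[best_j]
        second_v = max((v for j, v in enumerate(values) if j != best_j),
                       default=best_v)
        # Bid: raise the price by the value margin plus eps
        prices[best_j] += best_v - second_v + eps
        prev = owner[best_j]
        if prev is not None:    # evict the previous owner of the object
            del assigned[prev]
            unassigned.append(prev)
        owner[best_j] = i
        assigned[i] = best_j
    return assigned
```

The `eps` increment guarantees termination: every bid strictly raises a price, so objects cannot change hands forever, at the cost of a final assignment that is within n·eps of optimal.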

His latest research monographs are Reinforcement Learning and Optimal Control (2019), which aims to explore the common boundary between dynamic programming/optimal control and artificial intelligence, and to form a bridge that is accessible by workers with background in either field, and Rollout, Policy Iteration, and Distributed Reinforcement Learning (2020), which focuses on the fundamental idea of policy iteration, its one iteration counterpart rollout, and their distributed and multiagent implementations. Some of these methods have been the backbones for high-profile successes in games such as chess, Go, and backgammon.[22][23][24]

See also[]

References

  1. Dimitri Bertsekas was elected in 2001 as a member of the National Academy of Engineering in Electronics, Communication & Information Systems Engineering for pioneering contributions to fundamental research, practice, and education of optimization/control theory, and especially its application to data communication networks.
  2. Dimitri P. Bertsekas' biography
  3. Dimitri Bertsekas at the Mathematics Genealogy Project
  4. Biography from Bertsekas' MIT home page
  5. Biography from Bertsekas' ASU home page
  6. One of the top 100 most cited computer science authors
  7. CiteSeer most cited authors in computer science, August 2006
  8. Google Scholar citations
  9. Photo exhibition at MIT. Archived 2010-06-21 at the Wayback Machine.
  10. Citation of the 1997 INFORMS ICS Prize
  11. 2001 ACC John R. Ragazzini Education Award
  12. Election citation by the National Academy of Engineering. Archived 2010-05-28 at the Wayback Machine.
  13. "2009 Saul Gass Expository Writing Award". INFORMS, The Institute for Operations Research and the Management Sciences.
  14. Bellman award to Bertsekas
  15. Acceptance speech for Bellman award
  16. "Khachiyan Prize Citation". Archived from the original on 2016-03-04. Retrieved 2014-11-02.
  17. Dantzig Prize citation
  18. "Current IEEE Corporate Award Recipients". IEEE Awards. Retrieved 2021-07-11.
  19. MIT OpenCourseWare
  20. Course 6.253 Convex Analysis and Optimization from MIT OCW
  21. Books by Dimitri Bertsekas
  22. Tesauro, Gerald (1995-03-01). "Temporal difference learning and TD-Gammon". Communications of the ACM. 38 (3): 58–68. doi:10.1145/203330.203343. ISSN 0001-0782. S2CID 8763243.
  23. Silver, David; Schrittwieser, Julian; Simonyan, Karen; Antonoglou, Ioannis; Huang, Aja; Guez, Arthur; Hubert, Thomas; Baker, Lucas; Lai, Matthew; Bolton, Adrian; Chen, Yutian (October 2017). "Mastering the game of Go without human knowledge". Nature. 550 (7676): 354–359. Bibcode:2017Natur.550..354S. doi:10.1038/nature24270. ISSN 1476-4687. PMID 29052630. S2CID 205261034.
  24. Silver, David; Hubert, Thomas; Schrittwieser, Julian; Antonoglou, Ioannis; Lai, Matthew; Guez, Arthur; Lanctot, Marc; Sifre, Laurent; Kumaran, Dharshan; Graepel, Thore; Lillicrap, Timothy (2017-12-05). "Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm". arXiv:1712.01815 [cs.AI].
