References

  1. Jean-François Baffier, Man-Kwun Chiu, Yago Diez, Matias Korman, Valia Mitsou, André van Renssen, Marcel Roeloffzen & Yushi Uno (2017): Hanabi is NP-hard, even for cheaters who look at their cards. Theoretical Computer Science 675, pp. 43–55, doi:10.1016/j.tcs.2017.02.024.
  2. Anton Bakhtin, David J. Wu, Adam Lerer & Noam Brown (2021): No-Press Diplomacy from Scratch. Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pp. 18063–18074. Available at https://proceedings.neurips.cc/paper/2021/hash/95f2b84de5660ddf45c8a34933a2e66f-Abstract.html.
  3. Christopher Berner, Greg Brockman, Brooke Chan, Vicki Cheung, Przemysław Dȩbiak, Christy Dennison, David Farhi, Quirin Fischer, Shariq Hashme & Chris Hesse (2019): Dota 2 with Large Scale Deep Reinforcement Learning. ArXiv:1912.06680.
  4. Andrea Celli, Marco Ciccone, Raffaele Bongo & Nicola Gatti (2019): Coordination in Adversarial Sequential Team Games via Multi-Agent Deep Reinforcement Learning. ArXiv:1912.07712.
  5. Michael J. Coulombe & Jayson Lynch (2018): Cooperating in Video Games? Impossible! Undecidability of Team Multiplayer Games. 9th International Conference on Fun with Algorithms (FUN 2018) 100, pp. 14:1–14:16, doi:10.4230/LIPIcs.FUN.2018.14.
  6. Erik D. Demaine & Robert A. Hearn (2008): Constraint logic: A uniform framework for modeling computation as games. In: 2008 23rd Annual IEEE Conference on Computational Complexity. IEEE, College Park, MD, USA, pp. 149–162, doi:10.1109/CCC.2008.35.
  7. Jakob N. Foerster, Yannis M. Assael, Nando de Freitas & Shimon Whiteson (2016): Learning to Communicate with Deep Multi-Agent Reinforcement Learning. Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain, pp. 2137–2145. Available at https://proceedings.neurips.cc/paper/2016/hash/c7635bfd99248a2cdef8249ef7bfbef4-Abstract.html.
  8. Natasha Jaques, Angeliki Lazaridou, Edward Hughes, Caglar Gulcehre, Pedro Ortega, DJ Strouse, Joel Z. Leibo & Nando De Freitas (2019): Social Influence as Intrinsic Motivation for Multi-Agent Deep Reinforcement Learning. Proceedings of the 36th International Conference on Machine Learning 97, pp. 3040–3049. Available at https://proceedings.mlr.press/v97/jaques19a.html.
  9. Philip Paquette, Yuchen Lu, Steven Bocco, Max O. Smith, Satya Ortiz-Gagne, Jonathan K. Kummerfeld, Joelle Pineau, Satinder Singh & Aaron C. Courville (2019): No-Press Diplomacy: Modeling Multi-Agent Gameplay. Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pp. 4476–4487. Available at https://proceedings.neurips.cc/paper/2019/hash/84b20b1f5a0d103f5710bb67a043cd78-Abstract.html.
  10. Gary Peterson, John Reif & Salman Azhar (2001): Lower bounds for multiplayer noncooperative games of incomplete information. Computers & Mathematics with Applications 41(7-8), pp. 957–992, doi:10.1016/S0898-1221(00)00333-3.
  11. Gary L. Peterson & John H. Reif (1979): Multiple-person alternation. In: 20th Annual Symposium on Foundations of Computer Science (sfcs 1979). IEEE, San Juan, Puerto Rico, pp. 348–363, doi:10.1109/SFCS.1979.25.
  12. Frederick Reiber (2021): The Crew: The Quest for Planet Nine is NP-Complete. ArXiv:2110.11758.
  13. Giovanni Viglietta (2014): Gaming is a hard job, but someone has to do it! Theory of Computing Systems 54(4), pp. 595–621, doi:10.1007/s00224-013-9497-5.
  14. Oriol Vinyals, Igor Babuschkin, Junyoung Chung, Michael Mathieu, Max Jaderberg, Wojtek Czarnecki, Andrew Dudzik, Aja Huang, Petko Georgiev, Richard Powell, Timo Ewalds, Dan Horgan, Manuel Kroiss, Ivo Danihelka, John Agapiou, Junhyuk Oh, Valentin Dalibard, David Choi, Laurent Sifre, Yury Sulsky, Sasha Vezhnevets, James Molloy, Trevor Cai, David Budden, Tom Paine, Caglar Gulcehre, Ziyu Wang, Tobias Pfaff, Toby Pohlen, Dani Yogatama, Julia Cohen, Katrina McKinney, Oliver Smith, Tom Schaul, Timothy Lillicrap, Chris Apps, Koray Kavukcuoglu, Demis Hassabis & David Silver (2019): AlphaStar: Mastering the Real-Time Strategy Game StarCraft II. https://deepmind.com/blog/alphastar-mastering-real-time-strategy-game-starcraft-ii/.