‘poker AI’ directory
- See Also
- Links
  - “SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning”, Liu et al 2025
  - “Testosterone Gave Me My Life Back: Lessons from a Year of TRT”, Hall 2025
  - “Player of Games”, Schmid et al 2021
  - “Measuring Skill and Chance in Games”, Duersch et al 2020
  - “ReBeL: Combining Deep Reinforcement Learning and Search for Imperfect-Information Games”, Brown et al 2020
  - “Approximate Exploitability: Learning a Best Response in Large Games”, Timbers et al 2020
  - “Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms”, Zhang et al 2019
  - “Pluribus: Superhuman AI for Multiplayer Poker”, Brown & Sandholm 2019
  - “NeuRD: Neural Replicator Dynamics”, Hennes et al 2019
  - “α-Rank: Multi-Agent Evaluation by Evolution”, Omidshafiei et al 2019
  - “Deep Counterfactual Regret Minimization”, Brown et al 2018
  - “Actor-Critic Policy Optimization in Partially Observable Multiagent Environments”, Srinivasan et al 2018
  - “Safe and Nested Subgame Solving for Imperfect-Information Games”, Brown & Sandholm 2017
  - “DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker”, Moravčík et al 2017
  - “Equilibrium Approximation Quality of Current No-Limit Poker Bots”, Lisý & Bowling 2016
  - “Deep Reinforcement Learning from Self-Play in Imperfect-Information Games”, Heinrich & Silver 2016
  - “Non-Cooperative Games”, Nash 1951
- Sort By Magic
- Wikipedia (2)
- Bibliography
See Also
Links
“SPIRAL: Self-Play on Zero-Sum Games Incentivizes Reasoning via Multi-Agent Multi-Turn Reinforcement Learning”, Liu et al 2025
“Testosterone Gave Me My Life Back: Lessons from a Year of TRT”, Hall 2025
“Player of Games”, Schmid et al 2021
“Measuring Skill and Chance in Games”, Duersch et al 2020
“ReBeL: Combining Deep Reinforcement Learning and Search for Imperfect-Information Games”, Brown et al 2020
“Approximate Exploitability: Learning a Best Response in Large Games”, Timbers et al 2020
“Multi-Agent Reinforcement Learning: A Selective Overview of Theories and Algorithms”, Zhang et al 2019
“Pluribus: Superhuman AI for Multiplayer Poker”, Brown & Sandholm 2019
“NeuRD: Neural Replicator Dynamics”, Hennes et al 2019
“α-Rank: Multi-Agent Evaluation by Evolution”, Omidshafiei et al 2019
“Deep Counterfactual Regret Minimization”, Brown et al 2018
“Actor-Critic Policy Optimization in Partially Observable Multiagent Environments”, Srinivasan et al 2018
“Safe and Nested Subgame Solving for Imperfect-Information Games”, Brown & Sandholm 2017
“DeepStack: Expert-Level Artificial Intelligence in No-Limit Poker”, Moravčík et al 2017
“Equilibrium Approximation Quality of Current No-Limit Poker Bots”, Lisý & Bowling 2016
“Deep Reinforcement Learning from Self-Play in Imperfect-Information Games”, Heinrich & Silver 2016
“Non-Cooperative Games”, Nash 1951
Sort By Magic
Annotations sorted by machine learning into inferred 'tags'. This provides an alternative way to browse: instead of by date order, one can browse in topic order. The 'sorted' list has been automatically clustered into multiple sections & auto-labeled for easier browsing.
Beginning with the newest annotation, it uses the embedding of each annotation to attempt to create a list of nearest-neighbor annotations, creating a progression of topics. For more details, see the link.
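The embedding-based ordering described above can be illustrated with a short sketch: starting from the newest annotation, greedily append the most similar unvisited annotation, so adjacent entries form a progression of related topics. This is a hypothetical illustration (NumPy, cosine similarity, greedy nearest-neighbor chaining), not the site's actual implementation, and it omits the separate clustering and auto-labeling step that produces the tag names below.

```python
import numpy as np

def sort_by_similarity(embeddings: np.ndarray, newest_index: int = 0) -> list[int]:
    """Greedy nearest-neighbor ordering of annotation embeddings.

    embeddings   : (n, d) array, one embedding vector per annotation.
    newest_index : index of the newest annotation, used as the starting point.
    Returns a permutation of range(n) giving a topic-ordered browsing order.
    """
    # Normalize rows so dot products are cosine similarities.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    order = [newest_index]
    remaining = set(range(normed.shape[0])) - {newest_index}
    while remaining:
        last = normed[order[-1]]
        # Pick the unvisited annotation most similar to the one chosen last.
        best = max(remaining, key=lambda i: float(normed[i] @ last))
        order.append(best)
        remaining.remove(best)
    return order
```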
multiagent-learning
multiplayer-ai
poker-bots
Wikipedia (2)
Bibliography
https://arxiv.org/abs/2112.03178#deepmind: “Player of Games”, Schmid et al 2021