diplomacy_cicero contains the training code, configurations, and model checkpoints for Cicero, Facebook AI Research’s (FAIR) agent that achieved human-level performance in the game of Diplomacy by combining language models with strategic reasoning. Cicero is described in the Science (2022) paper “Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning” and is one of the first AI systems to integrate natural-language negotiation with multi-agent planning. The repository also includes code for Diplodocus, the no-dialogue agent described in the ICLR 2023 paper “Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning.”
The codebase combines reinforcement learning and language-modeling components, using the ParlAI framework for dialogue modeling and a custom RL framework for planning and exploitation.
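As an illustration of the ParlAI side, the sketch below loads a dialogue checkpoint through ParlAI’s standard observe/act interface. The checkpoint path is a placeholder, and the repo’s own dialogue agents are loaded via its configs rather than this minimal call; this is just the generic ParlAI pattern, not the repo’s API.

```python
# Minimal sketch of ParlAI's dialogue interface. The checkpoint path below is
# a hypothetical placeholder, not a file shipped with this repository.
from parlai.core.agents import create_agent_from_model_file

# Load a dialogue model from a local checkpoint (placeholder path).
agent = create_agent_from_model_file("models/dialogue/model")

# Standard ParlAI observe/act loop: feed an observation, get a reply.
agent.observe({
    "text": "England, will you support my move into Belgium?",
    "episode_done": False,
})
reply = agent.act()
print(reply["text"])
```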
Features
- Implements Cicero, the first AI to achieve human-level Diplomacy play with strategic dialogue
- Combines large language modeling with deep reinforcement learning and planning
- Includes code for both full-press (dialogue) and no-press (non-dialogue) Diplomacy agents
- Uses ParlAI for language understanding and generation, and a custom RL stack for game strategy
- Provides tools for simulating games, benchmarking, and visualizing game progress (see the simulation sketch after this list)
- Includes extensive configs, pre-trained models, and modular test frameworks
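As a rough picture of what “simulating games” means here, the sketch below runs a no-press game with random orders using the open-source `diplomacy` Python engine, whose Game interface the repo’s C++ `pydipcc` engine mirrors. This is an assumption-laden illustration of the game loop, not the repo’s agent or runner API, which is driven through its configs.

```python
# Minimal sketch: play out a no-press game with random orders using the
# open-source `diplomacy` engine (pip install diplomacy). The repo's own
# agents plug into an equivalent loop via its config-driven runners.
import random

from diplomacy import Game

game = Game()
while not game.is_game_done:
    # Map of location -> list of legal order strings for the current phase.
    possible_orders = game.get_all_possible_orders()
    for power_name in game.powers:
        orders = [
            random.choice(possible_orders[loc])
            for loc in game.get_orderable_locations(power_name)
            if possible_orders[loc]
        ]
        game.set_orders(power_name, orders)
    # Adjudicate the phase and advance the game state.
    game.process()

print("Final centers:", {name: len(power.centers) for name, power in game.powers.items()})
```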