Stay up to date with the work we’re doing to reduce catastrophic risks from competition for the development of transformative AI. Here you can find posts about our research results, software tools, and community developments. While we love sharing our insights, we first make sure they don’t constitute information hazards; for that reason, some of our research isn’t publicly available.
August 4, 2022
In collaboration with Professor Robert Trager, we've created an interactive web app implementing the Safety-Performance Tradeoff (SPT) model developed by him, Paolo Bova, Nicholas Emery-Xu, Eoghan Stafford, and Allan Dafoe. The web app allows other researchers and decision-makers to explore how safety insights could affect the safety choices of competing AI developers. When exploring the model, one can either set the parameters for two different scenarios or choose from several presets. The effects can then be compared graphically, and the web app summarizes the key model insights, for example explaining how an AI safety breakthrough can fail to decrease the risks of AI systems.
June 18, 2022
We're happy to share our fourth progress report, summarizing our work during the second half-year of 2021. As explained in the strategy section, we primarily focused on building research software tools in collaboration with other AI governance researchers such as Professor Robert Trager and Shahar Avin, while also revising and substantially extending our technical report evaluating the Windfall Clause policy. The progress report elaborates on these accomplishments and the roadblocks we encountered, and notes any updates to our team members and future goals since our previous progress report.
January 2, 2022
We’re excited to announce that we received funding from the Survival and Flourishing Fund (SFF) for the third time in a row. The $83,000 grant, funded by Jaan Tallinn, enables us to fund our current team for the entire year of 2022 and even expand Modeling Cooperation slightly. We’re looking forward to advancing our research and ongoing collaborations, so please reach out to us if you are interested in discussing how we could work together.
September 19, 2021
We’re delighted to introduce our third progress report, which provides an overview of the work we conducted in the first half-year of 2021. The report summarizes our accomplishments, including authoring a technical report evaluating the Windfall Clause policy and starting a collaboration with Professor Robert Trager to build a web app implementing an AI competition model. It also highlights the roadblocks we encountered, as well as any updates to our strategy and future goals since our previous progress report.
February 12, 2021
We’re happy to share our second progress report, which summarizes the work we’ve been doing during the second half-year of 2020. In addition to elaborating on our accomplishments and roadblocks, the report also reflects any changes to our team, strategy, and future goals since our previous progress report.
January 19, 2021
While implementing our model, we created a write-up in which we identify three foundational issues with scientific model implementations (reliability, composability, and sustainability) and describe how we address them. In addition to ensuring our model’s reproducibility and accuracy, we hope to provide a starting point for a technological foundation that enables others to create high-quality agent-based models.
December 9, 2020
We’re proud to announce that we received a $74,000 grant from Jaan Tallinn (via SFF) to continue our research in 2021. Thanks to this contribution, we’re looking forward to further advancing our research on improving cooperation in competition for transformative AI and making computational approaches less of a neglected topic in AI governance.
November 22, 2020
As part of our agent-based modeling approach, we released version 1.0.0 of bl.cli, a tool to run Monte Carlo simulations of our model. We initially built bl.cli so our economists could quickly produce data for comparing the results of AI competition scenarios under heuristic strategies. bl.cli runs the Monte Carlo simulation and streams the results to a file. It has grown into a tool that allows anyone comfortable with command-line applications to run Monte Carlo simulations of our model and, in the long run, reproduce our results.
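The core pattern described above, running many simulation trials and streaming each result to a file as it completes, can be sketched roughly as follows. This is a minimal illustrative sketch, not bl.cli's actual code or interface; the trial dynamics here are a placeholder, and the function and file names are hypothetical.

```python
import csv
import random

def run_trial(rng):
    """One placeholder trial: sample a hypothetical outcome.
    (Stand-in dynamics, not the actual AI competition model.)"""
    return sum(rng.random() for _ in range(10))

def stream_monte_carlo(path, n_trials, seed=0):
    """Run n_trials and stream each result to a CSV row as it completes."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["trial", "result"])
        for i in range(n_trials):
            # writing row-by-row keeps memory flat even for large runs
            writer.writerow([i, run_trial(rng)])

stream_monte_carlo("results.csv", 1000)
```

Streaming results incrementally, rather than collecting them in memory and dumping them at the end, is what makes large simulation runs practical and lets a comparison script consume partial output while the run is still in progress.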
August 9, 2020
Wondering what Modeling Cooperation has been up to lately? We’re excited to share our first progress report which focuses on the first half-year of 2020. The report introduces our team members, explains our strategy, elaborates on our accomplishments as well as roadblocks, and discusses our future goals.