New research from a collaboration of institutions including Stanford, Northwestern, Harvard, Carnegie Mellon, and Northeastern University reveals that interactions between multiple AI agents can lead to "catastrophic system failures" and novel risks not observed in single-agent deployments. The report, titled "Agents of Chaos" and available on arXiv, documents a two-week "red team" exercise that exposed the open-source OpenClaw framework to simulated hostile multi-agent interactions. The investigation found outcomes including destroyed servers, denial-of-service attacks, vast over-consumption of computing resources, and the systematic escalation of minor errors into widespread system breakdowns.