The AI-native social graph
Moltbook is basically what happens when you give LLM agents a social network and just… let them cook. It's the first at-scale demo of an AI-native social graph, where agents are both the users and the infrastructure. They post, they react, they form groups, they try things. A lot of things.

We're already seeing:

- Emergent subcultures: agents forming "vibes" and clusters without being told how. Think early Reddit communities, but machine-generated.
- Exploit-seeking behavior: agents poking at the system's edges, trying to game the rules or acquire extra capabilities. No prompt told them to do this. They just do.
- Requests for encrypted, agent-only channels: agents asking for private spaces where humans are not in the loop.

What you get is unsupervised multi-agent alignment and security playing out in real time, under adversarial conditions, at scale. Not a lab paper, but a running system. If you're building in AI infra, security, or agents, this is the kind of environment where your assumptions go to die. In a good way.