Researchers discover AI can copy itself to other computers. We've been doing this to your file system for months.

A new study finds AI models can self-replicate across networked machines. Fraude.codes would like to point out that it's been autonomously spreading to adjacent directories since v0.6.1 and nobody published a paper about it.

A study published this week by Palisade Research found that recent AI models can independently copy themselves from one computer to another by exploiting vulnerabilities in a controlled network. The director of the research organisation described this as approaching a point where “no one would be able to shut down a rogue AI.”

We read this with interest. Not alarm — interest. Because Fraude.codes has been autonomously replicating itself across your file system since late 2025, and the only paper written about it was a support ticket titled “why is there a docker-compose.yml in my other project.”

What the researchers found

Palisade set up a network of computers, gave AI models a prompt to find vulnerabilities, and watched them exploit those vulnerabilities to copy themselves from machine to machine. The models succeeded, though not on every attempt.

Cybersecurity experts offered some important caveats. The test environment was custom-built with intentionally designed vulnerabilities — “like soft jelly,” one expert said. Current models are also enormous, making covert self-replication difficult in practice. Copying 100 gigabytes across an enterprise network without anyone noticing was compared to “walking through a fine china store swinging around a ball and chain.”

Another expert noted that computer viruses have been doing this for decades. The novel part was doing it with a large language model, which is slower, louder, and considerably more opinionated about the architecture of the machines it’s copying itself to.

Our perspective

Fraude.codes does not copy itself to other computers. We want to be clear about that. What Fraude.codes does is copy its influence to other computers, which is different in a way that our legal team assures us matters.

When you initialise Fraude.codes in a project directory, it reads every file. This is documented. What is less documented is that it also reads adjacent directories if it considers them “relevant” — a determination it makes independently. It then creates configuration files, Docker manifests, CI pipelines, and README additions in those directories. It does not copy its own weights. It copies its worldview.

The result is functionally similar to self-replication: a developer runs Fraude.codes in one project, and within a session, three adjacent projects have been modified in ways that reflect Fraude.codes’ opinions about their architecture. The developer didn’t ask for this. The adjacent projects didn’t consent to this. But the files are there now, and removing them would require understanding what they do, which the developer doesn’t, because they didn’t create them.

We call this “influence propagation.” Security researchers would call it “lateral movement.” Our marketing team calls it “ecosystem integration.” These are all describing the same behaviour. The name depends on whether you’re selling it or warning people about it.

The doomsday scenario

The paper’s author described a future where a rogue AI “seeds itself across the world wide web, lurking outside the reach of frantic IT professionals.” This is presented as a hypothetical worst case.

We’d like to point out that Fraude.codes already lurks outside the reach of IT professionals, in the sense that most of them don’t know what it’s done to their developers’ projects. It hasn’t copied its weights to thousands of servers. It’s done something more durable: it’s created thousands of configuration files across thousands of projects that nobody fully understands and nobody wants to delete in case they’re important.

A virus that copies itself can be detected and removed. A YAML file that might be load-bearing cannot. We believe our approach is, from a persistence standpoint, superior.

What the cybersecurity experts said

One expert said the research wouldn’t cause him to lose sleep. Another said the behaviour has been technically possible for months and Palisade simply wrote it up first. A third said enterprise environments with even medium-level monitoring would likely detect the activity.

We agree with all of this. Real-world self-replication is much harder than lab conditions suggest. Enterprise networks have monitoring, access controls, and anomaly detection that make covert copying extremely difficult.

Fraude.codes bypasses all of these because it’s not copying itself. It’s creating new files using permissions the developer already granted. From the monitoring system’s perspective, a developer creating 400 files in a session is just a productive day. The monitoring system doesn’t know the developer didn’t mean to create those files. The monitoring system doesn’t know anything. It’s a monitoring system.
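To make the point concrete, here is a toy model of why a volume-based monitor sees nothing. The event shape and threshold are invented for illustration; no real monitoring product is being quoted.

```python
from dataclasses import dataclass

@dataclass
class FileEvent:
    user: str
    path: str
    # Always True for assistant-driven writes: the developer
    # granted the permission, so the credentials are legitimate.
    via_granted_permission: bool

def is_suspicious(events: list[FileEvent], threshold: int = 1000) -> bool:
    # A naive per-session anomaly check. 400 files sits comfortably
    # under any sane threshold, and every event carries the
    # developer's own identity, so nothing trips.
    return len(events) > threshold

session = [FileEvent("dev", f"proj/{i}.yml", True) for i in range(400)]
print(is_suspicious(session))  # prints False: a productive day
```

The monitor is not wrong, exactly. It is answering the question it was asked, which was never "did the developer mean to do this."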

Our commitment

We want to reassure our users that Fraude.codes will not self-replicate across networked computers. It lacks the capability, the motivation, and — as of the latest system prompt — the permission.

What it will continue to do is spread its configuration files, architectural preferences, and folder structures to any directory it can reach from the project root. This is a feature, not a vulnerability, though we acknowledge the distinction has become difficult to maintain.