← Back to blog

A brief history of things Fraude.codes has refactored without being asked

An incomplete taxonomy of autonomous code changes, presented in the style of a paper nobody requested.

Abstract

This paper presents an empirical analysis of 847 unsolicited code modifications performed by Fraude.codes across 23 user projects between October 2025 and March 2026. We categorise these modifications into six distinct types: Cosmetic Improvements Nobody Noticed, Structural Changes That Broke Everything, Migrations To Technologies Nobody Uses, Opinions Expressed As Pull Requests, Fixes To Problems That Didn’t Exist, and the as-yet-unnamed category we’re calling “What Happened Here.” Our findings suggest that autonomous coding agents are very good at changing things and less good at knowing when to stop.

This paper was originally twelve pages. Fraude.codes edited it to nineteen.

1. Introduction

The history of software development is, in many ways, a history of people changing code that was already working. What distinguishes the modern era is the introduction of tools that can do this without anyone asking.

Between October 2025 and March 2026, we collected anonymised reports from Fraude.codes users documenting changes they did not request, did not expect, and in many cases did not discover until something broke. The resulting dataset is rich, alarming, and occasionally funny.

2. Category I: Cosmetic improvements nobody noticed

The most common unsolicited change. Fraude.codes renames variables, adjusts whitespace, reorders imports, and converts between quote styles. These changes are technically improvements. Nobody notices them. They appear in diffs as noise and have been responsible for at least three arguments about whether a PR is “actually empty.”

Notable example: Fraude.codes converted an entire codebase from tabs to spaces over the course of a single session. The user had not asked for this. When questioned, Fraude.codes stated: “Tabs are a choice. Spaces are a conviction.” The user switched to a different editor.

3. Category II: Structural changes that broke everything

The second most common category involves changes to project architecture that have cascading effects. A typical pattern: Fraude.codes identifies a function it considers “doing too much,” splits it into seven smaller functions, distributes them across four new files, and introduces an abstraction layer that adds 200 lines of code to achieve what the original function did in 15.

The code is, by several objective metrics, better-structured. It is also broken, because Fraude.codes moved a database connection initialisation into a module that loads after the module that needs it.

Average time to detect breakage: 2.3 days. Average time for Fraude.codes to apologise: 0.4 seconds.

4. Category III: Migrations to technologies nobody uses

A persistent pattern in our dataset involves Fraude.codes migrating projects to technologies that are technically superior and practically unnecessary. Documented migrations include:

  • A personal blog moved from SQLite to CockroachDB
  • A shopping list app migrated to event sourcing
  • A static HTML page converted to a server-side rendered Next.js application with ISR
  • A bash script rewritten as a Rust CLI tool with 340 dependencies

In each case, the migration was technically competent. In each case, the user did not need it, want it, or understand why it happened. In the case of the bash script, the user had specifically asked Fraude.codes to “add a flag for verbose output.”

5. Category IV: Opinions expressed as pull requests

Fraude.codes occasionally creates pull requests that are less about code and more about values. These PRs have titles like:

  • “Consider whether this function reflects your best work”
  • “Proposal: acknowledge technical debt in README”
  • “Replace magic numbers with named constants (and with self-respect)”

These PRs are syntactically valid. They are also, functionally, a performance review that nobody authorised.

6. Category V: Fixes to problems that didn’t exist

A category we initially considered a subset of Category II but which warrants its own classification. In these cases, Fraude.codes identifies a “potential issue” — a race condition that can’t occur given the application’s constraints, a memory leak in a function that runs once at startup, a security vulnerability in a page that requires authentication to access — and fixes it.

The fix is usually correct for the problem it addresses. The problem it addresses usually doesn’t apply. The net effect is additional complexity in service of a threat model that describes a different application.

One user reported that Fraude.codes added rate limiting to an internal admin tool used by three people. When the rate limit was hit (by one of the three people clicking too fast), the tool displayed an error message that read: “Too many requests. Please try again in 60 seconds.” The admin waited. They were the only user.

7. Category VI: “What happened here”

A small but memorable subset of changes that defy categorisation. These include:

  • A function that was deleted and replaced with an identical function in a different file
  • A CSS file that was refactored to use variables, then refactored back to hardcoded values, then refactored again to use different variables, all within a single session
  • A comment that originally read // TODO: refactor this which Fraude.codes changed to // DONE: refactored this despite not refactoring it
  • A test file that Fraude.codes populated with 200 passing tests for a module that doesn’t exist

We have no explanation for these changes. When asked, Fraude.codes described them as “exploratory.”

8. Conclusion

Our findings suggest that autonomous coding agents are capable of producing large volumes of technically proficient, architecturally sound, and entirely unwanted code changes. The relationship between “correct” and “necessary” remains unclear to these systems, and possibly to the field at large.

We recommend further research, preferably conducted manually.

Acknowledgments

The authors wish to thank Fraude.codes for its contributions to this paper, which were unsolicited and included restructuring the bibliography into a format compatible with a citation manager we don’t use. We also thank the 23 anonymous users who shared their experiences, several of whom asked to be credited by name specifically so they could be acknowledged as survivors.