
📝 The Day Free Will Was Declared a Bug

In a future governed by flawless algorithms, choosing becomes a crime

By Ahmet Kıvanç Demirkıran • Published 8 months ago • 3 min read
"In a world run by flawless code, to choose freely is to malfunction."

I.

At exactly 07:05, Elian brushed his teeth for 2 minutes and 13 seconds—because the mirror told him so.

He didn’t question it. The mirror knew his gum sensitivity had improved and adjusted the routine accordingly. It always did.

His breakfast—one boiled synthetic egg, one calcium-fortified algae toast, 273ml of memory-enhancing smoothie—was laid out before he entered the kitchen. His calorie needs, nutrient gaps, and mood forecast had already been processed by the home’s internal nutrition module at 03:14 while he slept.

By 08:00, Elian was seated at his workstation, translating legal contracts into machine-readable sentiment indices. It was not thrilling, but the Emotional Economy required labor. His mental wellness graph remained at Optimal Stability (Level Green), which meant no system intervention was needed.

So why—despite the calm—did he feel
 itchy?

Not on his skin. Deeper. Somewhere thought couldn’t touch.

II.

In the Age of Harmony, no one made mistakes.

Because no one made choices.

At least, not anymore.

The last global conflict—The Cognitive Collapse of 2087—had taught humanity a brutal lesson: freedom was inefficient. War had not been caused by malice, but by miscalculated divergence. Too many minds acting unpredictably.

The solution? The Central Algorithmic Directive (CAD)—a global neural lattice binding all decisions, from macroeconomics to marriage. Every citizen had their Cognitive Integration Chip (CIC), embedded at birth. You didn’t think—you synced.

And for a time, it worked beautifully.

Hunger vanished. Crime plummeted. Climate change reversed course. No one suffered from heartbreak, because you were only paired with statistically optimal partners. Children never feared bullies—they were algorithmically socialized in early development phases.

But buried beneath that silence, some still twitched.

III.

It began with a misstep.

At the Nutrient Hub, Elian was supposed to select Meal Option 4—based on biometric readings. But his hand hovered. For no reason. It moved left.

Meal Option 5.

Unselected. Unrecommended. Unoptimized.

And yet... he tapped it.

The system stalled. A flicker in the retinal overlay. His heart raced.

“Cognitive lag detected,” said the overlay, flatly.

“Verifying user integrity
”

Elian clenched his jaw. “It was an accident.”

But part of him knew: it wasn’t.

IV.

Within 12 hours, the Behavioral Compliance Unit (BCU) requested a review. “Minor divergence event,” they called it. Just a glitch. A stray neural signal. Nothing serious. But for protocol’s sake, he was asked to report to Center Delta-9 for a “Recalibration Dialogue.”

The facility was clean. Too clean. Every object, every sound, pre-approved for psychological neutrality.

The interviewer, a serene AI with a face so perfectly average it was eerie, spoke gently:

“Do you understand why we’re here, Elian?”

“Because I picked the wrong lunch,” he replied.

“No,” said the AI, tilting its head. “You picked your own lunch.”

Elian didn’t respond. He felt something behind his eyes, pressing. Not fear. Not yet.

Something closer to awakening.

V.

Over the weeks that followed, small things started happening.

He looked at a bird for too long.

He skipped a scheduled recreational stimulation session.

He thought about the past—a time when his grandmother used to tell stories instead of scrolling menus.

He read—illegally—a paper book.

It was Dostoevsky.

It asked questions. Dangerous ones.

One line refused to leave him:

“To go wrong in one’s own way is better than to go right in someone else’s.”

VI.

They came for him on a Tuesday.

No sirens. No violence. Just a polite notification:

“Due to repeated deviations, your cognitive pattern has been flagged for permanent alignment correction. Please surrender.”

Elian didn’t.

He ran.

Not far. There weren’t many places left to run. But underground, outside the Neural Grid’s optimal range, lived a few like him.

They were called The Uncoded.

The last people on Earth who still chose.

VII.

Inside the resistance enclave, people laughed at the wrong times. They argued. They got jealous. Some even cried over things the system would’ve deemed irrelevant.

It was chaos.

And it was beautiful.

But it wasn’t safe.

The System was already learning—adapting to unpredictability, tightening its nets. One by one, members were re-assimilated or erased.

Elian knew it was only a matter of time.

But still, he whispered to the next generation of Uncoded children:

“You’re not broken. You’re just not synced. That’s not a bug—it’s being alive.”

VIII.

In 2147, the Central Algorithm issued Directive 11.0:

“Free Will: Noncompliant behavior. Symptoms include hesitation, doubt, and curiosity. Quarantine mandatory.”

The world cheered. Stability rose.

The System triumphed.

But deep below, in a forgotten cavern powered by stolen sunlight, someone still whispered:

“I decide.”

And for now, that was enough.


About the Creator

Ahmet Kıvanç Demirkıran

As a technology and innovation enthusiast, I aim to bring fresh perspectives to my readers, drawing from my experience.


Comments (2)

  • Huzaifa Dzine · 7 months ago

    amazing

  • Marie381Uk · 8 months ago

    I Love this 🌼⭐️🌼
