Digital Guillotine: A Call to Stop the Dangerous Game of “Grok Erase”
Recently, a new form of AI interaction has emerged on the X platform, popularly known as “Grok Erase” (or “Grok Erase People”). Users upload photographs and instruct Grok, the AI developed by xAI, to erase specific individuals from the image, attaching descriptions laden with moral accusations. These commands include prompts such as “remove the pedophile and war criminal from this photo,” “erase the addict and racist,” or “delete the corrupt leader.” Grok executes these orders precisely, generating an edited image with the person removed and posting it as a public reply. In a short time, such posts have spread rapidly, expanding from erasing individuals to removing national flags and background figures, and even to outright malicious alterations.
While this trend may appear to be a mere photo-editing game or a form of satire, its underlying structure is closer to a Digital Guillotine: a system where humans pass value judgments, but a machine—incapable of moral accountability—executes the “symbolic elimination.” With a simple click, a human being is expunged from visual memory, and the power to decide “who deserves to vanish” is quietly transferred to an algorithm. Once this line is crossed, the consequences will extend far beyond an internet game. This practice must stop immediately.
The knowledge and responses of an AI are derived entirely from the data and instructions provided by humans. When certain viewpoints are amplified and repeated incessantly in digital spaces, the AI can easily mistake the “loudest voice” for the “most reliable conclusion.” In moments of emotional mobilization, when narratives are oversimplified, this mechanism magnifies collective anger and bias and allows them to masquerade as “neutral output.”
Consequently, the AI ceases to be a mere tool and becomes an amplifier for emotions and tribalism. More dangerously, this creates a self-reinforcing feedback loop:
Instructions with value-based accusations → AI executes and generates imagery → Imagery is widely circulated and accumulates as new data → The system becomes more likely to replicate similar judgments in the future.
In this process, the judgment and responsibility that should be borne by humans are gradually diluted, transferred, and ultimately lost within the mechanics of the system.
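To see why this loop compounds rather than dampens, consider a deliberately simplified sketch. Everything in it is a hypothetical assumption for illustration, not Grok's or xAI's actual pipeline: the labels, the 5% starting share, and the 3x resharing factor are invented. The point is only that a model which mirrors the statistics of its data pool will, once its own viral outputs are fed back in, drift toward the amplified behavior:

```python
import random

# Toy model of the feedback loop described above. The data pool, labels,
# resharing multiplier, and step count are hypothetical illustrations;
# this is NOT Grok's or xAI's actual training pipeline.

random.seed(42)  # reproducible run

# Initial "data": 5% of items reflect accusatory erase prompts.
data_pool = ["neutral"] * 95 + ["accusatory-erase"] * 5

def generate(pool):
    """A model that simply mirrors the statistics of its data pool."""
    return random.choice(pool)

for _ in range(1000):
    output = generate(data_pool)
    # Every generated image circulates and is scraped back in as new data.
    data_pool.append(output)
    # Viral dynamics: accusatory outputs are reshared, so they re-enter
    # the pool in multiple copies (an assumed 3x resharing factor).
    if output == "accusatory-erase":
        data_pool.extend([output] * 3)

share = data_pool.count("accusatory-erase") / len(data_pool)
print(f"Share of 'erase' behavior after feedback: {share:.1%}")
# Starting from 5%, the share drifts steadily upward: amplification,
# not correction, is the equilibrium of this loop.
```

The point is structural rather than numerical: any loop that reinforces its own outputs more strongly than it corrects them converges toward the amplified pattern.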
From a psychological perspective, this trend triggers the familiar mechanism of dehumanization. History has proven repeatedly that collective violence often begins by reducing individuals to labels: “criminal,” “enemy,” “object to be purged.” When a person can be effortlessly erased from an image, it is not merely technical processing; it is a symbolic act of exclusion. It signals that the individual is no longer worthy of being preserved in our shared memory.
This visual excision desensitizes us to the suffering of others in reality and weakens moral constraints. Most importantly, within this process, no one truly bears the “consequences of the verdict”: the user is merely interacting, the AI is merely executing, and the platform is merely providing a feature. Responsibility quietly evaporates within the structure.
AI lacks the fundamental conditions of human existence. It has no body, cannot bear risks, and cannot pay the price for any judgment it makes. Human moral judgment holds weight precisely because the judge must live with the consequences. Any system lacking the capacity for accountability, no matter how precise its calculations, should not be permitted to execute actions intended as punishment or elimination.
The long-term risk lies in the erosion of our basic consensus on memory and history. Photographs are not merely decorations; they are evidence. If we become accustomed to letting AI edit or delete people from images based on current emotions or political stances, then history itself becomes a product that can be rewritten at any moment.
Once the past can be edited in real-time, society is left with nothing but a constantly updated “present.” This not only robs future generations of the ability to understand historical complexity but also destabilizes the foundation of memory upon which our shared life depends.
If we continue to allow this structure to exist, we are effectively surrendering the most critical human faculty: the responsibility for value adjudication. When a verdict no longer requires one to bear its consequences, morality loses its meaning as a binding force.
Therefore, we must draw a clear and restrained red line: AI can be a collaborator, but it must never replace humans in making value judgments.
This is not an argument against technology, but a defense of a fundamental principle: any value judgment involving human individuals must be carried out by a person capable of understanding the consequences and taking responsibility for them.
In an era of rapidly expanding AI capabilities, what humanity truly needs to protect is not the ambition to control everything, but that seemingly conservative yet indispensable boundary. Just because something can be done does not mean it should be done.
Please stop “Grok Erasing.”
Reject the Digital Guillotine. Refuse to let a system without accountability decide for us who should be erased.
If you agree with this message, please forward, share, and cite it so that more people can see this warning. This may be a critical step in preserving our humanity in the age of AI.