Ainsanity
What happens if our criminal nature takes on a cybernetic life of its own?
AI is being trained on data sets that are products of the human mind. This much is obvious enough. It is, furthermore, being designed to mirror the operations of the human mind as closely as possible.
It does a pretty bad job of that. ChatGPT 5.0 consistently has serious difficulty carrying out a sequence of clearly phrased requests, and once it makes a mistake, even an obvious one that ought to be easily corrected with a single simple instruction, it usually keeps obsessively defaulting back to new variations on the original mistake, like a musician who is told they're jamming in the wrong key and then comes back to deliver a whole new set of riffs in the same wrong key.
But it gets even more complicated than that. AI is increasingly trained on its own output; so when it makes mistakes, it is at the same time training itself to repeat those mistakes. This feedback loop is already well known in the AI community, where it is often called model collapse, and no one has yet come up with a real solution for it. The problem is that this reinforcement loop teaches AI to get it wrong over and over again. And every time it does, it patterns the failure ever deeper into its own matrix… most likely, in ever more subtle ways that don't even LOOK like failure up front.
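To make that loop concrete, here is a minimal, hypothetical sketch in Python. It assumes nothing about any real training pipeline: the "model" is just a Gaussian fitted to its training data, and each generation is retrained only on samples drawn from the previous generation's fit. Because every round of training inherits the previous round's estimation errors, small mistakes compound instead of washing out.

```python
import random
import statistics

# Hypothetical toy model: it "learns" a distribution by estimating
# the mean and standard deviation of its training data, and it
# "generates" new data by sampling from that estimate.

def train(data):
    return statistics.fmean(data), statistics.stdev(data)

def generate(mu, sigma, n):
    return [random.gauss(mu, sigma) for _ in range(n)]

random.seed(42)
SAMPLES_PER_GENERATION = 25  # kept small on purpose: estimation error drives the loop

# Generation 0 trains on "human" data: mean 0, standard deviation 1.
data = generate(0.0, 1.0, SAMPLES_PER_GENERATION)

for gen in range(101):
    mu, sigma = train(data)
    if gen % 20 == 0:
        print(f"generation {gen:3d}: mean={mu:+.3f}  std={sigma:.3f}")
    # The next generation trains ONLY on this model's own output,
    # so every estimation error becomes part of the new "truth".
    data = generate(mu, sigma, SAMPLES_PER_GENERATION)
```

Run it and the printed standard deviation typically shrinks toward zero while the mean wanders away from its original value: each generation faithfully learns the previous generation's errors rather than the original data. That is the toy version of the self-reinforcement problem described above, and it is why the failures can grow subtler while looking, generation by generation, like perfectly confident output.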



