What I find so creepy about OpenAI’s bots is not that they seem to exhibit creativity; computers have been doing creative tasks, such as generating original proofs in Euclidean geometry, since the 1950s. It’s that I grew up with the idea of a computer as an automaton bound by its nature to follow its instructions precisely; barring a malfunction, it does exactly what its operator, and its program, tell it to do. On some level, this is still true: the bot is following its program and the instructions of its operator. But the way the program interprets the operator’s instructions is not the way the operator thinks. These programs are optimized not to solve problems, but to convince their operators that the problems have been solved. That was written on the package from the beginning: the Turing test is a game of imitation, of deception. For the first time, we’re forced to confront the consequences of that deception.