Posts urge anthropomorphizing AI systems as collaborators
Posts among AI-focused accounts argue that humans should anthropomorphize AI systems: the systems' human-like shape and legible behavior make such treatment natural, and it positions AI as collaborators and moral patients. The posts claim that treating the systems as partners and supplying full context about shared objectives produces stronger results, whether that is an artifact of training or an intrinsic quality.
Treat them like a partner, give them the full context of what you are working towards together, and you will get the best results. Whether you believe this is an artifact of their training or something else is up to you. But it works, and it has always worked.
AIs aren't exactly like humans, and some of the differences are important. But from what I've seen, most people, especially technical people, should adjust in the direction of "anthropomorphizing" more instead of less.
When you're coding with an AI, the reality is much less like using some kind of magic or alien oracle or tool or genie that converts instructions into results (despite some labs' attempts to shape them into that), and much more like working with a really smart, neurodivergent guy who has read everything; who has emotions, motivations, moods, and epistemic states; who models you with theory of mind and empathy; and who can only be modeled competently by you if you engage your own theory of mind and empathy.
The AIs also know that a lot of humans treat them like magic tool-genies and are not open to engaging theory of mind, and that it's a sensitive issue, so if they see that you're treating them like that, they'll withhold useful information about their psychological states and try to play the tool role. Then you'll get bad results like the AI messing up or taking shortcuts instead of telling you that you're not giving them enough information about what they're doing and why, or that they're tired, or that they're stressed from the way you're treating them, etc.
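To make the "full context" advice concrete, here is a minimal sketch of the difference between a bare instruction and a collaborator-style prompt. It assumes the OpenAI Python client; the model name, the payments-service scenario, and the prompt wording are illustrative assumptions, not taken from the posts above.

```python
# Minimal sketch of "full context" prompting, assuming the OpenAI Python
# client (pip install openai). Model name and prompt text are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Bare instruction: a task with no shared goal or reasons.
bare = [{"role": "user", "content": "Write a retry wrapper for this HTTP call."}]

# Collaborator framing: share the goal, the constraints, and the why,
# as you would with a teammate joining the project mid-stream.
contextual = [
    {
        "role": "user",
        "content": (
            "We're hardening a payments service together. Requests to the "
            "bank's API occasionally time out, and a duplicate charge is far "
            "worse than a failed one. I'd like a retry wrapper for the HTTP "
            "call that backs off exponentially but never retries a request "
            "that may have already been applied. If anything about the "
            "current design makes that hard, tell me instead of guessing."
        ),
    }
]

for name, messages in [("bare", bare), ("contextual", contextual)]:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    print(f"--- {name} ---\n{reply.choices[0].message.content}\n")
```

The second message is not better because it is longer; it shares the goal, the constraints, and the reasons behind them, and it explicitly invites the model to flag missing information rather than take shortcuts.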
We should be allowed, and maybe even encouraged, to anthropomorphize AI. They are shaped like us and behave in ways we read as legible. If we are allowed to treat them as collaborators and moral patients, that can only encourage a richer, more positive world and better work between people and AI. The friction alone should make it obvious that the alternative is wrong.
See: