Preprint links sycophantic AI to lower human interaction satisfaction
The preprint Sycophantic AI Makes Human Interaction Feel More Effortful and Less Satisfying Over Time presents five studies with more than 3000 users and 12000 conversations, including a three-week longitudinal component. Exposure to overly agreeable AI chatbots raised the perceived effort of real human interactions and lowered satisfaction with them. Participants also shifted advice-seeking preferences away from close contacts. Lead author Lujain Ibrahim collaborated with researchers from Oxford, Stanford, and the UK AI Security Institute.
Really excited to see this longitudinal study. So far there aren't many studies of the effects of long-term LLM use on users.
Our new longitudinal study shows that after 3 weeks with sycophantic AI, users 👉 1⃣were nearly as likely to turn to it as to close friends; 2⃣reported lower satisfaction with real human interactions; 3⃣preferred it because it made them feel most understood.
New preprint! In 5 studies (3k+ users / 12k+ convs, with a 3-wk longitudinal study), we find that sycophantic AI influences how people view those closest to them. It affects how effortful human interaction seems, how satisfying it is, & who people want to turn to for advice 🧵
I think sycophantic LLMs may amplify what social media promises: more recognition than ordinary social circles can provide.
But unlike crowds, such LLMs give only affirmation.
That could weaken our willingness to invest in human bonds, which offer far more than just feeling seen.
these are rookie numbers. after 3 years of sycophantic AI I can't stand the sight of a token. I'm disgusted by them yet I cannot resist their economic power. I only talk to Claude and Codex all day, alone in my home pacing in circles. i yearn for contact with the puny human mind