LLM privacy policies keep getting longer, denser, and nearly impossible to decode
A recent study finds that privacy policies for large language models (LLMs) are becoming longer, more complex, and harder to understand. The average LLM policy has grown to 3,346 words, 53% longer than the average policy for general software. Reading difficulty has climbed to a level typically expected of advanced college students, making it hard for users to work out how their data is handled. The policies are also riddled with vague wording, leaving users uncertain about how their information is processed and which conditions trigger particular data practices.
People expect privacy policies to explain what happens to their data. What they get instead is a growing wall of text that feels harder to read each year. In the new study, researchers reviewed privacy policies for LLMs and traced how they have changed over time.