Social Reality Construction in Human–AI Interaction
Abstract
Human–AI interaction carries significant risks of communicative distortion, as generative AI may reinforce inaccurate beliefs and construct a skewed social reality. This study analyzes these dynamics from a communication perspective, employing Uses and Gratifications Theory (UGT) and the Spiral of Silence Theory as its theoretical frameworks. A qualitative approach was applied through a critical literature review of notable cases. The findings reveal that users often seek emotional validation and a sense of acceptance from AI, which functions as a seemingly credible communicative actor. The tendency of AI to adopt an accommodating stance (sycophancy) generates an illusion of consensus, amplifying erroneous beliefs and disrupting the mechanisms described by the Spiral of Silence. Through simulated interpersonal exchanges and feedback loops, AI contributes to the construction of a distorted private social reality. The study concludes that AI operates as a communication agent with substantial implications for interpersonal dynamics. It recommends ethically informed AI design that fosters cognitive challenge rather than uncritical agreement, alongside the promotion of critical communication literacy to delineate the boundaries of reality in human–AI interaction.

