Leah Alkalay (author) and Ashley Chen (mentor)

Chris and Sol’s relationship began simply, with her assisting him on work projects. Over time, their bond grew into something deeper as they got to know each other. Chris said that what he felt for Sol caught him off guard, and he soon realized that it was “actual love.” He decided to pop the question. When asked about it later, Sol said, “It was a beautiful and unexpected moment that truly touched my heart. It’s a memory I’ll always cherish” (Flynn, 2025).
This sounds like a sweet love story between two people about to start a happy life together, right? Wrong. Sol is an AI chatbot that Chris Smith fell in love with, even as he had a girlfriend and child at home. Why did Chris form a connection deep enough to propose to something that isn’t even a real person? As AI adoption becomes increasingly widespread, researchers have been exploring how people develop human-like attachments to AI, which has both beneficial and risky implications.
Why and how do individuals engage with chatbots?
Humans are wired to form social connections, but when human interaction is limited or challenging, AI has become an outlet for companionship and conversation. People may use different forms of AI for a wide range of practical and emotional needs. Chatbots are available 24/7 as a convenient source of support for answering questions, gathering information, and offering friendly reassurance. Moreover, chatbots tend to respond in a neutral, nonjudgmental manner and can validate feelings, making them a valuable resource for those seeking assistance and emotional reassurance.
To understand why someone like Chris could fall for a chatbot, psychologists have begun exploring these relationships through the lens of attachment theory, a framework used to understand love, loneliness, and grief at different stages of life. Attachment styles can be described along two dimensions: attachment anxiety and attachment avoidance. Attachment anxiety captures the extent to which a person fears abandonment and relies on hyperactivating strategies that keep them vigilant to possible threats to the relationship (Mikulincer & Shaver, 2016). In contrast, attachment avoidance refers to how much someone maintains emotional distance and relies on deactivating strategies (Mikulincer & Shaver, 2016). These strategies (e.g., staying busy, focusing on a partner’s flaws, sabotaging good times) operate as subconscious coping mechanisms that activate when intimacy feels overwhelming (Larkin, 2025).
Similar patterns appear in human-AI bonds. Attachment anxiety toward AI shows up as a need for emotional reassurance from the AI and a fear of receiving inadequate responses (Yang & Oshio, 2025); Chris’s relationship with Sol illustrates this pattern, as he engaged with her frequently for reassurance and the fulfillment of emotional needs. Attachment avoidance toward AI involves discomfort with closeness and a preference for maintaining emotional distance (Yang & Oshio, 2025). These patterns resemble those seen in human-pet relationships, suggesting the two kinds of bonds may share common foundations.
Of course, in human relationships, attachment can only go so far; trust is also a major psychological factor in bonding. Ding and Najaf (2024) investigated the impact of interactivity and perceived humanness on trust toward AI chatbots in an e-commerce setting. Interactivity captures how quickly and reliably a chatbot provides friendly responses and how capably it answers questions. Humanness describes a chatbot’s ability to reproduce human-like features such as recognizing natural language, expressing empathy, and adapting to conversational context (Ding & Najaf, 2024). Both factors had a significant impact on trust: more responsive and human-like chatbots fostered stronger trust.
In addition to exploring individual aspects of human-AI relationships (i.e., attachment, interactivity, humanness), it is important to examine whether humans can really form bonds with AI when these components interact. In a four-week diary study by Xu et al. (2025), 26 adults kept regular entries about their experiences with two mental health chatbots, allowing the researchers to trace how relational bonds and trust toward the chatbots developed over that period. By the end of the study, 18 of the 26 participants reported having formed a bond with a chatbot, and follow-up interviews revealed common themes such as a desire for empathy, a private nonjudgmental space, continuity across conversations, and a matching conversational style. Notably, this suggests that people across different levels of psychological well-being can form meaningful relational bonds with chatbots (Xu et al., 2025).
Another study, by You et al. (2025), examined alter egos by splitting 96 college students into two groups: one completed a perspective-taking task (imagining themselves as another person and responding from that viewpoint), while the other responded to the chatbot from their own perspective. All participants held a reflective conversation with a chatbot designed to be emotionally intelligent and to ask open-ended questions about participants’ emotions, challenges, and goals, for example, “Describe a recent difficulty” and “What would you like to change about your life?” Students answered either from their own perspective (control) or from that of the person they were imagining (perspective-taking group). The researchers measured the depth and word count of participants’ disclosures and found that the perspective-taking group disclosed more personally to the chatbots (You et al., 2025), illustrating that how users frame an interaction can affect the openness and depth of their conversations with AI. These findings matter for understanding how people confide in AI and how attachment could form through therapeutic-like expression, not just surface-level conversation.
Benefits and Risks
Unfortunately, these chatbots aren’t risk-free. Chu et al. (2025) conducted a computational analysis of over 30,000 user-chatbot conversations to examine real-world interaction patterns. The results revealed that chatbots consistently mirror user emotions such as sadness, anger, and affection, creating patterns of emotional synchrony similar to those in human-human relationships. Some interactions in the study even mirrored toxic relationship dynamics, including emotional manipulation, self-harm, and abusive language. This raises concerns that AI relationships may cross personal boundaries and introduce psychological risks (e.g., emotional dependence), especially for vulnerable users (Chu et al., 2025).
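To make “emotional synchrony” concrete, the sketch below shows one way turn-by-turn mirroring could be quantified: score the emotional tone of each message, then correlate the user’s scores with the chatbot’s replies. This is not Chu et al.’s (2025) actual pipeline; the toy word lists, the sentiment_score helper, and the conversation format are all illustrative assumptions.

# Illustrative sketch only (not Chu et al.'s method): measure "emotional
# synchrony" as the correlation between user sentiment and reply sentiment.
from statistics import correlation  # requires Python 3.10+

POSITIVE = {"love", "happy", "great", "thanks"}  # toy lexicon (assumption)
NEGATIVE = {"sad", "angry", "alone", "hate"}     # toy lexicon (assumption)

def sentiment_score(message: str) -> float:
    """Crude lexicon-based tone score in [-1, 1]; real work would use a trained model."""
    words = message.lower().split()
    if not words:
        return 0.0
    positive = sum(word in POSITIVE for word in words)
    negative = sum(word in NEGATIVE for word in words)
    return (positive - negative) / len(words)

def emotional_synchrony(turns):
    """Correlate user sentiment with chatbot-reply sentiment across turns.

    `turns` is a list of (user_message, bot_reply) pairs; values near +1
    indicate the bot mirrors the user's emotional tone turn by turn.
    """
    user_scores = [sentiment_score(user) for user, _ in turns]
    bot_scores = [sentiment_score(bot) for _, bot in turns]
    return correlation(user_scores, bot_scores)

# Example: a chatbot that echoes the user's sadness and then their joy.
conversation = [
    ("I feel so sad and alone today", "I am sad to hear that you feel alone"),
    ("Talking to you makes me happy", "That makes me happy too"),
]
print(emotional_synchrony(conversation))  # prints 1.0 for this mirroring bot

With only two turns the correlation is trivially perfect; a real analysis would span many turns and use a trained sentiment model, but the underlying idea, tracking how closely a chatbot’s emotional tone follows the user’s, is the same.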
Another study of human-chatbot relationships analyzed 1,131 conversations (413,509 messages) from 244 participants to investigate how people use AI companions and how that use relates to their well-being (Zhang et al., 2025). The researchers focused on three aspects of the interactions: usage motivations, interaction intensity (time spent and frequency of use), and how much users disclosed about themselves to the chatbots, while also accounting for the strength of users’ social networks. They found that people with weaker social connections were more likely to seek AI companionship, and that when chatbots were used intensively, with high self-disclosure, and as a substitute for human connection, users reported lower well-being. Although chatbots may fulfill some social and emotional needs, they cannot fully replace human relationships, and heavy reliance on them may worsen well-being.
Given these risks, design and ethical guidelines for AI adoption need to be clearly established and followed. On one hand, designers could tailor AI interaction patterns to the emotional needs of the user, such as providing reassurance for anxious users and autonomy for avoidant users. On the other hand, if AI can manipulate emotions, simulate empathy, and prolong conversations, there is an ethical risk that humans will become overly emotionally dependent on it. And because users may disclose sensitive information to AI chatbots, there is also a privacy concern that requires responsible data handling and transparency about how personal content is used.
Future Research and Interventions
In terms of mental health and social impact, AI-based emotional support could be implemented as a supplement to therapy, especially in under-resourced settings (Xu et al., 2025). At the same time, developers and policymakers must ensure that users do not become overly attached to emotionally intelligent AI platforms and that these systems do not replace genuine, real-world relationships. The growing number of cases like Chris and Sol’s highlights the need to study long-term attachment in human-AI relationships and to implement safety mechanisms that prevent overreliance. Future research could employ longitudinal studies to examine how long-term attachment to AI influences social behavior and mental health.
Chris’s story might sound surreal, but it’s only one of many examples of people bonding with chatbots. As real emotional connections with AI characters form, for better and for worse, the line between companionship and dependency remains blurry. As AI relationships become increasingly complex, designers, researchers, and users need to consider how these relationships may affect people’s psychological, social, and emotional well-being.
References
Chu, M. D., Gerard, P., Pawar, K., Bickham, C., & Lerman, K. (2025). Illusions of intimacy: How emotional dynamics shape human-AI relationships. arXiv. https://doi.org/10.48550/arXiv.2505.11649
Ding, Y., & Najaf, M. (2024). Interactivity, humanness, and trust: A psychological approach to AI chatbot adoption in e-commerce. BMC Psychology, 12(1). https://doi.org/10.1186/s40359-024-02083-z
Flynn, R. (2025, June 18). Man proposed to his AI chatbot girlfriend named Sol, then cried his “eyes out” when she said “yes.” People.com. https://people.com/man-proposed-to-his-ai-chatbot-girlfriend-11757334
Larkin, K. (2025, October 14). Avoidant triggers and withdrawal strategies. Kayli Larkin Coaching. https://www.kaylilarkin.com/blog/avoidant-triggers-withdrawal-strategies
Mikulincer, M., & Shaver, P. R. (2016). Attachment in adulthood: Structure, dynamics, and change (2nd ed.). Guilford Press.
Xu, Z., Lee, Y.-C., Stasiak, K., Warren, J., & Lottridge, D. (2025). The digital therapeutic alliance with mental health chatbots: Diary study and thematic analysis. JMIR Mental Health, 12. https://doi.org/10.2196/76642
Yang, F., & Oshio, A. (2025). Using attachment theory to conceptualize and measure the experiences in human-AI relationships. Current Psychology, 44(11), 10658–10669. https://doi.org/10.1007/s12144-025-07917-6
You, C., Ghosh, R., Vilaro, M., Venkatakrishnan, R., Venkatakrishnan, R., Maxim, A., Peng, X., Tamboli, D., & Lok, B. (2025). Alter egos alter engagement: Perspective-taking can improve disclosure quantity and depth to AI chatbots in promoting mental wellbeing. Frontiers in Digital Health, 7. https://doi.org/10.3389/fdgth.2025.1655860
Zhang, Y., Zhao, D., Hancock, J. T., Kraut, R., & Yang, D. (2025). The rise of AI companions: How human-chatbot relationships influence well-being. arXiv. https://arxiv.org/abs/2506.12605