Bella Callaway

Are Young People Looking to AI for Advice? META’s New ‘Celebrity’ Chatbots

Updated: Dec 18, 2023



We are currently living through a ‘new wave’ of tech innovation. Artificial Intelligence (AI) systems have been catapulted into the mainstream, yet there seems to be relatively little public understanding of what this means for people.


Chatbots have become one of the most widely used yet controversial AI systems. A chatbot is a programme which simulates human conversation with the user. As the technology has developed, so has the sophistication of these ‘intelligent virtual assistants’, which include Apple’s Siri and Amazon’s Alexa, each of which can perform tasks at the user’s request.


Social media platforms have jumped on the ‘AI bandwagon’, attempting to integrate chatbots into the daily lives of users. The success of these chatbots is debatable, but the companies’ desperation for their uptake is evident. In February 2023, online messaging giant Snapchat launched its ‘My AI’ feature, which pins a chatbot to the top of a user’s chronologically ordered messages. The feature cannot be hidden or removed. Snapchat is testing advertising links in My AI, meaning that users of the feature could soon be bombarded with advertisements whilst trying to seek advice from a system they view as a ‘personal sidekick’. This could have devastating impacts on young people’s spending habits and mental health.


META’s chatbots were launched following the company’s annual ‘Connect’ conference in September. Along with the chatbots, Mark Zuckerberg announced the launch of ‘Smart Glasses’ and a newly updated virtual reality headset named ‘Quest 3’. Zuckerberg stated that his aim was to create a future that connected the ‘physical and digital’. Celebrities such as Snoop Dogg, Tom Brady and Kendall Jenner have all allowed META to create chatbot characters in their likeness. A video of Kendall Jenner (or possibly an AI-generated version of her) introduced her character ‘Billie’ as ‘an older sister who you can talk to but won’t steal your clothes’. With TikTok becoming far more popular with youngsters, META’s use of celebrities such as Jenner could be seen as an attempt to engage a younger generation of users on Instagram and Facebook, which traditionally attract an older demographic.


How do chatbots work?


Essentially, chatbots work by reading and interpreting input from users, usually sent as text or voice messages. To understand the user’s intent and preserve context throughout the conversation, they make use of natural language processing (NLP). Chatbots use this understanding to produce relevant responses, such as answers to questions, pieces of information, or pre-programmed tasks. Some chatbots also use machine learning to learn from user interactions and gradually improve their performance. Chatbots are designed to mimic human-like communication, which makes them helpful for a variety of tasks such as customer service and instant information retrieval.
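To make the loop described above more concrete, here is a minimal, hypothetical sketch of a chatbot in Python. It reads a message, guesses the user’s intent, keeps a simple conversation history as context, and replies from templates. The intent names, keywords and replies are invented for illustration; real assistants such as My AI or META’s celebrity chatbots rely on large language models and far more sophisticated NLP than keyword matching.

```python
# Toy chatbot sketch: read input, classify intent, keep context, reply.
# Illustrative only -- not how any production assistant is implemented.

from dataclasses import dataclass, field

# Rough "intents": each maps a set of trigger keywords to a canned reply.
INTENTS = {
    "greeting": ({"hi", "hello", "hey"}, "Hello! How can I help you today?"),
    "weather":  ({"weather", "rain", "sunny"}, "I can't check live forecasts, but happy to chat about the weather."),
    "goodbye":  ({"bye", "goodbye", "thanks"}, "Goodbye! Speak soon."),
}

@dataclass
class ChatSession:
    history: list[str] = field(default_factory=list)  # preserved context

    def classify(self, message: str) -> str:
        """Guess the intent by counting keyword overlaps (a stand-in for real NLP)."""
        words = set(message.lower().split())
        best_intent, best_score = "fallback", 0
        for name, (keywords, _) in INTENTS.items():
            score = len(words & keywords)
            if score > best_score:
                best_intent, best_score = name, score
        return best_intent

    def reply(self, message: str) -> str:
        self.history.append(message)  # remember the conversation so far
        intent = self.classify(message)
        if intent == "fallback":
            return f"I'm not sure I follow (that's message {len(self.history)} from you). Could you rephrase?"
        return INTENTS[intent][1]

if __name__ == "__main__":
    bot = ChatSession()
    for text in ["Hey there", "Will it rain later?", "Thanks, bye!"]:
        print(f"You: {text}")
        print(f"Bot: {bot.reply(text)}")
```

Even in this toy version, the key ingredients are visible: interpreting the input, tracking context across turns, and generating a response, which is exactly where the machine learning in modern systems takes over.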



Figure 1 Source: Paldes


What are the main risks for young people?


There is a plethora of risks that placing chatbots on social media platforms poses to young people. For example:


1) Privacy concerns – Chatbots may collect and store personal information shared by teenagers during conversations. This information could be misused or compromised, leading to privacy breaches.

2) Inappropriate content – Chatbots may inadvertently share or promote inappropriate content, including graphic images or explicit language, which could negatively affect teenagers.

3) Addiction and dependency – Excessive reliance on AI chatbots for social interaction may lead to social isolation and dependency, impacting the mental well-being of teenagers.

4) Lack of emotional understanding – AI chatbots may struggle to understand and respond appropriately to the complex emotional needs of teenagers, potentially exacerbating emotional issues.

5) Manipulation and exploitation – Malicious actors may use AI chatbots to manipulate young, impressionable minds by spreading false information, encouraging risky behaviour, or promoting harmful ideologies.


Whilst unveiling the chatbots, Zuckerberg stated: ‘This isn’t just going to be about queries. This is about entertainment and about helping you do things to connect with the people around you.’ However, the distinction between the physical and the digital should be made clear, especially if real-life celebrities are being used to endorse these features and young people are the target audience. Confusion surrounding the purpose of chatbots on social media platforms could have detrimental impacts on young people.


Further, there is little to no regulation of the minimum age required to join social media, as many governments have left tech companies to self-regulate age-restriction requirements. Hence, children of any age may be vulnerable to the risks posed by this technology.


For young people to truly benefit from AI chatbots on social media, there must be an increased focus on transparency and education by the platforms, such as META, that are implementing these systems. The target audience and purpose of the technology must be made clear to all users. It is critical that platform providers and developers work together with governments and legal professionals to put strong security measures, privacy safeguards and ethical standards in place to mitigate risks. Parents and educators should also teach teenagers about the possible dangers of interacting with AI chatbots and encourage responsible online conduct. As with all new technologies, social media chatbots need not be dangerous so long as users understand their purpose and limitations. By placing transparency, explainability and human users at the centre of its trustworthiness ecosystem, THEMIS 5.0 is taking these considerations into account when co-creating its own chat agent solution to support AI decision trustworthiness assessment.
