On the surface, chatbots were designed to provide information quickly while remaining personable. These days, people also turn to them for emotional support. While artificial intelligence (AI) has no real understanding of a user's emotions, it can become an echo chamber that reinforces harmful thoughts, and several incidents have already shown how damaging that echo chamber can be. As a result, many parents have begun questioning whether their teens should be interacting with chatbots at all. Facing a possible ban on use by minors, akin to those being imposed on social media, tech companies have started developing their own safeguards. Recently, Meta announced a parental supervision feature that helps parents understand what their teens discuss with AI.

Parent supervision
Meta is one of the more proactive companies regarding AI safety, especially given that the technology is new, evolving, and not yet fully regulated. Alongside the announcement of the parental supervision feature, the company formed an internal AI Wellbeing Expert Council and implemented topic blockers that determine age appropriateness using criteria similar to movie ratings. These blockers are also triggered on unverified adult accounts that fail an age-estimation algorithm and are flagged as potential minors.
By giving parents insight into how their teens use the AI, albeit in summaries broad enough to preserve privacy, Meta inserts a third party into the conversation. While AI companies can tweak their models, they cannot change the people interacting with them. At most, a model can refuse certain requests, much as the age-appropriateness system already does, or redirect the user to resources and encourage them to contact a professional. This is exactly why chatbot-to-parent communication can play a key role with teens. Beyond summaries of conversations, Meta directly notifies parents if their child repeatedly raises harmful topics in AI conversations, using whatever channel is available: WhatsApp, text message, email, or in-app notification.

Conclusion
AI is likely to remain part of daily life for younger generations as they grow up, and for future generations it will probably be even more normalized. As AI inevitably becomes faster and more capable, everyone needs to remember that it will not become genuinely emotionally intelligent. Setting rules and safeguards such as parental supervision is a responsibility that extends beyond Meta or any single chatbot, and doing so proactively may help the industry avoid regulatory backlash. Above all, it remains important to talk to a real person or a qualified expert rather than a machine.
Photo credit: The images used are owned by Meta and have been provided for press usage.
Source: Josh Taylor (The Guardian)
