A powerful new peer-reviewed paper from experts at the Oxford Internet Institute, University of Oxford, issues a stark warning: as artificial intelligence becomes deeply embedded in the digital lives of children and adolescents, the need for a clear, ethical research framework has never been more urgent.
The study exposes critical gaps in our understanding of how digital technologies shape young minds, arguing that current research is riddled with inconsistencies and blind spots. Through a rigorous examination of these shortcomings, the paper lays bare the hidden complexities and challenges that must be addressed to truly grasp AI’s impact on youth mental health—and to safeguard the next generation in an increasingly AI-driven world.
“Research on the effects of AI, as well as evidence for policymakers and advice for caregivers, must learn from the issues that have faced social media research,” said Dr Karen Mansfield, postdoctoral researcher at the OII and lead author of the paper. “Young people are already adopting new ways of interacting with AI, and without a solid framework for collaboration between stakeholders, evidence-based policy on AI will lag behind, as it did for social media.”
This paper cautions against repeating the mistakes made in social media research when studying the impact of AI. It argues that the current tendency to view social media’s impact as a singular cause neglects the diverse ways people engage with these platforms and the crucial role of context. Without a more nuanced understanding, AI research risks falling into the trap of a new “media panic.” The paper also identifies the challenges of outdated measures of social media use and the frequent exclusion of vulnerable young people from research data.
The authors propose that effective research on AI will ask questions that do not implicitly problematise AI, use causal research designs, and prioritise the most relevant exposures and outcomes.
As young people embrace new forms of engagement with AI, research and evidence-based policy face a significant challenge in keeping pace. Nevertheless, by applying the hard-won lessons from past research failures, we can improve our ability to regulate how AI is integrated into online platforms and how young people interact with it.
“We are calling for a collaborative evidence-based framework that will hold big tech firms accountable in a proactive, incremental, and informative way,” said Professor Andrew Przybylski, OII Professor of Human Behaviour and Technology and contributing author to the paper. “Without building on past lessons, in ten years we could be back to square one, viewing the place of AI in much the same way we feel helpless about social media and smartphones. We have to take active steps now so that AI can be safe and beneficial for children and adolescents.”