
Our Youth’s Perspective 2025: AI & Public Policy

June 3, 2025

Summary

Youth Scientific Council on Adolescence members Benjamin Olaniyi, Brynn Santos, Madison Cheungsomboune, and Stephany Cartney talk with University of Oregon professor of clinical psychology Nick Allen about how AI is likely to shape education for young people, and why it can be something we embrace… instead of fear. This is the last of three episodes of “Our Youth’s Perspective 2025”—an annual youth-led miniseries of the Adaptivity podcast, hosted by Ron Dahl.

Transcript

Ron Dahl Young people are often among the first to embrace new technologies, approaching changes with curiosity and intrigue instead of just fear.

Benjamin Olaniyi AI is just continuing the trend of society adapting to new technology, so it shouldn’t be seen as such a bad thing.

Madison Cheungsomboune There is, like, such a large possibility to use AI to contribute to more growth, especially for adolescents in, like, an educational sense.

Brynn Santos And I think banning AI out of fear for its potential harm, especially to kids in their development, might deny society the opportunity to really benefit from this accomplishment in technology.

Ron Dahl And since AI is not going away, it’s critical that youth start learning how they can harness the benefits and avoid the risks of AI in order to support positive outcomes for all young people.

Stephany Cartney We need to look at policy and the role of government in making sure that we are still including people who are socially or economically left out of this, like, growth.

*****

Ron Dahl I’m Ron Dahl, founding director of the Center for the Developing Adolescent, and this is Adaptivity, where we explore the science of adolescence, untangling misconceptions about the years between childhood and adulthood.

We explore new insights into the rapid cognitive, emotional, and social changes that are happening during these years. And how the developing adolescent brain is primed to promote healthy and adaptive learning.

Concerns about new technologies, and their possible impacts on young people, predate social media. They emerged over television, radio, bicycles, and even, from Socrates, over writing. Yet the potential for AI to displace our current ways of learning and connecting does raise some urgent and compelling questions about how this emerging technology can be helpful and how it might create new vulnerabilities for youth.

This is the third episode of our annual three-part Adaptivity miniseries, “Our Youth’s Perspective,” where we talk to members of our Youth Scientific Council on Adolescence, a group of high school and college students who work with the Center for the Developing Adolescent to help us inform and communicate the science.

In this episode we hear from YSCA members Benjamin Olaniyi, Brynn Santos, Madison Cheungsomboune, and Stephany Cartney, who talk with our friend, University of Oregon professor of clinical psychology Nick Allen.

Nick directs the Center for Digital Mental Health at the University of Oregon. The YSCA members wanted to learn more about how AI is likely to shape education for young people, and why it can be something we embrace… instead of fear.

*****

Stephanie Cartney Hi, I’m Stephanie Cartney and I’m a fourth year studying political science at UCLA.

Brynn Santos Hello, I’m Brynn Santos. I’m a junior at Palisades Charter High School.

Benjamin Olaniyi Hi, everyone. My name is Benjamin Olaniyi. I’m a junior at King Drew Magnet High School.

Madison Cheungsomboune Hi, I’m Madison Cheungsomboune, and I’m a senior at Palisades Charter High School.

Stephanie Cartney In recent years, artificial intelligence systems have broken out and truly reimagined the world of technology.

Brynn Santos The rise of artificial intelligence tools like OpenAI’s and Google’s, and their rapidly growing usage, poses questions about the regulation and limitations of how AI can be implemented.

Benjamin Olaniyi Sparking debate between its benefits and its impact on critical thinking, social skills and mental health.

Madison Cheungsomboune Today, we will discuss the science behind AI’s effects on adolescents and users, as well as how institutional policy is shaped by its growing utilization.

Benjamin Olaniyi Here to provide us with professional background on this topic is University of Oregon professor Dr. Nick Allen. Can you please tell us more about yourself and what you study?

Nick Allen Sure. Thank you everyone. It’s great to be with you today. I’m a clinical psychologist and I’ve been working in the area of youth mental health most of my career, so I’m particularly interested in mental health issues as they affect young people, sort of teenagers and young adults in particular. And I’ve done lots of different kinds of work in that area. But one area that I’ve been focusing on a lot recently has been the role of digital technology in the lives of young people, and trying to understand not only what the impact of those technologies is, but also how we could use them to support young people’s mental health better than we do currently.

Brynn Santos Within the past decade, society has witnessed a huge progression, especially in the development of successful artificial intelligence programs. For example, when you search something up on Google, the first thing that comes up is Google’s Gemini AI generated response to your specific search. Even within my classes, my AP World History teacher began encouraging us to use AI to get feedback on our essays, generate practice tests, and implement it into our academic lives as a study tool. Despite this growing use of AI, there is also a growing number of restrictions on its use in school settings. As technology and AI have been increasingly used amongst the masses, what effect does technology usage have on young minds, critical thinking skills, and mental engagement?

Nick Allen So, you know, a lot of people are talking about the impact that artificial intelligence might have on education and on work. And there are some things that it does very well. So for example, it can help organize information very well and feed it back to you. It also, if we do have equitable access, then it can make some things that perhaps in the past you might have needed to wait to talk to your professor or your teacher about, you know, maybe you can get access to something a little bit more quickly and in more of a personalized way. So these are benefits that a lot of people are thinking about with respect to artificial intelligence.

I guess the, the downside is that artificial intelligence can, by definition, do the work for you. And so is there going to be a problem with learning that comes when you can get something else to do the work for you?

Now the internet has already started this trend. You know, when people write an essay, they can look things up on Wikipedia. If they’ve got a multiple choice question, a lot of the time they can just search it on the internet and find the answer. I mean, most students know that. You can also buy full essays online or find them and download them.

So this ability to use digital technology to do the work for you, so to speak, has always been available. But with artificial intelligence it’s kind of turbocharged. So I think what we need to do in education is that we need to adjust the way we teach. And so I think that basically we need to use the classroom differently.

We need to use class time to actually do the things that we are evaluating students on. So for example, like getting them to do projects in class, getting them to do tests in class, things like that. And then maybe we should take some of the teaching where someone’s standing up the front and just talking and put that outside the classroom. So this is sometimes called a flipped classroom, right? The idea that you watch your lecture online, but then you come to the classroom, and the classroom is interactive, and it’s about doing things together and being active.

And so what that does is it changes this relationship where you go home and have to do this work, and you kind of go on the internet, whether you’re using AI or other resources. But you can take that process and bring it into the classroom. So it is actually a human-to-human process. You’re actually doing work, doing real thinking work in the classroom. And the other things where you’re receiving information more passively, you can put that outside. So that’s just one example of how we can use these technologies. But we need to adjust the way we teach.

I think the other thing is something that you mentioned, Brynn, which is that people teaching can actually integrate artificial intelligence into the assignments because the fact is, people of your age, you are going to be using these tools in your work life. There’s no question about that. So you need to start to learn about them and learn how to use them in an effective way. So we could actually be setting assignments that say, use AI to do this, but make sure that the assignment is structured in such a way that it requires the student to use the AI in a thoughtful and educated way, rather than just hit a button and print it out and then submit it, right? And there are ways of structuring things.

I think we should structure things like that because, you know, that’s what a lot of us will be doing. You know, let’s just take computer programming, which is, you know, the one that everyone’s saying, like, oh, you know, these things are so good at programming. There won’t be any programmers in the future. Well, the fact is, I’m not a computer programmer, and I couldn’t manage an artificial intelligence doing computer programming because I just don’t know enough about it. So we’re going to still need computer programmers, but they won’t be spending so much time writing code. They’ll be spending more time looking at the output, checking it, making sure that it works properly, all that kind of thing, and managing the AI. And that’s probably going to be true for a lot of professions, that you may use the AI to get initial drafts of things, but you will use your human intelligence and contextual knowledge to make sure that that’s really fitting well for the job you want it to do.

So I think that we-we need to be careful. We need to be skeptical, but we also need to understand that these technologies are not going away and that they are going to be critical skills for people moving into the workforce of the future. So embracing them in education in a way that doesn’t diminish people’s ability to learn is a really important task.

Brynn Santos Yeah, that was really insightful. I had maybe a kind of follow-up question to that. So do you think it is almost counterproductive for schools to be regulating it and restricting it when the rest of the world kind of almost doesn’t reflect that, and we’re actually increasing our usage of it?

Nick Allen I think that there should be guidelines, but I don’t think it should be like banned or anything like that. Education is meant to prepare you for life, right? Particularly for your working life. But not only that. And so if using digital technology generally and using artificial intelligence in particular, is going to be an important part of your work life, then that’s something that education should prepare you for.

But I think the reason people ban it or talk about banning it or say you can’t use it, is because of their concern about cheating. To put it bluntly, but that’s where I think that we can adjust our educational approaches so that we can use these tools in a way that actually facilitates learning. And you can still do the things that you want to do by getting students to do independent work and use their own brains and, you know, be able to evaluate that, and that just requires some thoughtful adjustment of how we do things. I think often when we want to ban things, it’s because we don’t want to change how we’re doing stuff [laughs] because that’s too hard. And so I don’t think that’s a particularly good reason.

Benjamin Olaniyi The persistent rejection of AI within the lives of adolescents may create hesitancy or unfamiliarity with AI tools in the future. We live in a world where AI is increasingly becoming incorporated. And after your conversation with Brynn, I feel like this raises the question of whether bans of AI are necessary among adolescents just for that same technology to be encouraged in adult life. In your opinion, how can AI be regulated to better align with the trajectory of its usage among adolescents?

Nick Allen So I think there has to be a balance struck here. The AI tools are quite powerful, and they could be used for cheating for some kinds of tasks. And what I mean by cheating, of course, is submitting some work that doesn’t represent their own work, it doesn’t represent their own cognitive effort. And the reason we care about that is because we want people to learn, right? And you learn by doing. What’s school for if not to prepare you for life, right? And these things are going to be part of life.

So I would say an approach where you integrate artificial intelligence into the educational process, teach people how to use it well and wisely, help people to understand what it’s good for and what it’s not so good for is, in fact, an important part of the educational approach. I would say, I would want to see AI integrated into the classroom, and I don’t think it’s that hard to work out ways to do that in a way that avoids what people are trying to avoid, which is this cheating, kind of plagiarizing thing. But we just need to be willing to adjust to the new reality.

Benjamin Olaniyi And thank you for responding. Well, I believe it can be expected that there may be pushback from a move to integrate AI in our classrooms. So how do you think we can destigmatize the use of AI, especially in education, so that people are more on board with AI being used in education?

Nick Allen Well, I think, you know, we have to have teachers and professors in universities who are willing to experiment and try new methods of teaching and show the success cases where we can demonstrate that not only has the student adequately learned what they need to for the course, so they’ve come out with the educational benefit, but they’ve had the extra benefit of now knowing how to integrate these powerful tools into that learning.

So I think we just need successful case studies. And for that, I think we need people, like I say, educators who are willing to experiment and try things and probably fail a couple of times but get it right eventually. And we also need those beneficial examples to be shared so that people don’t feel so scared. Because I think the underlying concern here is that the students won’t do any work and they won’t learn anything. And so if we can show that the students will work, maybe with artificial intelligence tools, they might be more engaged. You know, that’s really the best outcome all around.

Madison Cheungsomboune So having applied to college this past fall, I’ve seen some of the recent steps taken to address the topic of AI within applying to higher education. Specifically, within one program with a focus on leadership, I responded to a prompt that asked what my opinion was on the developments of AI and their implications. At the same time, I’ve also been hearing about a possible future transition to essays being given less weight in the admissions process due to how easily a student can fabricate them using AI.

With this now, almost every adolescent and the future leaders of society are surrounded by this technology. For those of us who are part of the first generation to have ample access to AI, how do you think it will influence how adolescents approach and respond to stressful situations, especially where they’ll have to take on a leading role?

Nick Allen So of course, there’s lots of different kinds of artificial intelligence tools, but the ones that we’re all talking about at the moment are what’s called generative AI. So these are things like ChatGPT, Anthropic’s Claude, Google’s Gemini, and the thing that makes these really remarkable, but also very challenging, is that they are able to generate material in response to a prompt or a question.

Now let’s take something like applying for college and writing your college admission essay. What these essays are for is to kind of understand more about you as a person, about your motivations, about your goals, and so forth. And the AI can’t do that that well. You have to be able to do two things. First of all, you have to be really good at asking the right question of the AI. The fancy word for that is prompt engineering, right? You know, the idea that you can write a good prompt and get the AI to give you what you want. And the second thing that you have to be good at is editing. Because you have to be able to take that output and turn it into something that’s not like this kind of boring, cut-and-paste sort of AI generated thing, but something that really reflects you as an individual and lets your, your interests and your strengths kind of shine through, which is kind of what you want to do in an essay.

So I know that there will be a tendency for someone who’s a bit stressed about writing an essay or anything else to kind of paste the question into the AI and hit return and copy and paste it into their essay. But I’d say they’re not going to be very good essays. And so you’ll still need these human skills of being able to ask the right questions, supply the right information, and then edit it well.

But what we won’t be doing so much of, is just like generating text on a blank page. And I don’t know about you, but I’m kind of okay with that because I think the hardest part of writing is getting that initial draft going. And when you have that initial draft and you’re more in editing mode, I find it much easier to think and to sort of arrange and so forth. So, so I think that if you’re talking about young people in the future and their roles in leadership and so forth, they will be using these different kinds of skills.

Let me give you an example, because I’m rather old and I can do it. So I remember the first time I saw a pocket calculator. Seriously. Like my uncle brought one around and I was mind blown that you could type any equation like, you know, arithmetic equation into this thing, and it would give you an answer in a second. And of course, at the time everyone said, oh, kids will never know how to do math anymore. They’ll be stupid. You know, this is terrible. It’s the end of the universe. And of course, we’re still pretty good at math and quantitative things. And we use pocket calculators as a tool.

And I think that’s probably how it will be with artificial intelligence: it will become a tool. Just like we don’t have to spend so much of our time thinking about navigation, because we’ve got Google Maps and things like that, and we perhaps don’t have to spend so much time adding up numbers, but nevertheless, we’ll be able to use those tools to then get on to the more advanced modes of thinking that we have to do about those other things.

Madison Cheungsomboune Yeah. Thank you. I really appreciate that response. And something just to follow up is that there’s, like, a common, like, saying that the kids now are like not going to be able to like, think for themselves in the future. And they’re, like, worried about, like, what we’ll do when we get older and things. And I know that your response kind of touched on this, but do you, could you like, maybe elaborate further on this response?

Nick Allen Yeah. So look, people have been saying that ‘the kids these days’ are losing their minds, and, you know, like they’ve been saying that since Socrates’ time. Seriously. You just have to study history, and you will find out that this has always been a complaint by the older generation. And it was said about the printing press and it was said about television, it was said about radio, it was said about anything. It was said about the locomotive. You know, that people wouldn’t be able to walk anymore because they’d just sit on these trains all the time.

So my point is, I’m deeply skeptical about statements that a new technology is going to make “the kids these days” unfit for work or citizenship or something like that. I think this is just a very old fashioned line that’s been run for a long time, and generally, well, always has been untrue.

I do not worry that people will be unemployed. I think that there will be just as much work for people, but I think the work will be different. And the happy path, and of course, I don’t know if the happy path is going to happen or something else, but the happy path is that the work is actually more interesting and less drudgery.

Stephanie Cartney Since the beginning of the pandemic, I and many other students have had to adjust to more technology usage in our classrooms. The pandemic was particularly difficult for many students, such as myself, who encountered digital inequity. In my case, my family didn’t have access to high speed internet, which hindered my ability to attend and succeed in my online classes.

Throughout my time at UCLA, I have watched more and more professors change their technology policies. Some professors have banned the usage of technology altogether, while others are integrating it into their curriculum, requiring it for academic success.

In your book, Adolescent Emotional Development and the Emergence of Depressive Disorders, you mentioned that among adolescents between the ages of 11 and 14, there’s a rapid rise in the observed incidence of depressive disorders. Varied AI policies across classrooms and schools could influence social dynamics and create additional stress.

What considerations can be taken into account when developing AI policy to ensure that it doesn’t aggravate the emotional vulnerabilities of adolescents, particularly adolescents from marginalized social and economic backgrounds?

Nick Allen Let’s start with the fact that artificial intelligence is changing the way people interact and learn and find information. And that’s definitely impacting education. But education was already pretty impacted by the internet. So the issue that you raised, Stephanie, is an extremely important one, which is the one of inequity in access to these products and services.

My own view is that it’s kind of irresistible that we’re going to use these technologies more and more in education and in the workplace. You could certainly put some regulation around it, but it’s hard to stop these kinds of big changes and developments in technology impacting things.

So I think therefore, rather than try to prevent people from using these technologies, what we really have to work on is making sure that access to them is equitable. Because if they’re going to be critical to education, to healthcare, to citizenship, to accessing government services, all these kinds of things, then they really do need to be available to everybody.

And, you know, part of recognizing that this is an equity issue means that we have to build the right infrastructure to make sure we reach people who are in marginalized communities, which can include people who don’t have as much economic security, but can also include people who live in remote or rural areas or other areas that are underserved in terms of information technology. We do need to see this as kind of like electricity or water or other sorts of services that we consider critical to people’s ability to take part in society.

Stephanie Cartney Yes. Thank you so much for your response to that question. I also agree that ensuring that we do have equity and creating more access to technology for students who don’t have access is very important. And I was wondering, what role do you see professors and teachers and social workers playing in this need for equity?

Nick Allen Well, I think there’s two aspects to it. One is the aspect of, as I mentioned before, infrastructure. So just do you have access to the internet particularly? And of course, more and more people need high speed internet. And then do they have access to the resources to use the tools effectively?

So just let’s take an example of ChatGPT. Now if you use ChatGPT, there’s a free level, there’s a level where you have to pay $20 a month, and there’s a level where you have to pay $200 a month, and you get different access and services depending on what level you’re at. Now, ChatGPT is made by a private company, so in a way they can charge what they want for their product. That’s kind of how it works.

But if using products like ChatGPT becomes critical to people’s, like I said, their education, their healthcare, their citizenship and these other critical factors, then we as a society have to think about, is it okay that kids who can pay $20 a month have better access than kids who can’t pay that money? And, you know, I guess we could all go and use DeepSeek or one of the free ones. But these are the questions that we have to wrestle with.

When a private company becomes the supplier of what’s emerging as critical infrastructure, critical access that people need to be equitable with the other people that they’re working with, then how do we make sure that that is equitable?

And it’s not an easy problem, but it does, in my view, usually require some kind of government involvement, because that’s kind of what government is meant to do. It’s meant to solve problems like that. It doesn’t always do it very well or very perfectly, but to my mind, the function of government is that it sort of sets up a situation where it says, okay, these people living in this area, they don’t have electricity or they don’t have water or they don’t have some other thing, you know, that’s not okay. Let’s try and fix that.

And I think that the same kind of logic needs to be applied to digital technologies generally, and artificial intelligence increasingly so, because these are technologies that are going to be critical for education and work and healthcare and these other critical things that we think of as things we want to have available to everybody.

Benjamin Olaniyi This has been so insightful. Thank you for joining us, Dr. Allen. We really appreciate it.

Nick Allen Well, thank you everyone. I really enjoyed the discussion and the questions were really terrific and insightful. I look forward to seeing how you all go forward with your education, and how you integrate artificial intelligence as a tool that helps you to achieve more and have a bigger impact.

*****

Benjamin Olaniyi Let’s take these final moments of our podcast to reflect. I personally found it very interesting that Dr. Allen wanted to change the perspective on AI. I loved the way he connected historical technological advancements to AI and found similarities in that, like when he mentioned the printing press or the internet. He sort of reassured us about the trajectory of AI, because we’ve been through similar things in the past.

AI is just continuing the trend of society adapting to new technology, so it shouldn’t be seen as such a bad thing, or it shouldn’t be rejected or seen in as much of a negative light as it is currently. Yeah, I think that’s really important.

Brynn Santos And I think that’s all part of, like, the gradual evolution of technology and just human development as a society. And I think we need to accept change instead of fearing it. I particularly liked how he said, you know, much like book banning, the fear of AI’s potential shouldn’t be the sole basis for its prohibition in schools or by the government.

You know, books are often banned because they present uncomfortable ideas. But I think that philosophy and that type of censorship stifles intellectual growth and the exploration of new ideas. I think that beyond just the fear of the unknown, AI could be a successful new frontier and has a lot of potential for positive change. And I think banning AI out of fear for its potential harm, especially to kids in their development, might deny society the opportunity to really benefit from this accomplishment in technology, and it can really lead to actual cultural and intellectual growth and set higher standards for each new generation.

Madison Cheungsomboune Yeah, I definitely agree with what you’re saying. I think that right now, especially with what Dr. Allen said, there is, like, such a large possibility to use AI to contribute to more growth, especially for adolescents in, like, an educational sense. And going back to a point he made, it just takes, like, one university or one professor to use it and find these successful cases, and then we can really see how much growth and how many possibilities there are for using it in an educational setting.

Stephanie Cartney Yes, I think that’s so important to note as well. AI has so much potential to help us grow in our education and in our academic spheres. But I also really enjoyed that he included a little bit about how we can include people. And AI is going to be a part of our everyday lives now. It’s not going to change and it’s not really replacing our thinking. So we need to look at policy and the role of government in making sure that we are still including people who are socially or economically left out of this, like, growth. So AI can help us in our everyday lives, but it’s also important for us to, like, make sure that everyone is included as well.

Thank you for listening. This has been Stephanie.

Brynn Santos Brynn.

Benjamin Olaniyi Ben.

Madison Cheungsomboune And Maddie.

Stephanie Cartney This has been a special three-part podcast series “Our Youth’s Perspective.” Thank you for joining us.

*****

Ron Dahl Our increased tolerance for novelty and uncertainty in adolescence is one reason youth are often the first to embrace new technologies. AI represents new levels of uncertainty as well as possibility and vulnerabilities, as it transforms how we learn, work, and connect in ways we are just starting to imagine.

Fortunately, young people are already asking the right questions about not only avoiding risk but harnessing the potential of AI to benefit themselves and the wider world.

I’m Ron Dahl, and this has been a special episode of Adaptivity from the Center for the Developing Adolescent.

We’d like to thank Benjamin Olaniyi, Brynn Santos, Madison Cheungsomboune, and Stephany Cartney from the Center’s Youth Scientific Council on Adolescence for delving into this topic and sharing their reflections with us.

Thanks also to University of Oregon clinical psychology professor Nick Allen.

For more on the developmental science of adolescence and the YSCA, go to our website at developingadolescent.org.

If you’d like to learn more about the science of adolescence, visit us at adaptivitypodcast.org or share your thoughts through the contact information at our website.

Our podcast is produced at UC Berkeley for the Center for the Developing Adolescent. Our senior producer is Polly Stryker. Our producer is Meghan Lynch Forder. And our engineer is Rob Speight.

A special thanks to Xochitl Arlene Smola for her facilitation of the YSCA projects.
