
Social Media and Adolescents, part 2: Helping Youth Thrive in a Technocentric World

January 17, 2025

Summary

Ron talks with researchers Nick Allen and Jacqueline Nesi about how developmental science can help us create social media and other technology that could not only decrease risks, but also amplify new opportunities created by the digital world.

Transcript

Listen to this episode on iTunes, Spotify, or Libsyn.

Part 1 of this episode, featuring Ron’s conversation with college freshmen Dallas Tanner and Becker Chabaan, is also out now.

*****

Ron Dahl Please be advised this episode contains some discussion of suicide. If you or someone you know is in crisis, please call or text 9-8-8 for help. 9-8-8 is the number for the Suicide and Crisis Lifeline.

*****

Ron Dahl The young people we’ve spoken with have a lot of insight into the pitfalls of social media and of digital technology in general.

Max I definitely felt much more immersed in the museums that we were going to, and the people that we were talking to in DC, because the only sensory input I had was the real world and not a screen.

Ron Dahl But strict bans at the policy level may not be the solution.

Nick Allen If we’re, if we’re whipping phones away from kids and not giving them choices, then that’s not preparing them for the world that they’re going to.

Ron Dahl In fact, we can be thinking beyond just minimizing harm, and consider how technology could support positive development for youth.

Jacqueline Nesi Rather than just getting rid of the problems, how do we actually try to promote the good stuff, the things that we know are working well for young people?

Ron Dahl I’m Ron Dahl, founding director of the Center for the Developing Adolescent, and this is Adaptivity, where we explore the science of adolescence, untangling misconceptions about the years between childhood and adulthood. We explore new insights into the rapid cognitive, emotional, and social changes that happen during these years. And how the developing adolescent brain is primed to promote healthy and adaptive learning.

*****

This is the second episode of our two-part series on social media.

In our first episode, we heard insights from college freshmen Dallas Tanner and Becker Chabaan about the positives and negatives of their own social media use, and the advice or restrictions they would recommend for their younger siblings and other peers to help them navigate social media safely and successfully. As developmental scientists, we listen to young people: they are often early adopters of technology, and they are experts on its impact on their lives.

In this episode, we’ll talk with two scientists about what we need to consider to make the online world not only safer, but also beneficial, for young people.

But first let’s hear from Max, a 17-year-old in the San Francisco Bay Area. Like Becker and Dallas, Max had some time away from social media, which led him to choose to change his behavior.

Max I was going to Washington, DC, for a six-day school trip and no phones were allowed, and everybody started off dreading it and somebody actually smuggled in a phone at the airport. Our counselors had a bag with a lock on it, and we all had to pass around our phones and put them in the bag. And some people were saying, oh, I forgot my phone, I forgot my phone. And it took about half an hour to get 15 kids’ phones out of their hands. And it took a few days of adjusting to realize it’s okay if I’m not connected to all my friends at every second of the day.

I definitely had to, because of that, talk to a lot of people that I never really talked to, like people who are really into sports, and that would have never happened if I had my phone. I would have just been in the corner of the room on my phone, you know, talking to the same ten people that I talked to on a daily basis. And I never would have branched out. I definitely felt much more immersed in the museums that we were going to, and the people that we were talking to in DC, because the only sensory input I had was the real world and not a screen. I got my cell phone back when we were on the bus back to the airport, and everybody else, yeah, everybody jumped on their phones to check on social media, and then that turned into people just scrolling, and I didn’t turn on my phone for the next few days. I turned it on once, and I saw so many missed text messages from my friends that I was like, this is not sustainable. And in that moment, I deleted Instagram. And it’s such a hard transition to make of deleting or stopping or heavily restricting social media, because I’ve tried so many times in the past to delete the social media.

And then I realize, oh well, my friends are on it, or oh well, there’s funny videos on it, and then I download it again and I don’t really know what the solution is, but for me it was a lot of compounding things, of having less time on my hands and more responsibility and school and outside of school and all of it kind of culminating in, I guess, a little bit of horror, like reflecting upon how much time I’ve spent on this and how I don’t remember most of it. And, uh, on my phone, I have no social media. And on my computer, it’s definitely more, okay, I’m going to spend an hour on my computer watching Netflix or watching YouTube or playing a game, and it’s a lot more deliberate than on my phone, where how it used to be is, oh, my friend sends me a funny video. I got the notification. Oh, I watch it and then oh, I’ll watch a few more. And then a few hours have gone by. But on my phone I don’t have any social media.

Ron Dahl Thanks so much to Max for being willing to talk with us.

Okay, now we get to hear the insights of a couple of developmental scientists who study kids and technology. Nick Allen is a professor of clinical psychology at the University of Oregon and the director of the Center for Digital Mental Health. Full disclosure: Nick has a digital mental health startup supported by the University of Oregon.

And Jacqueline Nesi is an assistant professor at Brown University where she studies how technology affects kids and how parents can help. She writes the weekly newsletter, TechnoSapiens.

A throughline to our conversation is this question: How can developmental science help us ensure that technology–from social media to artificial intelligence–is not only safe, but can actually help support better outcomes for young people?

First, here’s my conversation with Nick Allen.

*****

Ron Dahl I would love to hear some of your thoughts about how we get above or transcend this overly simplistic ‘good versus evil’ or ‘how evil’ debate and begin to think about paths forward in some positive way.

Nick Allen Well, thanks, Ron. It’s great to talk to you about these issues, and I agree they’re incredibly important. You know, families want answers. Policymakers want answers. Kids themselves want answers. There are so many people who want answers about this. And I do agree that a lot of the conversation is sometimes not helpful.

So let’s start with what I consider some fundamental principles. The first principle I would apply is that all technology has risks and benefits. You can name any technology that we’ve ever invented as human beings; it will have risks and it will have benefits. And what we seek to do, therefore, is to understand those risks and benefits and then work out how we can maximize the benefits and minimize the risks. And in this regard, digital technology and social media and things like that are no different to any other kind of technology.

The second thing that I would say is that we need to understand how these technologies relate to young people in terms of their rights. And I think we often speak about this only in terms of what are sometimes called protective rights: the idea that young people have a right to be protected from things that are harmful. And that’s very important. But young people also have participatory rights. That is, they have the right to participate in things that are important to them, their development, and their future.

And so when we talk about restricting access to digital technology or allowing it, we need to think about the balance between those two things. And participatory rights, I just want to focus on those for a second, because they relate to a whole bunch of things. Can you imagine someone at 18 or 25 entering the workplace who does not understand social media, does not understand digital technologies, doesn’t understand artificial intelligence, or doesn’t have firsthand experience of them? That person is going to be disadvantaged in the economy of today, and even more so in the economy of the future. So those participatory rights are important as well. And there are other aspects of participatory rights, like the right to participate in your social milieu, the actual right to connect with other people. So, you know, I think that’s the second point that we need to bear in mind.

And then the third point that I’d make as a sort of a general point is that we need to be mindful of the fact that older people don’t like new things, right? And so there’s nothing easier in the world than getting old people worried about what the youth today are up to.

This is as old as Socrates and older, right? And you know, the thing is that, therefore, whenever we are told there’s a new thing and it’s the ruination of this generation, we should be immediately skeptical. I’m not saying it’s wrong. I’m just saying we should approach it with skepticism, because there is so much cultural energy that will go towards that conclusion, just as it did for Elvis’s pelvis or, you know, TV or the printing press or the telegraph or margarine or whatever. You could go through any new thing that’s been invented and been popular with young people, and if you look at history, you will see a similar cycle of panic. And I think if you start with those principles, then it helps you not to get to such a polarized place in the discussion about this issue.

Ron Dahl Yeah. Thank you. That’s a really, really helpful framework. Let me start with the third one, because I think it can be important to acknowledge the magnitude of the fear and concerns that so many people are having: the speed and intensity of change related to technology, the acceleration not only of something new, but the magnitude of uncertainty for people who didn’t grow up with it. That’s number one. But the second part is we’re emotional. We’re an emotional species. And the narratives, the tragic, horrible things that happen to young people that social media seems to have contributed to in some way, lodge so deeply in the fears of parents. And so if we don’t acknowledge the magnitude of this, then it’s hard to get past the emotions that are getting stirred up by the intensity of the fear and the uncertainty and the number of examples and stories, whether it’s the latest New Yorker article or the way journalists cover examples where it seems so compelling that social media had a role. If we don’t acknowledge the harms that have happened, then it can be hard to move to how to balance the other two parts of this framework you introduced.

Nick Allen I 100 percent agree. I think we do need to acknowledge that it is frightening. And once again, if you go back to point one, all technology has benefits and risks of potential harms. So I’m certainly not of the view that this technology can’t have harms. I think it’s clear that it can. But we need to understand that balance. And we also need to understand that there’s a cultural pull towards a panic about that. That’s not to say the fear isn’t understandable, or that harm can’t happen, because it can. But we should be aware of that pull, especially when we’re talking about public policy and clinical interventions and other sorts of applications where we want things to really be evidence based. And so when you’re trying to judge that evidence base, be aware that there will be a lower threshold for many people: when you’re told something that fits in with your preconceptions, you need much less evidence to support it than when you’re told something that pushes against your preconceptions. And just be aware that parents will have a set of preconceptions that this is new, it’s scary, and it’s dangerous. Which is separate from whether it really is. But that preconception will be there, so the threshold of evidence for being convinced will be different.

Ron Dahl So one example, and I think you’ve used this in some of your writing, as have other people: think of a comparable technology, automobiles, which is measurably one of the greatest sources of death and disability in teenagers. We don’t hear calls to eliminate all cars. And in fact, there’s a deeper issue here, I think: if young people don’t learn how to drive, it’s like your example of someone who doesn’t understand technology or social media trying to get a job as a young adult. It’s not just a right to access and the ability to use social media. It’s learning and developing the skills and mastery that are important to thrive in a complex, rapidly changing world. And so, how do we protect young people without interfering with that ability?

And again, there are great examples. The graduated licensing program is probably the most famous. It’s not that you wait until they’re 30 to learn how to drive so that they, you know, are going to be good drivers, because they won’t be. They’re learning these complex skills early. And, you know, I think you and others have made the point that we might be better served helping them develop skills and capacities rather than simply trying to protect. Rushing to protect too strongly could actually undermine adaptive learning and skills.

Nick Allen Yes, I agree. And in some ways, driving automobiles is a great example, and in some ways it’s a really terrible one, because in fact, automobiles are much more dangerous than digital technology under any estimate. As you said, there are many more deaths and disabilities caused by motor car accidents than even the most catastrophic interpretation of social media would suggest. So, you know, given the wide usage of a product like social media, and given the current evidence base, I would say that it’s actually, compared to a lot of technologies, relatively safe.

But let’s talk about the automobile example. So yeah. So that’s a great example of how we adopt new technologies. We find that there’s some benefit to them. We find there’s some risks. We study the risks and the benefits. We understand them deeply or as deeply as we can. And then we try to bring in three components to how we integrate that technology into our society. And I think of them as three: education, regulation, and design.

So we set up systems for educating people about the use of the technology, its harms and benefits. We put appropriate regulations on those technologies, which will differ from one technology to another. And then, of course, we design the products better over time so that they’re safer and we maximize the benefits and minimize the risks. And that certainly happened; all of those are relevant to the automobile example.

Now we are still, you know, really in the early days of understanding those processes with social media and other forms of digital technology. And going back to a point you made before, Ron, this is an area that’s moving very quickly. I would say that, in fact, what’s going to happen with artificial intelligence and its impact on developing adolescents is going to be even more rapid than what happened with social media. So we do need to understand how we, as scientists and clinicians, can be of use in a process that is moving this rapidly.

Ron Dahl Given your expertise, and how deeply involved with and aware of the literature you are on the relationships between mental health, social media, and particular platforms, where would you suggest the targets for positive impact are? In other words, if we were going to improve or protect in particular ways, where does the signal or the evidence base of harm most strongly suggest that something needs to be modified or improved? Can you talk a bit about what might be the most promising areas or targets to focus on?

Nick Allen Um, I do think that there’s potential for designing digital products so that they are health promoting, and I do think that the extent of usage is one aspect of that. So there are times when the use of digital products can interfere with healthy sleep, for example. And I think that’s probably one of the more compelling examples of where digital devices can disrupt a healthy pattern of behavior. I actually think the evidence for that is better than it is for interrupting physical activity. And so, for example, building what you might call circadian intelligence into devices is something that I think would be a good design feature. You know, to actually bring expertise in sleep and behavioral rhythms to the design of digital products and digital devices would be a smart thing to do. Of course, people have been displacing their sleep with other things. There’s nothing new about that. Kids have been staying up reading books under the covers, or talking to their friends on the phone, or even doing homework, displacing sleep with homework. So once again, it’s not a thing that’s unique to these products.

But I do think there’s an opportunity for design with these products that you don’t have with something like a novel, where you can actually build in features that will encourage people to put the device down at times where it could be displacing healthy sleep. So that’s an example of a design consideration that could be brought in to minimize harms.

Ron Dahl I want to loop back to your example about sleep and circadian design, and the evidence that displacing sleep is one of the concerns, and introduce a theme that we know is so important from the developmental science of adolescence: giving young people autonomy rather than imposing restrictions.

And I was struck, in the most recent survey that Hopelab did with Common Sense Media, by the number of examples in their data showing that young people do want to limit their use of social media; they do see the value of finding a balance between the more harmful or concerning parts and the things that they really like and enjoy. And so this, again, is an area you know a lot about as well. How do we support their autonomy? How do we design, or regulate, in ways that encourage them to set limits around what is healthy for their goals, including getting enough sleep, for example? As opposed to some of the approaches that go against that, trying to take away their access or impose, you know, harsh, hard-and-fast rules about what they can’t do or should not be able to do. I’d love to hear your thoughts in that space for your design approach.

Nick Allen I think an important point that you started with is that young people already have a strong motivation to understand the difference between usage of their digital products that is wellbeing enhancing and usage that makes them feel less good. So that’s a good starting point, right? Because there’s motivation there. And that’s where, you know, we’ve been very interested in the literature on behavioral nudging, where the idea is that rather than creating rules and restrictions, if you’ve got a situation like the one you just described, where there’s already a propensity for the person to want to achieve a certain goal, then often it’s a question of just reminding them of that goal, creating awareness of that goal so that they can make a behavioral decision at the right time. And of course, devices are perfect for that. You know, they’re with you all the time. They’re aware of context. They’re aware of what you’re doing. They know what time of day it is. They often know where you are. They know what you’re doing on the phone and so forth. And so you can actually create these nudges that remind people of their goals.

And I do think, as you say, we know that one of the key developmental tasks of adolescence is to increase independence and autonomy over the period of adolescence so that someone can actually enter the adult world ready to participate. And this is just another example of that. If we’re whipping phones away from kids and not giving them choices, then that’s not preparing them for the world that they’re going to enter.

And we always talk about this concept of scaffolding. You’ve used that concept a lot. I know we’ve talked about it many times. And so the idea is scaffolding the structure and the nudges and the supports that you give for making healthy decisions, and importantly, creating awareness of the healthy decision. So the awareness of, you know, I’m doomscrolling right now, and it’s not making me feel good. Creating that awareness creates the possibility for a person to make a different decision about how they use the product. And that’s a question of design primarily, and somewhat of education. In terms of the priorities there, you would say design first, probably education second, and regulation third for that kind of thing.

Ron Dahl So thank you. That was really helpful. I’d like to have you think out loud about another framework that I think is not sufficiently addressed in an explicit way when we talk about social media, platforms, and digital communication technology. At one end of the spectrum, young people are using these technologies to connect, to communicate, to create, to contribute. At the other end, they are passively being pulled to just watch and observe or be exposed to material, and there’s everything in between. And sometimes when we use terms like screen time or social media, we don’t differentiate across these different categories of ways in which young people are using that ubiquitous access in their pocket. How do you think about these categories, particularly in ways that could inform research and the measurement of, you know, areas of potential harm and areas of affordances for positive development?

Nick Allen Yeah. So there is some research literature suggesting that there may be a stronger relationship between passive use, that is, not contributing, not creating, but just observing, and negative outcomes than there is for active use. And in fact, active use is often associated with positive outcomes. The research literature on that is still fairly nascent, largely because of a measurement problem we’ve had: an enormous amount of our understanding of these processes still comes from self-report questionnaires, and people are not good at reporting on these kinds of behaviors with accuracy. And in particular, when you try to break it down to things like passive versus active use, self-report techniques are not helpful.

So this is one of the reasons why we’ve been very interested in, and have invested a lot in, methods that allow us to objectively measure patterns of phone usage, methods that allow us to look at things like how often the device is being used, which apps are being used, and how much of that time is more active, you know, contributing, posting, typing, and things like that, versus time that is more passive.

The irony is, of course, that these data are all being digitized already. We didn’t, you know, as researchers, we didn’t have to ask for that to happen. That’s already happening. What we do have to do is find ways to access it.

And so I do think that limitation of the research literature is one of the things that’s prevented us from having a deeper understanding, because you’re exactly right. Screen time is this polyglot concept that’s way too broad to be of any use. And then we also measure it with self-report, where we know that there’s a lot of error. I’m not saying it’s meaningless, just to be clear. Self-report data is valuable, but we know that it’s not very precise.

And so when you put those two things together, you know, you’ve got this polyglot concept where, online, you could be having so many different, varied experiences. You could be falling in love, you could be finding useful information, you could be being entertained, you could be being bullied. You could be going down a rabbit hole, et cetera. There are so many different things that can be happening online. And then we treat them all as one thing, largely based on what some people call a “displacement hypothesis,” the idea that being online is stopping you from doing good stuff.

And of course, most young people will tell you that being online is a really good thing. It’s really fun, I feel connected, I feel entertained, I feel informed when I’m online. And there’s also some bad stuff, like if I experience bullying, or if I experience fear of missing out, or if I feel that I can’t put the phone down when I otherwise want to. So there’s a mixture of experiences, and our understanding of that mixture is very poor at the moment compared to what we know about self-reported screen time.

Ron Dahl These are such important issues that you’re outlining in terms of how to get insights into patterns of usage that may be more problematic. And I know your interest in this area has led you to develop a whole new approach and technologies to access information in these more complex, real-world, moment-by-moment ways, by actually looking at their behaviors, their actions, and in some cases even their responses. Are you beginning to get some new insights by looking at that granularity and complexity of behavior, of how young people are actually engaging with their phones moment to moment?

Nick Allen Yeah, this has been, as you know, a big focus of my work over the last decade. The thing that really hooked me into this was the realization that we’re collecting the largest data set on human behavior that we’ve ever collected in history through these consumer devices, you know, through the accelerometers and the GPS sensors and all the different sensors that sit in our smartphones and wearables and things like that. Now, if you’re a psychologist like me, this is mind blowing. You know, objective real-time behavior collected at massive scale. But the problem, Ron, is that we can’t get hold of it. It’s all been socked away in little pockets of the internet and largely been used to target you with advertising.

But, you know, imagine if we could bring that same data set to bear on things like health and well-being or education. I mean, it’s an opportunity that’s historically unique. So anyway, that’s what gets me excited about this. Does it raise enormous ethical issues? Absolutely. They need to be thought through very carefully. But you know, in psychology we’ve been trying to describe, understand, and predict human behavior for centuries, and we’re not very good at it. And now we’ve got this opportunity to really do a much better job. And it’s very exciting.

Ron Dahl It is exciting. And I want to ask you more about this, because the ability to use this complex, real-time information, across such large samples of young people and their lives, for the purpose of good, for the purpose of helping to inform ways to promote mental health and health behaviors in general, is at the heart of what you’re describing. And of course, all the issues of privacy and protections loom enormously. And increasingly, some researchers, researchers funded by the National Institutes of Health, who are intently focused on understanding in order to improve health, are finding value in this kind of approach and the ability to use this kind of data. Can you talk a little bit about the ways this may be able to help, or already appears encouraging, in terms of insights and approaches that could be positive contributions to health?

Nick Allen Yes. So one of the examples that I’ve worked on quite a bit is understanding mood disorders, including suicide risk. Now, suicide is a very interesting topic for this kind of approach, because for many people, the best interventions to prevent suicide are ones that are present at the time of highest risk.

So, for example, one of the most successful interventions in suicide prevention is to make sure that a person doesn’t have access to tools that they could use to harm themselves. If you create some friction in that access, then it can significantly reduce the likelihood of a person engaging in a suicidal act. So the point is, this is a classic example of a situation where, if you can be there at the right time and in the right context to provide the right kind of support, then, you know, all our data suggest that could be incredibly powerful. And so the fact that the phone is ubiquitous, as we said before, really creates an intriguing possibility. So we have been trying to learn what some of the behavioral patterns are that might give us a signal that someone is at greater risk.

So we’ve looked at things like the language that people type into the phone, and we’ve found that there are certain patterns of language that are quite strongly reflective of mood state. If someone is at risk for suicidal thoughts and behaviors, then their mood shifting in a negative direction is going to be a very powerful risk factor for those becoming more likely. We’ve looked at some other variables to do with geographic mobility. For example, if you’re spending more time at home than you normally would (this is not saying that spending time at home is bad; this is about a change within a person across time), then that can be a sign of increased risk for both depression and suicidal behaviors.

Of course, we’ve also looked at aspects of sleep as measured by the phone: sleep patterns that can be picked up just from the way the person uses their phone naturalistically, not requiring a wearable or any special thing like that. Once again, sleep variability: we’ve seen some evidence that higher variability in sleep can be associated with those risks. So the point is that all of those kinds of signals, which are collected through a person’s naturalistic phone use, can give you contextual information for what’s called a just-in-time intervention, an intervention that’s delivered at the right time to support people in the right way, to deal with the situation that they’re dealing with.

Is it 100% precise? Absolutely not. You know, it’s still a probabilistic endeavor. But what we would argue is that it’s a lot better than what we do currently in clinical practice, which is we talk to people during our appointments and then we send them off, hopefully with enough information to look after themselves until the next appointment. But we can’t be there in real time in the way that the phone can be. And that’s what creates a really unique opportunity.

Now, I will emphasize that I’ve used a fairly extreme example in the case of suicide. But the same kind of logic can be used to support people in all sorts of healthy behavior change, whether it be reducing substance use, increasing sleep health, increasing physical activity, increasing social connection, or increasing engagement in life. You know, there’s a whole bunch of positive behavioral patterns that you can track and understand, and therefore design interventions that have this quality of being ubiquitous and ongoing, supporting the person at the right time, in the right way.

Ron Dahl So, um, what you’re describing, Nick, it sounds like it’s part of studies or clinical studies where the young person and their family have given consent to be able to explore these as potentially helpful ways to learn more about these risk factors and what can be done. Is that true?

Nick Allen Absolutely. And so obviously all of these kinds of studies are regulated by human ethics committees. And we go out of our way to make it clear to young people and to their parents exactly what data we’re collecting, why, what we will do with it, and very importantly, what we will not do with it. And so, you know, obviously we don’t sell it; we don’t use it for behavioral advertising or anything like that. This is research data. Now, the challenge becomes, as you know, I also have a startup company whose main mission is to take these methods and put them in the hands of clinicians and their patients to improve behavioral health care.

Ron Dahl So it’s really helpful to think about how these kinds of tools and innovations could be used. However, as important as that is, the elephant in the room, or the elephant in the journalistic world that surrounds us, is the concern that in those vulnerable moments, when it’s most important to provide positive support, help, and protection, some young people instead get influenced by algorithms that tap into their patterns of behavior and may target them with harmful information. Some of the most emotionally disturbing examples focus on this, and part of the feeling of almost evil about these risks, I think, comes out of those examples. Can you talk about what we know about that and what could be done to improve that level of risk in this population?

Nick Allen Yes. I mean, that’s exactly right. Once again, going back to the original comment: benefits and risks with all technologies. And in this domain, that’s true as well. So the same device that could be a source of awareness and support, connecting you to helpful, positive information, can also connect you to negative, unhelpful information that’s going to make the situation worse. And we know that.

Once again there are design solutions–I shouldn’t call them solutions because of course they’re never 100% effective. But you can take the product and design it in a way that brings people more of the positive information.

Now, to take a very simple example of that, you know, Google has changed their search engine so that if someone types in queries about certain material, the very first thing that they will be presented with is health-related information about that query. And this is particularly true with respect to suicide. So, you know, does it work perfectly? Of course not. But it’s taking the design principles and pushing it towards that more helpful, more positive use of the technology.

So I want to be clear to anyone listening that if there’s been a young person in your life who has experienced harms in the context of using digital technology, I am not wanting to minimize that reality at all. That can happen. But I also think we need to be mindful of the fact that those things can happen elsewhere, too.

So let’s take the example of bullying. I mean, before digital technology, kids were bullied at school. This has been a common experience that people have talked about for hundreds of years. You can go back to Charles Dickens, right? And so I’m not wanting to minimize the harmful impact of that experience, but to understand that it’s a human experience, that it can occur in lots of contexts.

And what we do is we look at each context and ask, how can we minimize the risks and maximize the benefits? We don’t say, let’s get rid of schools because kids get bullied at school. We say, how can we have better policies and practices in schools to minimize those harms? And over time, hopefully we do better. And I think we need to apply the same logic to digital spaces, because like it or not, they’re not going away. So what we need to do as a society is take the risks seriously, but also understand the benefits, and make sure that we’re designing and educating towards those outcomes.

Ron Dahl Thank you. That was very helpful. How do we bring evidence-based and developmental insights into these, hopefully well-intentioned, efforts to create more controls and safeguards and guardrails? And what do you think of the current ones?

Nick Allen Yeah. So I think we’re starting to see these issues taken more seriously and built into product design. And that’s an exciting opportunity, because I think you have a couple of different approaches to this concern about the safety of these products. There’s what I might call the hand-wringing concern, which just says, can’t we just get rid of it? Wouldn’t it be better if we didn’t have these products, and so forth? And that’s not only unrealistic, but it also cuts off all the potential benefits that products like this have. So if we’re not going to do that, then surely we do want the companies who produce these products to be engaging in safer design.

I think there’s, perhaps understandably, some skepticism about it where people are saying, well, look, you know, these companies are only doing it because they’re being dragged kicking and screaming by all the social concern about it. And, you know, there may be an element of truth to that, but I also think it’s important that we don’t criticize them for doing good things when they do them, for whatever reason.

I mean, one could argue that the automobile industry was dragged kicking and screaming to bring in safety standards and emission standards and things like that into their products. But they did it, and it’s been beneficial.

And so I think when a company like Instagram, for example, starts to build features into their products that give parents more control, that give kids more control, that recognize there’s a developmental context to the use of their product, of course their initial efforts won’t be perfect. No efforts will be perfect. But I do think we shouldn’t criticize them for moving in the right direction. Right? We should actually celebrate it. We should say, that’s great, now let’s go further. We’re here to help you and support you and inform you, as scientists and clinicians and educators and policy people. Now that you’ve set foot on this path, let’s keep going, and let’s make these products really great, so that you can get all the delight and benefit and all the good stuff, and we can minimize the risks. If we can work with some of these large companies to move their products towards better safety and better design, you’re really playing the main game then.

Ron Dahl Nick, thank you so much. This has been such a stimulating discussion. It’s raised so many compelling issues that are important for us to be grappling with. I really appreciate your insights and wisdom.

Nick Allen Thanks for having me.

Ron Dahl That was my conversation with Nick Allen, professor of clinical psychology at the University of Oregon and the director of the Center for Digital Mental Health.

*****

Ron Dahl I’ve enjoyed Jacqueline Nesi’s wonderful newsletter, TechnoSapiens, for a while now. It’s about living and parenting in the digital age. Jackie is a faculty member at Brown University where she studies how technology affects kids and how parents can help. We spoke about her thoughts on social media and product design. Here’s our conversation:

*****

Ron Dahl I think a lot of what we have been discussing as a team is how to capture some of the nuanced perspectives on these complex issues that are contributing to, you know, what many people are calling a moral panic. And so it is such a pleasure to have you.

Jacqueline Nesi Thanks for having me. It’s really great to be here. And, you know, I’ve been following your work, of course, for a long time. It’s really an honor to be here in this conversation.

Ron Dahl So, if you had a team of people trying to partner to improve design, what are the principles that you would want to highlight from what you’ve studied, what you have insight into that might help guide a more collaborative improvement of design?

Jacqueline Nesi Great question. So the first thing that comes to mind, and this has started to be written into some of the age-appropriate design codes and things like that that we’ve seen, is, in general, prioritizing the best interests of the child. If a company, when designing its products, is operating with the mandate that it consider the best interests of the child, I think that in itself leads to a lot of different decisions than if the mandate is something else. And so that’s probably the first thing I would start with.

A second principle that I would consider comes from what we know from the science: kids are impacted so differently by these technologies, and there are so many different ways they’re using them and being affected by them. So it’s really thinking about customizing the experience on these platforms, giving parents and kids or young people more opportunities to customize what their experience looks like, whether that’s, you know, having the option to opt out of certain types of content on their feeds, having more input into how the recommendations work for them, if there are recommendations at all, or having more say in what it looks like when they first open up the app.

In one of my studies, we were talking to a number of teenagers, and one of them recently made this very basic but interesting point about Snapchat. They said, “Yeah, you know, every time I open Snapchat, the first thing that pops up is a camera pointed at my face.” And they said, “That just really makes me feel like every time I open it up, I have to check my appearance, I have to look at my makeup, I have to fix my hair. I have to make sure that I’m, you know, that I’m looking okay.” And this is a kid who’s opening that app dozens, if not over 100, times a day to send messages. And it’s just a very basic thing; I think we forget that the default design of these apps is not necessarily working for every kid. If there were a way for that young person to change that, so that it wasn’t a camera pointing at their face every time they opened the app, changing that experience to be something that works better for them and their well-being, that I think would go a long way as well.

Ron Dahl Yeah, what a great example. And that also points to the tip of an iceberg of related issues of self-consciousness and feedback about appearance, which leads to another set of questions, not only about tech company design, but also regulation. Like, where are the flashing yellow lights? There are risk areas where the data, even if not causal, make people nervous. And I think a lot of this focuses on girls in particular and the transition into adolescence, where the sensitivity to feedback about appearance is heightened. Of course, life in general has been doing this, and obviously tech is a key part of it, but it’s broader than that. Thoughts?

Jacqueline Nesi I certainly think that the concerns about appearance and feedback on appearance, that’s certainly one of the major yellow flags, I would say, when it comes to social media. I think there are maybe two broad categories of risk when I think broadly about the risks of social media. One is based on time spent: is it getting in the way of other things that are really important for adolescents’ mental health? So I think that one risk is just overuse.

The other risk, I think, has to do more with content, what they’re being exposed to, and being exposed to problematic content. And some of that may be appearance focused, beauty focused, but it can also be things that are really scary, or just not age appropriate, or toxic. You know, there are so many different ways that the content can be problematic. So that’s another area that I would point to.

Ron Dahl If that’s one of the flashing yellow lights in the area you just described, what might be the design or policy that might target a better balance?

Jacqueline Nesi I think different design decisions likely affect kids in different ways. For example, one feature that’s been suggested as something that should be changed on these platforms is notifications. We know that notifications go off on a kid’s device and then they feel the pull to visit the platform. And for a lot of kids, I think that is very problematic, and having no notifications would be a much better situation.

I think for other kids, if they’re not getting notifications, then they’re repeatedly checking the app over and over again to see if something new has happened. And so that actually would be a worse outcome for them. So that may be a way of avoiding the question a little bit, but I do think that it’s very hard to think about these types of design changes, sort of in isolation, because I think they could affect different kids in different ways.

Ron Dahl Well, let me just zoom in on this a little more, because I think your example of the image of the camera was so powerful. There are now several studies, some of them using imaging, some of them using simpler measures of physiological arousal, showing that just the message that you’re being observed or evaluated, in this period when puberty is just starting, turns up the neural systems that are involved in the sensitivity to being evaluated. It’s true for all of us at all points in the lifespan, but it’s really amplified then. Given your knowledge of how young people are using these platforms, what might help shift the balance away from being so focused on yourself being evaluated, toward engaging with people and connecting with them?

Jacqueline Nesi Definitely. Yeah. And I think that’s really the thing to be thinking about. We know in general that when these platforms facilitate connection, when they facilitate communication and community building and real socializing, that’s when they’re at their best. And when they’re doing things like increasing self-focus and self-consciousness, that’s obviously not a good thing. I think that metrics play a big role in this: counting the number of likes or views or shares or comments, anything where there are followers. There are a lot of numbers; the platforms are very quantified in that way. And I think there are a lot of cases where not having those quantified metrics would probably make a big difference.

I also think there’s a lot to be said about the way recommendations work on a lot of these platforms, the way the algorithms work. Of course, they’re really complicated. But in general, a lot of the time these algorithms are prioritizing engagement: the longer users spend watching a video, or the more times they share it, the more likely it is to show up on a person’s feed, for example. But there are other indicators, beyond engagement, that could play a bigger role in deciding what gets shown to young people. And so that’s something I think about, too.

Ron Dahl All really, really good points. Let me just pick up on the thread you mentioned about engagement: that there’s a shortcut to engagement, which is just holding eyeballs onto content and changing it, or changing its features, rapidly enough that it keeps pulling attention.

But what if we have more evidence that supports what you described? That if young people are using technology to connect, to create, to contribute, to be part of collective action in a positive way, in ways that are the opposite of just focusing on oneself or being socially anxious about feedback, then there’s some partnership there. I mean, if the engagement isn’t just sitting and looking at content that has commercial value, but is engagement in positive activity, then maybe there are some opportunities. I think sometimes we get so focused on destroying the problem, rather than shifting to healthier versions of what young people want to be doing, that we’re trying to fight upstream. And there would be people who would scream at the very idea that technology could be, you know, a positive force. But what if it’s a way for young people to connect and contribute and create things? I mean, how might we try to think in that space?

Jacqueline Nesi Yeah, I love that. I think that’s a really nice reframe. Rather than just getting rid of the problems, which of course we should be doing too, how do we actually try to promote the good stuff, the things that we know are working well for young people? In some ways, I think that comes back to this idea of thinking about the best interests of the child in designing these platforms. If you’re solving for eyeballs, you come to a very different product than if you’re solving for connection, creativity, and community engagement. I have no doubt that these platforms could figure out ways to promote those things, the things that are good about them. But I think it probably requires a pretty significant redesign, and maybe a different business model as well.

Ron Dahl You know, the other thing this connects to is enlarging our framework. A lot of the moral panic about technology and youth comes from the wealthiest countries. And of course, 90 percent of the young people in the world are growing up in low- and middle-income countries, where we don’t have the kind of data we need, but where there tends to be a stronger signal of positive associations: young people using technology to connect, to get information, to learn, to connect with people like them. If the scaffolding is there, it’s more likely that the technology leads to positive outcomes. So how do we make progress on increasing that opportunity frame? How do we think of these as tools that are often abused and problematic, but still help shift some of the frame in that space?

Jacqueline Nesi Yeah. Oh gosh, it’s such a good question. I don’t know if I have a good answer on that one. There are these very real benefits that we see for some kids. You know, learning and sort of discovery and having access to all this information, being able to explore identity and sort of join a community, I think that’s particularly important for youth who might feel marginalized in different ways in their offline lives. I think all of those benefits are so important. And I think that right now, the balance of those benefits doesn’t necessarily outweigh the risks. Like, I think that’s something we really need to be thinking about.

Like, if we were really going to design a product from scratch to confer those benefits, we would not come up with the platforms that we have right now. You know, we would come up with something very different. And so I do think there is something to be said for thinking about how we maintain those benefits, how we maybe amplify those benefits, while trying to minimize the risks, knowing we can never fully eliminate the risks, but minimizing them as much as possible.

Ron Dahl So that also introduces another big framework that I’d love to hear your thoughts about, and that’s inequities. In many ways, one of the concerns about technology is that it is more likely to create opportunities for the most advantaged kids, and more likely to create risks and exploitative dangers for the least advantaged. And yet, it’s easy to imagine how that could be flipped, how it could create more equitable access to information, to connection, to ways to contribute. And I think here again, some of that frame is becoming relevant in the global setting, because young people who become savvy with technology have opportunities in those settings. But it’s broader than that; it comes back to design and policy. How do we think intentionally about equity, so that technology doesn’t worsen inequities but actually helps to address them?

Jacqueline Nesi Yeah, I’ll mention one of the interesting things that I think we’ve seen across a couple of different studies; in particular, I’m thinking about the work that I did with Common Sense Media. At least in the US, in the work we’ve done here, the kids who are struggling in different ways in their offline lives, whether that’s because they’re marginalized in some way, like LGBTQ teens who may feel marginalized in their real-world communities, or kids who may already be struggling with their mental health or struggling socially, those kids actually tend to see more risks online. They tend to report more of the challenges when it comes to social media, but they also tend to report more benefits. So they also say that they’re benefiting more from the connections they’re building online.

And so I think that creates a really interesting challenge, because you don’t want to take away what’s working for these kids, but we also have to recognize that they’re running into more of the risks. So how do we address that? And I do think, like you said, this is a situation where technology can actually contribute to greater inequities. It can contribute to greater challenges for kids who are already struggling if we’re not careful about how these technologies are used and designed.

I do think that so many people in these conversations have the same goal in mind. I think we all want to do the best thing for young people. In almost all cases, that includes all of the academics in this debate; it includes the policymakers; it includes the tech companies in a lot of cases. But in order to do that, I think we need to really be having these conversations, talking to each other rather than past each other, and that’s not always happening.

Ron Dahl Thank you so much Jackie.

Jacqueline Nesi Alright, thank you so much, thanks for having me!

*****

Ron Dahl That was my conversation with Jacqueline Nesi, a faculty member at Brown University. Her newsletter about living and parenting in the digital age is called TechnoSapiens.

Thanks also to Professor Nick Allen.

And thanks to Max for talking with us about how he took control of his social media usage.

We know digital technology plays an ever-increasing role in almost all aspects of our daily lives. This creates exciting opportunities—especially for children and adolescents who are growing up fluent in these new ways of using tech. Yet this early immersion is also creating concerns, not only about social media’s impact on youth mental health, but also about disruptions in the way young people interact, learn who they are, and form relationships. We do not really understand how this may be altering some of these formative aspects of social learning.

Moreover, the rate of change appears to be accelerating. The exploding development of AI is just starting to exert its influence.

Taken together, these pose great challenges: unprecedented opportunities alongside unprecedented risks and potential dangers, and uncertainties about how to support the benefits and opportunities while limiting the risks and dangers.

As developmental scientists, we know that engaging adolescents will be crucial to meeting these challenges. Adolescents understand many of these issues better than adults. And they want to contribute, they want to gain respect and admiration by contributing to solving grand challenges.

The sheer complexity of these challenges–and the accelerating pace of change–demands new approaches, because we need regulations and platform designs that not only promote healthy social learning and development, but also help prepare youth for a successful adulthood.

An adulthood that will require skills and wisdom for navigating a technologically complex world. This is unlikely to be achieved simply by banning technology or insulating young people from access to tech. There could be dangers in failing to take action. Yet there can also be dangers in leaping too quickly to solutions rooted in fear.

These are compelling issues that deserve a high level of priority and careful consideration.

Solutions may require fundamentally new kinds of partnerships, not only among researchers, clinicians, and policymakers, but also with tech companies, and with young people themselves.

This is one of the great challenges of our time. One worthy of bringing together and integrating all relevant expertise at multiple levels to find a path forward that will help young people thrive in a rapidly evolving and increasingly techno-centric world.

*****

If you’d like to learn more about the science of adolescence, visit us at adaptivitypodcast.org, or share your thoughts through the contact information on our website or by using the hashtag #adaptivitypod.

Our podcast is produced at UC Berkeley for the UCLA Center for the Developing Adolescent. Our senior producer is Polly Stryker. Our producer is Meghan Lynch Forder. Our engineer is Rob Speight. I’m Ron Dahl. Thank you for listening.
