Progress or Peril: Wrestling with My AI Anxieties
Seeking a balance between benefit and risk in AI adoption
As I strive to build AI habits in my daily work, I find myself sidetracked by posts on social media and comments heard in my communities. Recently, an old friend, someone I was once close with but rarely speak to now, posted a series of images that I can’t seem to shake. The posts equated the use of GenAI tools for something as simple as writing an email with destroying the world. As polarization grows, attention-grabbing images and headlines seem to dominate the conversation, feeding into fear.
But why did a single moment spent reading a screenshotted post linger with me, reinforcing my own hesitations about adopting this powerful technology? My theory is that my existing fears make the barriers easy to build and difficult to overcome. Join me as I explore my top concern and try to make sense of how to proceed.
A recent, yet-to-be-peer-reviewed MIT study tested the impact of allowing students to use GenAI, to varying degrees, to assist in writing essays. Using electroencephalography (EEG) to monitor brain activity, the researchers found that reliance on AI tools not only degraded critical thinking but also limited what was committed to deep memory. Students who used the tool became lazier, providing less input and critical thought each time they wrote.1
Futurist Sinead Bovell also challenges the idea of automating everything in our lives, giving the example of writing an email as our daily opportunity to practice deep thought, structuring, and critical thinking. If we always delegate this to AI without replacing it with another opportunity for deep thinking, we risk losing our ability to think critically.2
This concern intensifies as I watch my oldest kiddo finish kindergarten, and I try to nurture his curiosity about the world. How do I protect my kids from the potential downsides of new technologies without depriving them of valuable learning opportunities?
Perhaps more concerning is the potential for children to form unhealthy attachments to chatbots, like the addiction patterns we’ve seen with smartphones and social media. This is happening to adults in alarming ways.3 For kids, whose brains are still developing, the long-term impact could be even more significant.
While this is the concern I find myself focused on most often, it has a lot of company among my worries. The other considerations running through my mind each time I hit go on an AI tool include the environmental impacts; the loss of creativity and lucrative work for creatives; the risks that come with an increased attack surface and AI-powered bad actors; and the physical safety and mental health of my kids, who are growing up in a world where it is increasingly difficult to determine what’s real.
Why and how I’m working through my hesitation
My initial approach was to label commentary like the screenshotted post in a social media story as virtue-signaling,4 categorizing it in a way that discredited the poster and let me scroll past. While virtue-signaling might be a piece of what’s happening, the concerns behind these sentiments are valid and deserve my attention; in the end, I’m glad my friend shared their opinions. I need to confront the ethics of AI use as it pertains to my life, and the complexities of these issues, including the benefits, are part of the story.
The MIT study that highlighted the risks of GenAI use to learning and critical thinking abilities also found that introducing the tool at the right point could enhance learning rather than diminish it.5
Bovell’s conclusion isn’t to steer clear of AI at all costs because of the risk. Instead, she calls for entire-system changes, pointing out the importance of moving forward with intentionality and having controls in place rather than inserting AI into existing educational systems.6 We need to study how to help kids work with these tools without lifelong sacrifices to their creativity, critical thinking, or safety.
My other concerns have flip sides and potential benefits too, especially if we approach the use of this technology thoughtfully.
So, while ignoring my concerns or avoiding AI might be easier in the short run, I’m choosing to take the middle road: to dig deeper, learn more, and thoughtfully engage with commentary when warranted. I will be working to make the right decisions on how to incorporate AI and other technologies into my life and into my kids’ lives so they can benefit without losing the curiosity and critical eye that define them. Each time I use a GenAI tool, I will consider the true value of the task, weighing the risks and benefits.
Will you?
In a moment when even the pope is warning about AI ethics,7 I’m choosing to forge ahead with research, open conversation, and a community of colleagues working through these concerns. Like most challenges, the best way forward is together, armed with as much information as possible. Let’s keep the conversation going.
Andrew R. Chow, “ChatGPT May Be Eroding Critical Thinking Skills,” Time, June 23, 2025.
Kashmir Hill, “They Asked an A.I. Chatbot Questions. The Answers Sent Them Spiraling,” The New York Times, June 13, 2025.
Virtue-signaling is the public expression of opinions or sentiments intended to demonstrate one’s good character or social conscience or the moral correctness of one’s position on a particular issue (“Virtue signalling,” Cambridge English Dictionary).
Chow, “ChatGPT May Be Eroding Critical Thinking Skills.”
Clare Duffy and Christopher Lamb, “Pope Leo calls for an ethical AI framework in a message to tech execs gathering at the Vatican,” CNN Business, June 20, 2025.
— Haley Gove Lamb, Manager within the Office of the CTO