Great post, Antowan. In a recent discussion with the Decode Econ Faculty Network, this post was requested.
I'm glad I could be a part of the discussion.
This post hits home in an emotional way for me. I'm a huge advocate for trying to convince people that overreliance on AI is a real detriment to the brain development of younger generations. I personally limit how much I use it, but I know plenty of people who don't think it's something to worry about, and it hurts to see. Awesome post; I really appreciate the effort and insight.
Thanks, Jonathan. In the long run, the overreliant users are hurting their future prospects. I'm a big believer in learning from each other where we can.
One thing students forget: Google et al. leave a digital watermark in the output. SynthID, in Google's case, can easily be tracked. (Think about when people used to ask for the "Track Changes" log in Word, an old-school method for catching students whose output didn't match their class participation.)
*The Luddites were about preservation of wages and quality rather than an outright rejection of technology. Nothing wrong with your usage, just a small detail regarding historical context.
Yes, I was aware of that detail but opted for the popular perception of the movement. Thank you for pointing it out, though. The ultra-curious will do more research now.
Hopefully we both piqued someone's interest.
Your determination and will took you where you aimed. It is not easy, but you made it, and you are proud of your accomplishments. Every one of us should be proud of our own accomplishments, not accomplishments that depend mostly on AI. It is a thin line, and it is difficult to know when to stop and what is acceptable. We will keep learning and researching, not only following the trend.
Agreed on every part. I have a bad habit of downplaying and undercutting my achievements; I'm trying to do better.
Very good post outlining the issues. A couple of things:
My Critical Thinking bone was vibrating. This is an advocacy piece, not an analysis piece. Why do I say that? Because it did not do what Karl Popper (rightly) says we must: state the counterargument(s) and try to prove them. For example, there is an argument that professors are not doorknobs, so to speak, and that we will adapt (actually, are already adapting) our instruction to ensure that AI tools do not shrivel our students' brains, destroy our profession, and make us all jobless. It would make sense to list some of the solutions.
In fact, you mention this, which is a great list of what it takes to overcome the problem: "appropriate use requires discipline, metacognition, and a clear understanding of learning goals. Qualities that most students don’t yet possess and that we’re not systematically teaching them." How do you propose we teach these qualities?
Great article, Antowan. I’m on the side that doesn’t see AI as the enemy, but as an opportunity to relearn, reskill, and upskill.
In a way, we educators are like LLMs in our own fields. We’re accumulations of knowledge built over time. That depth is powerful, but it can also make us biased or overly confident in our frameworks. I’ve been using AI intentionally to challenge my own blind spots and push my teaching beyond my default patterns.
Overreliance is a real risk, especially if speed replaces understanding. But ignoring AI is also risky. If we don’t engage with it critically now, we risk falling behind, not just technologically, but in how we prepare students for the labor market they’re actually entering.
I agree. My biggest concerns come from strategic decision-makers and students' learning behavior. I'm not necessarily anti-AI, but AI cannot teach anything that may have been gained from interacting with another human. In that interaction, something more is gained that we cannot get from LLMs: a connection. I have tried having deep conversations with LLMs, and they felt hollow, which is why the big push of it onto the younger generations bothers me. Furthermore, companies and capital owners are throwing caution to the wind in its development while at the same time blaming lawmakers for not making regulations and blaming them for regulating their energy use. My only hope is that the lawsuits are fruitful in getting damages for the families who have been most negatively impacted.
I appreciate this nuance, Antowan.
I’m very pro-AI, and I still agree with you on an important distinction. Information is not the same as formation. AI can process, summarize, and simulate depth. What it cannot replicate is the relational friction of a real human exchange, where emotion, accountability, and presence shape judgment.
For me, the path forward is not slowing AI down but being more intentional about how we embed it. Students should use it to stretch their thinking, not to bypass the struggle that builds it.
And on incentives, I agree with your concern. When capital accelerates deployment while social and energy costs lag behind, the imbalance becomes obvious. Innovation and responsibility cannot move at different speeds forever.
Thank you for this post. It helps promote the argument for the immediate need for AI literacy and ethical use.