When AI Becomes a Crutch: Why the Real Risk Falls on Young Learners - and May Hit Struggling Communities Hardest

Alan Marley • March 18, 2026
Education & Technology

For adults who already know how to think, AI is a force multiplier. For students who have not built those skills yet, it may be something more dangerous - a shortcut that looks like progress.

Artificial intelligence is not the enemy. I use it constantly. For adults who already know how to think, write, research and solve problems, AI can be a genuine force multiplier - saving time, sharpening structure and speeding up routine work. But that is not the same as education. The problem is not whether AI is useful. It is. The problem is what happens when students lean on it before they have built the basic skills it is supposed to support. A professional using AI to accelerate work they already understand is one thing. A high school or college student using it to replace reading, thinking, outlining and writing is something else entirely.

That is where the concern gets serious. It is a concern for all young people still learning. But it may be especially dangerous for groups that have historically struggled more with graduation rates, academic achievement, chronic absenteeism and other barriers tied to family and environment. The issue is not race in isolation. The issue is whether AI becomes a shortcut layered on top of already shaky foundations.

— ✦ —

AI Can Help - But Only When It Supports Learning Rather Than Replacing It

Used correctly, AI can be a genuinely good educational tool. It can explain concepts in plain language, generate practice questions, give instant feedback, help multilingual learners and offer individualized support that many students do not get consistently in overcrowded classrooms. The OECD has noted that generative AI can improve learning and foster critical thinking, creativity and collaboration when used with clear pedagogical intent. UNESCO has argued similarly that AI can help address major educational challenges when implemented with human-centered safeguards.

That is the best-case version. The worst-case version is different. In that version AI does not support learning - it replaces it. The student no longer wrestles with the reading, no longer organizes the argument, no longer chooses evidence, no longer revises language and no longer develops the mental endurance that real education requires. Brookings recently warned that when children replace effortful learning with generative AI to shortcut assignments, it harms cognitive development. Research from Microsoft has linked higher AI use and cognitive offloading to weaker critical thinking. That is the line that matters. AI as tutor can help. AI as substitute hollows students out.

A student who cannot independently identify a weak argument is not ready to outsource writing to a machine. A student who cannot evaluate a source is not ready to let a chatbot summarize research. If you have not built the muscles, offloading to AI does not free you. It just stops the muscles from forming.

A Diploma Is Not the Same as Mastery

One reason this issue matters so much is that schools already struggle with the gap between credentialing and competence. The national public high school adjusted cohort graduation rate was 87 percent in 2021-22, with rates of 90 percent for White students, 83 percent for Hispanic students and 81 percent for Black students. Those are not collapse-level numbers but they show meaningful gaps. And graduation rates do not tell the whole story. NAEP continues to document persistent achievement gaps between Black and White students and between Hispanic and White students in reading and math. NCES describes these as statistically significant and has tracked them over time. In other words, a diploma can coexist with weak academic readiness - and has been doing so for years.

That is why AI introduces a new problem. If schools are already passing too many students who have not fully mastered the material, AI makes that easier. It produces cleaner prose, better formatting and more polished answers than the student could generate alone - which means students may look more competent on paper than they actually are. The danger is not just cheating. The danger is social promotion with better grammar.

Why This Could Hit Some Groups Harder

This is where the conversation gets uncomfortable but needs to stay honest. Recent Pew research found that Black and Hispanic teens are more likely than White teens to report frequent chatbot use for schoolwork and more likely to say they do all or most of their schoolwork with chatbot help. About six in ten Black or Hispanic teens reported using chatbots for schoolwork, compared with roughly half of White teens. Earlier Pew reporting found daily chatbot use among Black and Hispanic teens running above the level reported by White teens.

That does not prove harm by itself. It does raise a serious possibility. If a population is already more likely to face academic headwinds, and also more likely to rely heavily on AI for schoolwork, the technology may end up reinforcing weakness instead of curing it. The point is not that Black or Hispanic students are less capable. The point is that students who begin from a weaker position are more vulnerable to using AI as a crutch instead of a ladder. That is true for poor White students too. It is true for anyone whose school preparation is thin, whose home environment is chaotic, whose attendance is weak or whose reading and writing skills are already behind. The risk is not racial destiny. The risk is compounded fragility.

Environment Still Matters - Maybe More Than Ever

Educational outcomes are shaped by family income, parental education, attendance, mental health and the broader home environment. NCES states plainly that living in poverty, living in a single-parent household or living in a household where no parent completed high school are all associated with lower achievement and higher dropout risk. More than 14 million students were chronically absent in 2021-22, missing at least 10 percent of school days. RAND reported that chronic absenteeism remained a major national problem into 2024-25 even after some improvement.

CDC research links poor adolescent mental health to lower grades and weaker decision-making, and emphasizes that students who feel disconnected from school face higher risk across a range of outcomes. When students are disconnected, anxious, absent or overwhelmed, they are more likely to reach for shortcuts. AI is an exceptionally tempting shortcut in that environment. It is not landing in a neutral classroom full of equally prepared students. It is landing in a country where many young people are already struggling with weak literacy, inconsistent attendance, family instability and uneven school quality.

Dependency Before Competence Is the Core Problem

Adults who earned real competence before AI generally know when the machine is helping and when it is bluffing. Teenagers and undergraduates often do not. That is the heart of the issue. A student who has never built the muscles of concentration, revision and critical judgment is likely to offload those tasks before those muscles even form. That is not empowerment. That is dependency dressed as efficiency.

Brookings has argued that the future of AI in education depends on whether systems are designed to help students prosper, prepare and protect themselves in an AI world. The OECD has made a similar point, stressing that effective use depends on pedagogy, safeguards and human judgment. These are not abstract concerns. They go directly to whether AI builds capability or erodes it. If schools do not draw that line clearly, the likely result is predictable: the strongest students will use AI as leverage while the weakest students will use it as concealment. That is how inequality widens under the banner of innovation.

This Is Not an Argument for Banning AI

Trying to ban AI outright would be foolish. Students are going to use it. Professionals already do. Employers will expect familiarity with it. The goal should not be to pretend the technology does not exist. The goal should be to make thinking visible.

That means assignments that show process, not just polished output. More in-class writing. More oral defense. More staged drafts. More source verification. More reflection on why students made certain choices. If a student submits an elegant paper but cannot explain the argument, defend the evidence or revise the logic without machine help, then the learning did not happen. AI should be permitted as a tool but not as a replacement for cognition. That distinction matters for everybody - affluent students, poor students, students of every background. But it matters especially for young people whose educational footing is already unstable, because they have more to lose from fake competence than anyone else.

My Bottom Line

AI is not automatically making students smarter and it is not automatically making them dumber. It depends entirely on how they use it and when. For learners who already have strong foundations, AI can be a genuine asset. For learners who are still building those foundations - especially in communities already dealing with lower academic readiness, chronic absenteeism, weaker family support and uneven school quality - AI can become a dangerous illusion. It can help them pass without helping them grow.

A society that hands out credentials without competence is setting young people up for failure. The short-term result is more passing, more graduating and better-looking work on paper. The long-term result is adults who have been told they are ready when they are not. That is bad for employers, bad for colleges, bad for civic life and worst of all bad for the very students the system claims it is helping.

If AI becomes a mask for weak learning instead of a support for real learning, it will not close opportunity gaps. It will hide them until reality shows up.

References

  1. Brookings Institution. (2026, January 14). AI's Future for Students Is in Our Hands.
  2. Brookings Institution. (2026, January 14). A New Direction for Students in an AI World: Prosper, Prepare, Protect.
  3. Brookings Institution. (2026, January 20). Do AI's Risks Outweigh the Benefits for Students and Schools?
  4. Centers for Disease Control and Prevention. (2024, November 18). School Connectedness Helps Students Thrive.
  5. Centers for Disease Control and Prevention. (2024, November 29). Youth Mental Health: The Numbers.
  6. Lee, H. P., Sarkar, A., Tankelevitch, L., Drosos, I., Rintel, S., Banks, R., & Wilson, N. (2025). The Impact of Generative AI on Critical Thinking. Microsoft Research.
  7. National Center for Education Statistics. (2024). High School Graduation Rates.
  8. National Center for Education Statistics. (2024). Achievement Gaps.
  9. National Center for Education Statistics. (2024). Chronic Absenteeism.
  10. National Center for Education Statistics. (n.d.). Young Adult Educational and Employment Outcomes by Family Socioeconomic Status.
  11. OECD. (2026). OECD Digital Education Outlook 2026.
  12. Pew Research Center. (2025, December 9). Teens, Social Media and AI Chatbots 2025.
  13. Pew Research Center. (2026, February 24). Demographic Differences in How Teens Use and View AI.
  14. RAND. (2025, August 14). Chronic Absenteeism Still a Struggle in 2024-2025.
  15. UNESCO. (2023, updated 2026). Guidance for Generative AI in Education and Research.

Disclaimer: The views expressed in this post are the personal opinions of the author and are offered for educational, commentary and public discourse purposes only. They do not represent the positions of any institution, employer, organization or affiliated entity. Nothing in this post constitutes legal, financial, medical or professional advice of any kind. References to public figures, institutions, historical events and current affairs are based on publicly available sources and are intended to support analysis and argument, not to state facts about any individual's character, intent or conduct beyond what the cited sources support. Commentary on political and cultural subjects reflects the author's independent analysis and is protected expression of opinion. Readers are encouraged to consult primary sources and form their own conclusions. Any resemblance to specific individuals or situations beyond those explicitly referenced is coincidental.
