This article is part of The Complete Guide to Using AI for Australian University Study, our deep-dive hub covering policies, tools, citations and what’s actually allowed at Australian unis.

The news hit the edtech world like a sledgehammer. Between 2022 and 2024, Chegg lost more than 1.5 million subscribers (Chegg investor releases). Their stock price collapsed 99% from its pandemic peak. The company that once dominated student study help was suddenly fighting for survival.

But here’s the thing: Chegg’s collapse wasn’t just bad luck or market conditions. It was inevitable. The company built its entire business on selling answers, not learning. And when ChatGPT came along offering the same service for free, students jumped ship faster than you could say “academic integrity violation.”

The real story isn’t about one company failing. It’s about a fundamental shift in how educational technology works, and what that means for students choosing tools to actually help them succeed.

The Answer-Selling Business Model

At its peak in February 2021, Chegg was worth nearly $15 billion (Macrotrends), built on a simple premise: students need homework answers, and they’ll pay for them. For years, it worked brilliantly. Subscribe to Chegg Study, search for your textbook problem, get the solution. Easy money.

But let’s be honest about what this actually was. Chegg wasn’t helping students learn; it was helping them avoid learning. The platform became synonymous with academic shortcuts. University professors started recognising “Chegg solutions” in submitted work. The company faced constant criticism for enabling academic dishonesty.

Still, the subscription model printed money. Until it didn’t.

ChatGPT Changed Everything

When ChatGPT launched in late 2022, it didn’t just disrupt Chegg; it exposed the fundamental flaw in the company’s business model. Why pay $15 monthly for homework answers when AI would solve your problems for free?

Students made the switch en masse. Chegg’s subscriber numbers plummeted. The company’s attempts to pivot (launching its own AI tutor, restructuring, even laying off staff) came too late. It had spent a decade training its users to want answers, not understanding. ChatGPT simply delivered those answers more efficiently.

Meanwhile, other platforms felt the same pressure. Quizlet shuttered its AI tutor product. I’ve compared what replaced them in ChatGPT vs GradeMap vs Bloom AI: which AI study tool actually helps?. As The Guardian reported in ‘Nobody is blind to it’: mass cheating through AI puts integrity of Australian degrees at risk, the entire “homework help” industry found itself competing with free AI that could solve problems, write code, and explain concepts without the monthly fee.

Universities Are Redesigning Assessment

Here’s where it gets interesting for Australian students. While companies like Chegg were doubling down on answer-providing, universities were already moving away from assessments that could be solved with simple lookups.

The shift started before AI went mainstream, but ChatGPT accelerated it dramatically. Universities recognised that if a chatbot could ace your assignment, the assignment wasn’t measuring learning. It was measuring access to the right search terms.

Now we’re seeing assessment redesign across the sector. TEQSA explicitly supports assessment reform, advocating for authentic tasks, portfolio-based evaluation, oral presentations, and in-class components: the kind of work where understanding matters more than the right answer. Based on interviews conducted during our product validation research, students are noticing this shift. The old “find the answer online” approach simply doesn’t work anymore.

This creates a problem for tools built around answer-providing. And an opportunity for tools built around learning support.

The Learning vs. Cheating Divide

The distinction between learning support and academic shortcuts isn’t always obvious. But it’s becoming the defining line in educational technology.

Learning support helps you understand concepts, interpret requirements, and develop your own responses. It’s like having a knowledgeable friend explain things until they click. The work you submit is still yours; you just had better support getting there.

Answer-providing gives you the solution directly. Copy, paste, submit. The learning happens to someone else (if at all). The work you submit isn’t yours in any meaningful sense.

Universities have caught on to this distinction. Most Australian universities now permit AI use for learning support (brainstorming, concept explanation, study revision), though policies vary by course and institution. The line is generally drawn at submitting AI-generated work without acknowledgment. It’s coaching, not completion; see using AI for university study without cheating for the practical playbook.

What This Means for Students

If you’re choosing study tools in 2026, the Chegg collapse offers some valuable lessons.

First, tools that help you cheat won’t help you learn. Even if you get through individual assignments, you’ll struggle in exams, practical applications, and subsequent subjects that build on concepts you never actually understood. The shortcut becomes a long-term academic dead end.

Second, assessment is evolving faster than many students realise. The strategies that worked five years ago (find the answer, paraphrase, submit) increasingly don’t work. Universities are designing tasks that require genuine understanding and original thinking.

Third, the tools that survive will be the ones aligned with how learning actually works. Not the ones that bypass it.

This is why I’m building GradeMap differently. Instead of providing answers, it’s designed to coach understanding: it helps you interpret assignment rubrics, understand what markers want, and develop your own responses. The goal isn’t to complete your work. It’s to help you complete it better yourself.

The Australian Context

Australian universities are handling the AI transition more thoughtfully than many international counterparts. Rather than blanket bans or detection-focused approaches, most have developed frameworks that distinguish between legitimate AI use and academic misconduct.

UNSW’s guidance permits AI as a learning support tool when properly attributed. La Trobe has partnered with OpenAI to roll out ChatGPT Edu, starting with 5,000 licences in 2026 and scaling to 40,000 by 2027. The University of Sydney’s AI policy provides a clear framework for distinguishing legitimate AI use from academic misconduct. Monash provides free Copilot access. The sector recognises that AI literacy is becoming as important as digital literacy was a decade ago.

But this creates new challenges for students. The old rules were simple: don’t plagiarise, don’t collaborate unless permitted. The new rules are more nuanced: use AI for learning, acknowledge its use, don’t submit its output directly. Navigation requires more judgment, not just rule-following.

Students who understand this distinction, who can use AI tools for genuine learning support while maintaining academic integrity, have a significant advantage. Those who don’t often find themselves inadvertently crossing lines they didn’t know existed.

Looking Forward

The edtech landscape is consolidating around a new reality. Tools that sell shortcuts are competing with free AI. Tools that support learning are finding new opportunities as universities redesign assessment and students need better study strategies.

For students, this means being more intentional about the tools you choose. Ask yourself: is this helping me understand the material better, or just helping me avoid understanding it? Am I developing skills I’ll need next semester, or just getting through this assignment?

The companies that succeed in this new environment will be those that genuinely help students learn. Not the ones that help them avoid learning. Chegg’s collapse isn’t just about one company’s struggles. It’s a signal about what the future of educational technology looks like.

And that future looks a lot more focused on actual learning than the past decade has been.

References

This analysis is based on interviews conducted during product validation research, 2026.

The Guardian. (2024). ‘Nobody is blind to it’: mass cheating through AI puts integrity of Australian degrees at risk.

TEQSA. (2024). Gen AI, academic integrity and assessment reform. Tertiary Education Quality and Standards Agency.

University of Sydney. (n.d.). Artificial intelligence. Academic Integrity.

What does this mean for current students?

Choose study tools that help you understand concepts, not just complete assignments. The skills you develop now will matter more than the grades you get through shortcuts.

Are AI tools like ChatGPT considered cheating?

Not when used for learning support: most Australian universities permit AI use for study, revision, and concept understanding, though policies vary by course and institution. The line is drawn at submitting AI-generated work without acknowledgment.

How can I tell if a tool supports learning or enables cheating?

Ask yourself: after using this tool, do I understand the material better, or do I just have an answer? Learning tools teach you; cheating tools do the work for you.