The next big skill may not be coding — it may be asking better questions.
In a January 2025 podcast with Cleo Abram, Nvidia CEO Jensen Huang said that if he were a student today, he would focus less on learning to code and more on learning how to interact effectively with AI systems like ChatGPT. His comment quickly went viral, triggering an intense debate across social media platforms.
Huang clarified that coding is not obsolete. Instead, he argued that students should prioritise mastering how to work with AI tools to become better professionals — whether as lawyers, doctors, engineers, or scientists. According to him, interacting well with AI comes down to the art of asking better questions. Prompting, he noted, requires creativity, structure, and critical thinking.
The online conversation quickly coalesced around a broader concept: AI fluency.
AI fluency goes beyond typing a question and copying an answer. It involves breaking complex problems into smaller parts, iterating on the AI’s responses, applying domain expertise, and critically evaluating its outputs. It means understanding when to trust the system — and when to challenge it.
The real formula emerging today appears to be:
Domain Expertise × AI Fluency = Leverage
Without subject knowledge, AI outputs can be misleading. Without AI fluency, users barely scratch the surface of the tool’s potential. If this trend continues, the real divide across industries may not be between AI and non-AI professionals, but between those who can direct AI intelligently and those who cannot.
One interesting insight from the debate was the idea of a “negative prompt” — telling AI not just what to do, but what to avoid. Avoid clichés. Avoid weak arguments. Avoid oversimplifications. Avoid outdated precedents.
This approach forces deeper thinking before even typing a query. Done properly, prompting becomes a structured cognitive exercise rather than intellectual laziness.
However, most users still treat AI like a faster search engine — ask once, get an answer, move on. That approach limits learning and breeds dependency without understanding.
Critics argue that coding builds structured reasoning, abstraction, and precision. Replacing deep technical engagement with prompt-writing could weaken cognitive skills.
Coding teaches how systems function at a fundamental level. If AI writes the code and humans merely describe outcomes, are we losing an essential layer of understanding?
Supporters counter that value is shifting upward — from execution to judgement. As AI increasingly handles code generation, humans will focus on defining problems, validating outputs, and integrating systems responsibly.
The rise of “vibe coding” and AI-assisted programming reflects this transition in real time.
AI systems are powerful — but not flawless. Even a small error rate can be unacceptable in fields like medicine, law, finance, or aviation. If every output must be verified, the role of human expertise becomes even more critical.
AI fluency without domain knowledge can be dangerous. You cannot critique what you do not understand. And if future generations focus only on prompting AI rather than mastering fundamentals, who becomes the expert 15 years from now?
AI does not generate original knowledge; it recombines patterns from human-created data. If human expertise declines, the quality of AI systems could eventually suffer as well.
Importantly, Huang did not advocate abandoning deep, rigorous learning. He emphasised that AI should help individuals become better experts, not replace expertise altogether. But the viral nature of his remark shows how easily nuance can be lost in online debates.
Reducing the shift to “better prompts” oversimplifies the change underway. Prompt templates can be learned quickly. True AI fluency is more subtle.
An experienced lawyer directing AI to stress-test an argument, avoid weak precedents, and identify logical gaps demonstrates something deeper than surface-level prompting. That level of interaction requires professional judgement.
The competitive question today is not whether AI will replace you. It is whether someone who knows how to use it better will outperform you.
For students and professionals alike, the future may not be coding versus no coding — but coding plus AI fluency. Or law plus AI fluency. Or medicine plus AI fluency.
The divide is already forming — between those who simply use AI, and those who know how to direct it critically, creatively, and cautiously.