You don’t need to get better at prompting AI. You just need to get more comfortable talking to it.

A small caveat before the argument: prompting has not vanished. For complex production work (agents, pipelines, anything running at scale), specificity still pays off, and the vendors publish active guides that say so.

The issue is that the manual techniques your team is being sold in two-hour workshops as “foundational skills” — persona tricks (“act as a senior analyst”), XML scaffolding, emphasis markers — are all better applied by the AI itself. This is meta-prompting: a technique in which an LLM generates or refines the instructions for another model.

Treating manual prompting as a “foundational” human skill in 2026 is essentially theater.

Two things that changed

Current AI models can do two things that make manual methods much less useful than they used to be.

First, AI models can now actually figure out what you mean from rough, imperfect language. You no longer have to say “Act as a senior analyst with fifteen years of experience and summarize the following report in five bullet points for an executive audience.” You can say “give me the gist of this for my boss,” and the model will understand you.

Second, and most overlooked: AI systems can now be told to “think about how they should be instructed,” which often yields more efficient and robust results. This lifts the cognitive load off the user while still sharpening plans and ideas by removing ambiguities.
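To make this concrete, here is a minimal sketch of the meta-prompting pattern: one call asks the model to write the prompt a careful user would have written, and a second call executes that improved prompt. The `call_model` parameter is a hypothetical stand-in for whatever chat-completion API you use; it is not a real library function.

```python
def meta_prompt(rough_request, call_model):
    """Two-pass meta-prompting: refine the rough request, then execute it.

    `call_model` is a hypothetical stand-in for any chat-completion call;
    it takes a prompt string and returns the model's reply.
    """
    # Pass 1: ask the model to write the prompt a careful user
    # would have written, resolving ambiguities up front.
    refine = (
        "Rewrite the following rough request as a precise prompt. "
        "Resolve ambiguities and state the audience, format, and "
        "constraints explicitly:\n\n" + rough_request
    )
    improved_prompt = call_model(refine)
    # Pass 2: run the refined prompt in a fresh call, so the execution
    # step sees only the improved instructions, not the refinement chatter.
    return call_model(improved_prompt)
```

The point of the sketch is the division of labor: the human supplies the rough intent (“give me the gist of this for my boss”), and the model does the prompt-engineering ritual itself.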

I call this cognitive amplification: when a tool sharpens your thinking by asking the questions you would have asked yourself if you had more time to think.

The real question is no longer “how do I write a better prompt?” It is “how should I actually be working with AI?”

The interview is the upgrade

A search bar gives you a result shaped by the question you typed. An interviewing tool gives you a result shaped by the question you would have typed if you had thought about it for an hour.

Engaging with AI this way can feel uncomfortable at first. People expect AI to be fast, and an interview feels slower than a search bar. But it is slower the same way a good consultant is slower than Google: you wait a little longer and walk away with a better version of your own idea than the one you came in with.

Nobody complains that their consultant asked them clarifying questions. They complain when a consultant charges them for work based on a half-formed brief. AI is no different.

An AI agent that is specifically optimized to run a clean interview and then hand the result to a fresh execution step is a must-have in 2026: it keeps the execution step from drowning in chat history, and it is a first step toward cognitive amplification with AI. This is the core thesis behind Nib.
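Here is a minimal sketch of that handoff pattern. The function names and the `ask_user` callback are illustrative assumptions, not Nib’s actual implementation; the point is that only a distilled brief, never the full chat transcript, reaches the execution step.

```python
def run_interview(questions, ask_user):
    """Collect the user's answer to each clarifying question.

    `ask_user` is a hypothetical callback: question string in, answer out.
    """
    return [(q, ask_user(q)) for q in questions]

def build_brief(rough_idea, answers):
    """Distill the interview into a compact brief.

    Only this brief is handed to the fresh execution step, so it
    starts with clean instructions instead of a long chat history.
    """
    lines = ["Task: " + rough_idea]
    lines += ["%s -> %s" % (q, a) for q, a in answers]
    return "\n".join(lines)
```

The design choice worth noting is the fresh context: because the execution step receives only `build_brief`’s output, it cannot be distracted by the back-and-forth that produced it.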

The short version

Stop trying to write better prompts. Talk to AI plainly, even messily. Let the AI interview you, and let it do the cognitive work that would otherwise cost you hours of trial and error on your way to better outputs.

You already know how to be interviewed. The only new skill is letting the machine do the asking.

Try Nib

An interviewing prompt agent built on this thesis. Speak or type a rough idea — Nib asks the questions, then writes the optimized prompt.

Open Nib