I remember that, as a kid, I had access to only a few old books at home. New ones either weren’t available or were too expensive to acquire. If I wanted to read good content back then, I had to walk to the local library and do it on-site.
Later on at university, I observed that professors were constantly reading research papers.[^1] They were respected for the sheer amount of knowledge they had accumulated over a lifetime. We valued them because their knowledge and expertise were targeted and scarce.
Now, in my 30s, reading books is not as trendy as it used to be; knowledge and papers are available from anywhere. Indeed, access to information is cheaper than ever, and even the latest scientific advances spread at light speed. A curious 10-year-old today can access cutting-edge tools and learn from the latest scientific discoveries, right from home!
More remarkably, AI has drastically changed the way we value knowledge in the last few years. Everyone can now prompt an AI assistant (like ChatGPT) for free and get almost instant answers to practically any imaginable question. I believe AI represents an optimization milestone in the way we search for human knowledge. Its impact could be even greater than that of the Google search engine in the early days of the internet.
AI has made answers abundant and general knowledge intuitively accessible. However, current AI models still struggle to figure out how to apply, synthesize, and adapt existing knowledge to create something genuinely new. This limitation is a consequence of the way current Large Language Models (LLMs) operate.
With the rise of more advanced reasoning models, the ability to formulate the right questions to drive AI in the right direction is a skill we need to incorporate into our everyday lives. AI has shifted our focus from finding answers to figuring out which questions to ask and how to frame them.
I’d argue that as the value of getting answers goes down, the value of formulating new, original questions that drive actual action should go up. So, coming up with original and relevant questions represents a real competitive edge for everyday tasks, i.e., a real differentiator.[^2]
What’s more, I believe that the perceived value of those who have mastered the “art of asking questions” will increase accordingly. Just look at the rise of professions that rely heavily on questioning skills, such as podcasting and interviewing. These professionals have mastered the ability to ask good questions and extract value for their audience.
Today, I see myself making a conscious effort to become better at the art of asking questions. This post is about techniques, patterns, and anti-patterns I’ve learned over time from this practice. Let’s dive in!
👉 Check out my compilation of Job Interview Questions.[^3]
When to Ask Questions?
Short answer: every time you can!
Back in the day, asking frequent and awkward questions was often considered an annoying practice. Why? I’d argue that our inherent human laziness had something to do with it. Leaving a question hanging around meant somebody had to squeeze their brain to find an answer (i.e., to do mental work). Otherwise, an uncomfortable void of uncertainty threatened to emerge.
But this is no longer the case.
AI has completely removed the fear of “feeling stupid” for both sides: the asker and the responder. The more we use AI, the more comfortable we become with the idea of questioning everything. In meetings or events, asking is a signal of attention, and in an era where AI can respond fast, it also signals the ability to challenge the status quo and express a personal opinion, which are human traits with increasing value.
The more questions you ask, the better prepared you will be for whatever is coming next.
Techniques
Good questions have purpose and “make sense” (in general).[^4] They are unambiguous, give just enough context, and set limits.
A good question reduces scope so that the answer feels smaller. The question itself must strip away everything irrelevant. As a rule of thumb, if a detail doesn’t make any difference to the answer, cut it from the question.
There are three techniques to improve question formulation:
- Aim, scope, and payoff (ASP)
- Clarity, context, and constraints (3C)
- Falsifiability and measurability (FM)
Aim, Scope, Payoff (ASP)
Without aim you wander. Without scope you boil the ocean. Without payoff you can’t act. A good question provokes change. The faster the change, the more effective the question. Ideally, you want to change something in the next 10 minutes.
So, the core idea of this technique is to state what you’re trying to achieve (aim), how far you’ll look (scope), and what you’ll do with the answer (payoff).
Element | Description | Template |
---|---|---|
Aim | What you want to achieve. | “We want to __.” |
Scope | Boundaries of your inquiry (data, time, users, tools). | “Within __ (data/time/users/tools).” |
Payoff | What you’ll do with the answer. | “So we can __ (decision/action/experiment).” |
Examples:
❌ Before (Vague) | ✅ After (ASP Applied) |
---|---|
Product: “How do we improve onboarding?” | Aim: Increase week-1 activation. Scope: Mobile iOS, signup flow only. Payoff: Decide which of 3 changes to A/B test. Question: “What single change in the iOS signup flow would most increase week-1 activation, and which 3 options should we A/B test first?” |
Research: “Is our model good?” | Aim: Decide to ship or retrain. Scope: Fraud model v2, last 30 days. Payoff: Go/no-go. Question: “Given the last 30 days, does fraud-model-v2 beat fraud-model-v1 on precision by ≥2pp at equal recall, so we ship or retrain?” |
Personal: “How can I get healthier?” | Aim: Improve sleep. Scope: Next 14 days, bedtime routine only. Payoff: Adopt one habit. Question: “Which single bedtime habit should I try for 14 days to raise average sleep by 30 minutes?” |
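If you often hand these questions to an AI assistant, you can even template the structure. Here is a minimal Python sketch (the `asp_question` helper is my own hypothetical naming, not from any library) that assembles the three ASP elements into one well-scoped question:

```python
def asp_question(aim: str, scope: str, payoff: str) -> str:
    """Assemble the three ASP elements into a single, well-scoped question."""
    return (
        f"We want to {aim}. "
        f"Within {scope}, "
        f"what should we do so we can {payoff}?"
    )

# Example: the onboarding row from the table above.
print(asp_question(
    aim="increase week-1 activation",
    scope="the iOS signup flow only",
    payoff="decide which of 3 changes to A/B test first",
))
```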
Clarity, Context, Constraints (3C)
Clarity prevents misreads, context prevents wheel-reinvention, and constraints prevent infinite “it depends.” You want to add just enough context to make the question sound, while still forcing a direct answer to it.
The core idea of the 3C technique is to make the question unambiguous, include the background that matters, and set limits that force trade-offs.
Element | Description | Example |
---|---|---|
Clarity | Define terms, units, and success. Prefer nouns/verbs over adjectives. | “Write-heavy database workload (~5k writes/sec), 99.9% latency <10ms” |
Context | Minimum viable backstory: objective, prior attempts, relevant data. | “Client churn up 3% in SMB last quarter” |
Constraints | Time, budget, tools, risk tolerance, guardrails. | “EU-only, managed service, must decide by Friday” |
Quick rewrites:
❌ Before (Vague) | ✅ After (3C Applied) |
---|---|
“What’s the best database?” | “For a write-heavy workload (~5k writes/sec), 99.9% latency <10ms, EU-only, managed service, what database should we evaluate first and why?” |
“How should I learn LLMs?” | “With 5 hours/week for 6 weeks and access to GCP, what learning plan gets me from zero to fine-tuning a small model on our support tickets?” |
“Can we migrate quickly?” | “With a 2-month deadline, 3 engineers, $30k budget, and zero downtime tolerance, can we migrate the existing pipeline from on-prem to BigQuery?” |
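The same templating trick works for 3C. Below is a small sketch (the `three_c_prompt` function is hypothetical, my own naming) that refuses to build a question until all three elements are supplied:

```python
def three_c_prompt(clarity: str, context: str, constraints: str) -> str:
    """Build a 3C question and fail loudly if any element is missing."""
    parts = {"clarity": clarity, "context": context, "constraints": constraints}
    missing = [name for name, value in parts.items() if not value.strip()]
    if missing:
        raise ValueError(f"Missing 3C element(s): {', '.join(missing)}")
    return f"{clarity} Context: {context}. Constraints: {constraints}."

# Example: the database rewrite from the table above.
print(three_c_prompt(
    clarity="Which database should we evaluate first, and why?",
    context="write-heavy workload (~5k writes/sec), 99.9% latency <10ms",
    constraints="EU-only, managed service, decide by Friday",
))
```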
Falsifiability and Measurability (FM)
Decisions stick when they survive attempts to disprove them. Good questions drive rapid decisions. It’s always easier to give a boolean answer when enough context is provided, and measurement is one of the best ways to provide that context. With measures, you can turn opinion into actionable feedback.
The core idea of the FM technique is to phrase questions so answers can be tested. If it can’t be wrong, it can’t be right.
Pattern | Description | Example |
---|---|---|
Hypothesis form | State belief, expected outcome, magnitude, audience, and timeframe. | “We believe shortening the signup form from 7 to 4 fields will raise iOS activation by 3–5% for new users within 14 days.” |
Acceptance criteria | Define clear pass/fail metrics. | “Ship the model to production if precision ≥ 0.92 with recall ≥ 0.55 on June data.” |
Disconfirmers first | Identify what would prove the belief false. | “If activation does not increase by ≥1% after 7 days with 95% CI, abandon the change.” |
Quick rewrites:
❌ Before (Vague) | ✅ After (FM Applied) |
---|---|
“Will the new pricing work?” | “In a 50/50 geo split for 21 days, does new pricing increase gross margin per user by ≥4% with no more than a 1pp drop in conversion?” |
“Is the model good enough?” | “On the August dataset, does the model achieve F1 ≥ 0.82 and AUC ≥ 0.9, with inference latency ≤ 120ms on 95% of requests?” |
“Should we improve onboarding?” | “For new users signing up in September, does reducing onboarding steps from 5 to 3 increase 7-day retention by ≥6% without lowering NPS?” |
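Acceptance criteria phrased this way translate directly into code. Here is a minimal go/no-go sketch, using the precision/recall thresholds from the acceptance-criteria example above (the function name and sample metrics are made up for illustration):

```python
def ship_or_retrain(precision: float, recall: float) -> str:
    """Go/no-go from the acceptance criteria above:
    ship if precision >= 0.92 with recall >= 0.55."""
    return "ship" if precision >= 0.92 and recall >= 0.55 else "retrain"

# Made-up June metrics, purely for illustration.
print(ship_or_retrain(precision=0.94, recall=0.57))  # -> ship
print(ship_or_retrain(precision=0.95, recall=0.50))  # -> retrain
```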
Patterns
Reusable question patterns help you think more clearly, spot blind spots, and get better answers, faster.
Here are eight patterns you can borrow and use every day.
First-principles (“What are the primitives?”): Strip away jargon and complexity until you get to the most basic truths. From there, rebuild your understanding. This works because reality is easier to reason about than opinions.
Inversion (“How could this fail?”): Instead of only asking how to succeed, flip the question and look for ways it could go wrong. Anticipating failure is often the fastest way to avoid it.
Disconfirming evidence (“What would prove me wrong?”): Ask what would make your current belief false. This keeps you from cherry-picking facts that only confirm your view.
Assumption audit (“What am I taking for granted?”): List the things you believe are true without checking. Questioning assumptions often reveals the weakest part of your thinking.
Constraint lens (“What if we had half the time/budget?”): Imagine having fewer resources. Constraints force creativity, sharpen priorities, and surface shortcuts you might have missed.
Comparative calibration (“Compared to what?”): Numbers and claims mean little without context. Always anchor them to a baseline, a competitor, or a past result.
Decomposition (“Can we split this into 3 parts?”): Break a big problem into smaller, more manageable chunks. Solving each piece separately is often faster and less overwhelming.
Time travel (pre-mortem/post-mortem): Jump forward in time. In a pre-mortem, imagine the project has failed and ask why. In a post-mortem, imagine it has succeeded and trace back the steps that led there.
Anti-Patterns
Some questions don’t just fail to help but actually backfire. They actively distort the truth or shut down useful discussion.
Here are four common traps to avoid:
Loaded and leading questions: These questions sneak in assumptions or push the respondent toward a certain answer. For example, “Why is our onboarding so bad?” assumes it is bad. Instead, ask neutrally: “How does our onboarding compare to expectations?”
Double-barreled questions: Two questions in one confuse people and muddy the answer. “How do we improve onboarding and reduce churn?” is really two separate discussions. Split them and tackle each on its own.
Why-blame vs. how-fix framing: Asking “Who messed this up?” shifts focus to defending reputations instead of solving problems. “How can we prevent this next time?” keeps the discussion forward-looking and solution-oriented.
Scope creep: Asking with no clear boundaries, like “What’s the best database?”, leads to endless “it depends.” Define the limits (data, time, users, tools) before you ask.
How to Get Better
For me, one of the best ways to learn the art of asking great questions is by listening to the most popular podcasters out there. Think about it for a second: their entire job revolves around asking the right questions to the right people.
What I usually do is keep track of the best questions they ask (I have a Notion template ready for this!). I note down the exact wording they use, the follow-up questions they ask, and even the flow of their thought process when “connecting the dots.” Over time, this has helped me sharpen my own questioning skills (a lot).
Here are some of my favorite podcasters who follow an interview-first, long-form format:
Footnotes
[^1]: Turns out my childhood instincts were correct: reading was important!
[^2]: We tend to tolerate lower-quality answers from humans than from AI. In many cases, it’s less about the exact answer and more about the mental mechanism you used to get there.
[^3]: I read “Who” by Geoff Smart a few moons ago. It’s based on 1,300 hours of CEO interviews about hiring. NYT bestseller. Great questions and mental models. Recommended!
[^4]: I know, whether something “makes sense” is wildly subjective… but let’s at least agree it should make sense to the interviewee.