Claude Code Skill · v1.0

Adversarial Thinking

A mirror, not a consultant.
Makes thinking harder, not easier.

Why This Exists

AI tools that think for you make your thinking worse. This one refuses to.

The Problem

Research shows a strong negative correlation (r = -0.68) between AI usage and critical thinking (Gerlich, 2025). 83% of LLM users can't cite from texts they just produced. Passive delegation to AI reduces cognitive engagement.

The Paradox

Using an AI to improve your thinking is like using a calculator to improve your arithmetic. Unless the tool forces you to do the cognitive work, it replaces the very thing it claims to augment.

The Design Choice

This skill produces questions, never answers. It creates desirable difficulties (Bjork) — conditions that feel inefficient but produce deeper understanding. If it feels uncomfortable, it's working.

Phase 0: Your Thinking First

The skill refuses to proceed until you've written your own reasoning

Before the AI does anything, you must provide:

Your thesis
The tensions you see
A provisional conclusion
Your open questions

Minimum: 100–150 words. This is the only thing that separates cognitive augmentation from cognitive replacement.

How It Works

Five phases, from your reasoning to your answers

Your Input

Phase 0: Your Thinking
Write your thesis, tensions, conclusion, and questions. The skill won't start without this.

Phase 1: Classify
Cloud (judgment) or Clock (verifiable)? Clock problems get redirected to adversarial-verify.

The Challenge

Phase 2: Reasoning Mode
Classify each element: deductive (from principles), inductive (from examples), or abductive (best explanation).

Phase 3: Cognitive Forcing
Apply 3+ techniques from the arsenal of 9. Forced variety — no repeats.

The Mirror

Phase 4: Uncomfortable Questions
3 questions, 1 contradiction, 1 load-bearing assumption, 1 likely bias. No answers. Just the mirror.

Phase 5: Reflection Loop
After you respond, the skill mirrors what changed since Phase 0. Not judgment — awareness.

9 Cognitive Forcing Techniques

Each run uses at least 3, never repeating

1. Noise vs Signal

Separate vanity metrics from actionable metrics. "If this number changed, would you do anything differently?"

2. Assumption Excavation

Make implicit assumptions explicit. Your reasoning rests on foundations you haven't examined.

3. Pre-Mortem

"It's 12 months from now and this failed. What went wrong?" Forces prospective hindsight (Klein).

4. Scale Shift

What happens at 10x? At zero? At negative? Test the reasoning under extreme conditions.

5. Time Travel

What happens in 6 months when context has changed? What assumptions become stale?

6. Requirement Inversion

What if the user wants the exact opposite? How much of the reasoning survives?

7. Outcome vs Output

Are you measuring what you did or what changed? Most OKRs measure output disguised as outcome.

8. Steel Man

Construct the strongest possible version of the opposing argument. If you can't, you don't understand the problem.

9. Unfalsifiability Check

Can this claim be proven wrong? If not, it's not useful. "We're uniquely positioned" — unfalsifiable.

What It Challenges

Patterns that look like thinking but aren't

Buried Assumptions

Foundations of reasoning that haven't been examined

Vanity Metrics

Numbers that feel good but don't change decisions

Moats as Market Size

"The market is big" is not a competitive advantage

Survivorship Bias

Citing successes without counting failures

Unfalsifiable Claims

"We're uniquely positioned" cannot be proven wrong

Output as Outcome

Shipping features is not the same as creating value

Research Foundation

Built on cognitive science, not heuristics: Gerlich (2025) on AI usage and critical thinking, Bjork's desirable difficulties, and Klein's pre-mortem / prospective hindsight.

Installation

Option 1 — Clone & Copy

git clone https://github.com/fullo/claude-adversarial-thinking.git
cp -r claude-adversarial-thinking/skills/adversarial-thinking ~/.claude/skills/

Option 2 — Claude Marketplace

claude marketplace add fullo-plugins https://github.com/fullo/claude-plugins-marketplace

claude plugin install adversarial-thinking@fullo-plugins

Compatibility

Claude Code · Cursor · Windsurf · Cline

Works with any tool that supports the Agent Skills format.
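In the Agent Skills format, a skill is simply a directory containing a SKILL.md file whose YAML frontmatter describes when to invoke it. A minimal sketch of what the install above produces (the frontmatter field values here are illustrative, not this skill's actual metadata):

```
~/.claude/skills/
└── adversarial-thinking/
    └── SKILL.md    # YAML frontmatter (name, description) + the skill's instructions
```

Any tool that scans this directory layout and reads SKILL.md frontmatter can load the skill, which is why the same copied folder works across the editors listed above.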

Usage

Start with /adversarial-thinking or ask naturally

/adversarial-thinking
challenge my thinking on this strategy
poke holes in my business plan
stress test my idea
are my OKRs measuring the right things?
play devil's advocate on this decision

Companion Skill

For verifiable artifacts (code, data, schemas, docs, tests), use the companion skill

Adversarial Verify →