Hacker News Tips for Mastering AI-Assisted Programming

In a recent Ask HN thread about refactoring legacy code, developers shared dozens of strategies for using AI tools like Claude Code more effectively. The tips focus on concrete workflows that improve code quality while reducing manual review time.

Use CLAUDE.md to Record Repeated Mistakes

When AI makes the same error multiple times, add it to your CLAUDE.md file. The file loads automatically at session start and helps maintain consistency across conversations.

Boris from the Claude Code team suggests keeping this file under 1,000 tokens. Remove outdated rules with each model release since newer models need less guidance. Link to other markdown files for complex instructions rather than creating one massive document.
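
For illustration, a lean CLAUDE.md might look like the sketch below; every rule and path here is invented. Claude Code can also import other files with an @path reference, which keeps the main file short.

```markdown
# Project notes for Claude

- Use `pnpm`, never `npm` or `yarn`.
- All database access goes through `src/db/repository.ts`; never query tables directly.
- Run `pnpm test` after every change and fix failures before finishing.
- Do not edit generated files under `src/gen/`.

Detailed migration rules: @docs/migrations.md
```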

Clear your conversation context frequently. Models lose track of instructions after several exchanges, typically around 4-5 messages. Start fresh sessions for new tasks rather than letting conversations grow too long.

Plan Before Coding

Use Plan mode (Shift-Tab twice) to outline implementation before generating code. Iterate on the plan until it matches your expectations, then execute.

Multiple developers report 2-3x better results when planning first. Have Claude write plans to disk so you can edit them manually. This prevents the model from forgetting its approach mid-implementation.

For complex features, create hierarchical planning documents (a sample layout follows this list):

  • High-level specification from existing code
  • Architecture document with implementation details
  • Task breakdown by feature or layer
  • Granular todo list with tight scopes
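
On disk, that hierarchy might look something like this (file names and layout are invented for illustration):

```
docs/plan/
  00-spec.md          # high-level specification distilled from existing code
  01-architecture.md  # module boundaries, data flow, implementation details
  02-tasks.md         # work broken down by feature or layer
  03-todo.md          # granular checklist with tightly scoped items
```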

Give AI Ways to Check Its Work

Set up verification loops. Tell Claude to run tests, check browser output with Puppeteer, or validate against style guides after making changes.

Tests make especially good guardrails. Write them first, or have Claude generate a comprehensive test suite before implementation. BDD frameworks like Cucumber help prevent regressions during agent-driven development.
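
As a minimal sketch of the test-first approach, the suite below is written before any implementation exists; the slugify function and module path are hypothetical. The agent's job is to make `pytest` pass without touching the tests.

```python
# test_slugify.py -- written before slugify() is implemented.
# Ask the agent to build myproject/text.py until `pytest` passes.
from myproject.text import slugify


def test_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"


def test_strips_punctuation():
    assert slugify("Rock & Roll!") == "rock-roll"


def test_collapses_repeated_separators():
    assert slugify("a  --  b") == "a-b"
```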

Create custom validation scripts that enforce your architectural rules (a sketch follows this list):

  • Ensure router routes reference views, not components
  • Check that custom components replace plain HTML
  • Verify branded colors instead of generic ones
  • Confirm proper naming conventions
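
For example, the first rule above might be enforced with a script like this; the router location and import convention are assumptions about a hypothetical Vue-style project.

```python
#!/usr/bin/env python3
"""Check that the router only imports from views/, never components/."""
import re
import sys
from pathlib import Path

ROUTER_FILE = Path("src/router/index.ts")  # assumed location
IMPORT_RE = re.compile(r"""import\s+.+?\s+from\s+['"]([^'"]+)['"]""")

violations = [
    path
    for path in IMPORT_RE.findall(ROUTER_FILE.read_text())
    if "/components/" in path  # routes must reference views, not components
]

if violations:
    print("Router imports components directly:")
    for path in violations:
        print(f"  {path}")
    sys.exit(1)  # a non-zero exit gives the agent a concrete failure to fix
print("Router check passed.")
```

Have Claude run the script after each change; the failing output becomes part of its feedback loop.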

Use Opus 4.5 for Complex Work

The latest model shows significant quality improvements over previous versions. It follows instructions more consistently and maintains context better over longer conversations.

Opus 4.5 costs more per token but often requires fewer attempts to reach acceptable results. The Claude Max subscription provides substantial usage at a fixed monthly rate.

Break Tasks Into Small Chunks

Don’t ask AI to build entire features at once. Work on single functions, small refactors, or individual component updates.

Smaller tasks produce cleaner code that’s easier to review. You maintain better oversight when changes fit within manageable diffs. Each small success builds toward larger goals without accumulating technical debt.

Provide Examples of Good Code

Show AI what “idiomatic” means in your codebase. Paste examples of well-structured components and say “match this style.”

Multiple developers noted that generic best practices don’t capture project-specific conventions. Your examples teach patterns better than lengthy descriptions. Include both good and bad examples when possible.

Voice Transcription Speeds Prompting

Many developers report that 500+ word prompts work better than terse instructions. Speaking your requirements feels faster than typing and produces more complete specifications.

Tools like Wispr Flow, Superwhisper, and VoiceInk transcribe speech to text. Map transcription to a hotkey so you can speak directly into any text field.

Rambling explanations work fine; models extract meaning from casual speech. You can describe problems while walking or thinking rather than carefully composing written prompts.

Treat Sessions Like Onboarding New Developers

Each fresh conversation starts with zero project knowledge. You wouldn’t expect new team members to instantly understand your conventions.

Explain context clearly. Reference specific files and symbols. Describe relationships between components. The more explicit your instructions, the better the output.

This analogy helps calibrate expectations. Junior developers need guidance and correction. So do AI tools. Plan for iteration and course correction rather than expecting one-shot perfection.

Reset Context Aggressively

Don’t fight context drift. When conversations go off track, start new sessions rather than trying to steer models back.

Most developers recommend resetting after 3-5 exchanges for focused tasks. Export working code and plans to files, then begin fresh conversations that read those files.

Compact or reset well before you hit token limits. Model performance degrades as context grows, even with plenty of window remaining. Fresh sessions outperform long conversations.

Review Every Line of AI Output

Never merge AI code without careful inspection. Models make subtle mistakes that tests miss.

Look for:

  • Hardcoded secrets or credentials
  • Removed error handling
  • Changed logic that “fixes” tests instead of bugs
  • Unnecessary abstractions
  • Silently dropped requirements

Diff tools reveal problems better than watching code stream past. Always review complete changes, not just individual edits.

Use Voice for Discussion, Text for Code

Chat interfaces work well for architectural discussions and planning. They let you explore ideas quickly without committing to implementation.

Once you understand the approach, feed refined specifications to coding agents. This keeps exploratory thinking separate from implementation and produces cleaner results.

Document Your Refactoring Rules

When migrating legacy code, write explicit transformation rules:

  • Which patterns to avoid
  • How to structure new components
  • What libraries to use or ignore
  • Style requirements

Feed one well-refactored example back to Claude along with these rules. Then apply the same transformation to remaining code. Each iteration improves the rule set.
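
A hypothetical rules file for such a migration might look like this (the stack and paths are invented):

```markdown
# Refactoring rules: legacy jQuery widgets -> React components

Avoid:
- Direct DOM manipulation (`document.querySelector`, `$()`).
- New dependencies; use only what is already in package.json.

Structure:
- One component per file under `src/components/`.
- Fetch data through the shared `useApi` hook, never raw `fetch`.

Reference example: `src/components/UserCard.tsx` (already refactored).
```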

Next Steps

Start with one technique that addresses your biggest frustration. Most developers report that planning mode, CLAUDE.md files, or aggressive context resets solve their most common problems.

Experiment with voice transcription if typing detailed prompts feels tedious. Set up automated checks if reviewing AI output takes too long. Build your own workflow by combining strategies that match how you work.