What AcePilot does
One command triggers a full autonomous cycle: scan, match patterns, write, run, fix, report. No setup. No configuration. No questions.
Scan
Maps the full call graph. Finds every function, method, and edge case without test coverage. Cross-references source files against existing test files to build a precise gap list.
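The cross-referencing step can be sketched as a simple set difference. The `foo.js` → `foo.test.js` naming convention here is an assumption for illustration; the real matching logic may differ.

```javascript
// Sketch: build a coverage gap list by cross-referencing source files
// against test files. Assumes a foo.js -> foo.test.js convention.
function findUntested(sourceFiles, testFiles) {
  const tested = new Set(
    testFiles.map((f) => f.replace(/\.test\.js$/, ".js"))
  );
  return sourceFiles.filter((f) => !tested.has(f));
}

const gaps = findUntested(
  ["src/auth.js", "src/rateLimit.js", "src/utils.js"],
  ["src/auth.test.js"]
);
console.log(gaps); // the files with no matching test file
```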
Pattern match
Reads your existing test files first. Uses the same framework, naming conventions, assertion style, and mock patterns. If you use describe/it with Jest and jest.fn() mocks, every generated test follows that exactly.
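The convention-detection idea can be shown with plain string checks. A real implementation would parse the AST; this minimal sketch (all names assumed) is just the shape of it:

```javascript
// Sketch: infer test conventions from an existing test file's source text.
// String checks stand in for real AST analysis.
function detectConventions(testSource) {
  return {
    blockStyle: /\bdescribe\(/.test(testSource) ? "describe/it" : "test()",
    mockStyle: /\bjest\.fn\(/.test(testSource)
      ? "jest.fn()"
      : /\bvi\.fn\(/.test(testSource)
      ? "vi.fn()"
      : "none",
  };
}

const conventions = detectConventions(`
  describe("auth", () => {
    it("rejects bad tokens", () => {
      const verify = jest.fn();
    });
  });
`);
console.log(conventions); // { blockStyle: 'describe/it', mockStyle: 'jest.fn()' }
```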
Confidence gate
Adding tests is a two-way door: fully reversible, so AcePilot executes immediately without interruption. No prompts, no confirmations. If a generated test breaks something, the test run catches it before commit.
Write
Generates test cases for each untested function. Covers: happy path, edge cases, error conditions, and boundary values. Null inputs, empty arrays, type mismatches, and off-by-one scenarios are included by default.
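The default edge-case catalogue might look like the sketch below. The exact inputs per type are an assumption for illustration, not AcePilot's actual table.

```javascript
// Sketch: candidate edge-case inputs per parameter type.
function edgeCases(type) {
  switch (type) {
    case "array":
      return [[], [null], [1]]; // empty, null element, single item
    case "number":
      return [0, -1, 1, Number.MAX_SAFE_INTEGER]; // boundary / off-by-one values
    case "string":
      return ["", " ", "a"]; // empty, whitespace, minimal
    default:
      return [null, undefined]; // null inputs and type mismatches
  }
}

console.log(edgeCases("array").length); // 3
```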
Run
Executes the full test suite. A failing (red) run triggers a diagnosis cycle: reads the failure, writes the fix, re-runs. Stops after 2 failed fix attempts on the same test and logs it as requiring human judgment. Never loops infinitely.
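The diagnose-fix-rerun loop with its two-attempt cap can be sketched as follows; `runTest` and `writeFix` are hypothetical stand-ins for the real steps.

```javascript
// Sketch of the diagnosis cycle: re-run until green or the attempt cap.
function fixUntilGreenOrFlag(runTest, writeFix, maxAttempts = 2) {
  let result = runTest();
  let attempts = 0;
  while (!result.passed && attempts < maxAttempts) {
    writeFix(result.failure); // read the failure, attempt a fix
    attempts += 1;
    result = runTest();
  }
  return result.passed
    ? { status: "green", attempts }
    : { status: "needs-human", attempts }; // logged, never loops forever
}

// Simulate a test that keeps failing: the loop stops after 2 attempts.
const outcome = fixUntilGreenOrFlag(
  () => ({ passed: false, failure: "AssertionError" }),
  () => {}
);
console.log(outcome); // { status: 'needs-human', attempts: 2 }
```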
Coverage report
Logs what is now covered, what was intentionally skipped (trivial getters, generated boilerplate), and any remaining gaps that require a human decision. No silent omissions.
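A report with explicit skip reasons might take a shape like this; the field and function names are assumptions for illustration.

```javascript
// Sketch: a coverage report entry where every omission carries a reason,
// so nothing is skipped silently.
const report = {
  covered: ["parseToken", "rateLimit", "hashPassword"],
  skipped: [
    { name: "getId", reason: "trivial getter" },
    { name: "routes.generated", reason: "generated boilerplate" },
  ],
  needsHuman: [
    { name: "migrateLegacyUsers", reason: "ambiguous intended behavior" },
  ],
};

console.log(`${report.covered.length} covered, ${report.skipped.length} skipped`);
// 3 covered, 2 skipped
```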
What you get
- Test cases written in your framework (Jest, pytest, Go test, RSpec, etc.)
- Matching naming conventions, assertion style, and mock patterns from existing tests
- Edge cases: null inputs, empty arrays, boundary values, error conditions
- Regression tests for any bug that was recently fixed
- CI-ready: tests pass before commit
- Summary: functions tested, coverage %, and anything skipped with a reason
Real example
On a 3,000-line Express API with 12% test coverage, /acepilot auto add tests wrote 47 test cases in one session. Coverage went from 12% to 68%. Three tests revealed real bugs in the rate limiter and token validation logic that were already in production.
The rate limiter was resetting its window on every request instead of per-client. The token validator was accepting expired JWTs if the nbf claim was missing. Both were in code that had shipped months earlier with no coverage. The tests found them. AcePilot flagged them as requiring human review before fixing: these were logic questions, not mechanical fixes.
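The rate-limiter bug is easy to reconstruct in miniature. This is a hypothetical sketch, not the actual code from that API: the buggy version kept one shared window and reset it on every request, while the fix below tracks a window per client.

```javascript
// Hypothetical reconstruction: a per-client sliding-window rate limiter.
// All names and numbers are illustrative.
function makeLimiter({ limit, windowMs, now = Date.now }) {
  const windows = new Map(); // clientId -> { start, count }
  return function allow(clientId) {
    const t = now();
    let w = windows.get(clientId);
    if (!w || t - w.start >= windowMs) {
      w = { start: t, count: 0 }; // new window for THIS client only
      windows.set(clientId, w);
    }
    w.count += 1;
    return w.count <= limit;
  };
}

// A per-client test like the one that exposed the bug: client "a" must be
// throttled even while client "b" keeps making requests.
let t = 0;
const allow = makeLimiter({ limit: 2, windowMs: 1000, now: () => t });
allow("a");
allow("a");
const third = allow("a"); // over the limit for "a"
const other = allow("b"); // separate client, still allowed
console.log({ third, other }); // { third: false, other: true }
```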
When to use this
- You inherited a codebase with no tests and need coverage fast
- Before a refactor — you need a safety net to catch regressions
- After shipping fast — now fill in the test coverage you skipped
- CI pipeline is failing because coverage dropped below a threshold
Try it yourself
$ cd acepilot && ./acepilot-14.0/install.sh
# In your project directory
$ claude
> /acepilot auto add tests
AcePilot detects your test framework automatically. If you have Jest config, pytest.ini, or a go.mod, it reads those first. If you have no existing tests at all, it asks you one question: which framework? Then it runs.
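Config-file-based detection can be sketched as a marker lookup. The marker-to-framework mapping here is an assumption for illustration.

```javascript
// Sketch: detect the test framework from config files in the repo root.
function detectFramework(filesInRepo) {
  const markers = [
    ["jest.config.js", "Jest"],
    ["pytest.ini", "pytest"],
    ["go.mod", "Go test"],
    ["Gemfile", "RSpec"],
  ];
  for (const [file, framework] of markers) {
    if (filesInRepo.includes(file)) return framework;
  }
  return null; // no existing config: ask the one framework question
}

console.log(detectFramework(["package.json", "jest.config.js"])); // Jest
```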
Add tests to your codebase now
Free tier. No credit card. Install in 30 seconds.
Install AcePilot free