
Show HN: I wrote an "AI Lint" doctrine file for my agents to instill senior engineering judgment

Hacker News

A developer introduces 'AI Lint,' a doctrine file designed to guide AI agents in producing code that aligns with architectural principles and long-term system shape, going beyond mere syntactic correctness to address the 'does it belong?' question.


AI Lint — Teach your AI agents what belongs

Teach your AI the difference between

"works" and "belongs."

AI already writes working code. The pain starts when you ship it:
abstractions feel wrong, complexity gets hidden, and reviewers end up rewriting half of it.

AI Lint externalizes senior engineering judgment—so agents build code that fits the language, the framework, and the long-term shape of your system.

Not a linter. Not a style guide. If a machine can enforce it mechanically, it doesn't belong here.

AI is powerful, but sloppy in taste.

The problem isn't syntax.

AI models produce code that compiles. Then you review it and feel the familiar dread:
it's not wrong enough to reject quickly—just wrong enough to quietly poison the codebase.

What you see

"Works" code that fights the language, invents needless abstractions, hides state,
and turns maintainability into a slow leak.

What's actually happening

The AI isn't failing at syntax. It's failing at judgment:
what patterns belong, what tradeoffs are acceptable, what risks must be surfaced.

Rule of thumb: If a rule could be handled by ESLint, ShellCheck, a type checker,
or static analysis, it does not belong in AI Lint. AI Lint is for the hard part: "does this belong?"

How it works

Drop in the doctrine. Wire your agent once. From then on, the agent consults AI Lint
for architectural and taste decisions—and pauses when a human should choose.

AI Lint ships as additive packs. Overlay the Apps and Systems packs as needed.

Copy the included prompts into AGENTS.md / CLAUDE.md / Copilot instructions.

When making design decisions, the agent treats AI Lint as the authority.

If a choice violates doctrine, the agent surfaces it and asks for an override.

"I can implement that pattern, but it violates AI Lint Doctrine #4 (No Invisible State).

It would make debugging UserSession nearly impossible later. Shall we use a dependency injection pattern instead?"
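As a hypothetical sketch of the wiring step (the actual prompt text ships with the packs, and the `ai-lint/` path is an assumption), the instruction block you paste into AGENTS.md might look something like:

```markdown
<!-- Hypothetical wiring snippet; the real prompts are included with the packs -->
## AI Lint

- Before making architectural or design decisions, consult the doctrine
  files under `ai-lint/`.
- Treat the doctrine as authoritative for pattern choice, abstraction
  boundaries, and hidden state.
- If a requested change would violate a doctrine rule, do not silently
  comply: name the rule, explain the long-term cost, and ask for an
  explicit override before proceeding.
```

The key design point is the last rule: the agent is told to surface the conflict and wait, rather than quietly pick a side.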

What’s in each pack

Each pack is a curated body of doctrine, rejects, and exhibits.
This is not reference material or best-practice lists —
it’s encoded judgment about what belongs.

Apps Pack

Doctrine for application codebases where correctness isn’t the problem —
long-term shape is.

Languages

JavaScript · Node.js · Python · Java

Frameworks

Django · Spring

Focuses on hidden state, abstraction boundaries, dependency direction,
lifecycle clarity, and “works but fights the framework” patterns.
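To make "hidden state" concrete, here is a hypothetical illustration (not taken from the packs) of the kind of pattern this doctrine targets, alongside the explicit alternative:

```python
# Hypothetical example: a module-level cache that a rule like
# "No Invisible State" would flag, versus an explicit alternative.

_cache = {}  # invisible to callers: they can't see, scope, or reset it

def get_user_hidden(user_id):
    # "Works", but couples every caller to shared mutable module state.
    if user_id not in _cache:
        _cache[user_id] = {"id": user_id}  # stand-in for a real fetch
    return _cache[user_id]

def get_user(user_id, cache):
    # "Belongs": the cache is a dependency the caller owns and can inspect.
    if user_id not in cache:
        cache[user_id] = {"id": user_id}
    return cache[user_id]

cache = {}
user = get_user(42, cache)
print(user["id"])   # 42
print(42 in cache)  # True: the state is visible and testable
```

Both versions compile and pass a happy-path test; only the second leaves the system in a shape you can debug later.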

Systems Pack

Doctrine for low-level and performance-sensitive code where mistakes
stay latent and surface later.

Languages

C · C++ · Go · Rust · Assembly

Emphasizes causality, memory ownership, concurrency discipline,
ABI hygiene, and “correct but haunted” constructs.

Bundle (Apps + Systems)

Everything in Apps and Systems.
No extra doctrine — just the complete surface area.

Apps Pack + Systems Pack

Buy this if you work across layers, review agent output broadly,
or don’t want to think about coverage.

What you’re actually buying:
doctrine (what belongs), rejects (what looks fine but isn’t),
and exhibits (why the rule exists) — written so an agent can reason,
not just comply.

Packs

The free edition explains the frame and wiring. Paid packs provide deep doctrine,
rejects, and exhibits for real languages and frameworks.

Free Edition

Non-commercial; no redistribution; no model training.

Personal License

Personal (Systems) — $49
Personal (Bundle) — $79

Company License

Company (Systems) — $299
Company (Bundle) — $499

Enterprise

If your org chart is complicated, this is the clean path.

Roadmap

We're building doctrine for the domains where AI judgment failures hurt the most.
New packs ship as overlay ZIPs. Buy only what you need. Bundles available.

SQL, PostgreSQL, indexes, N+1, migrations, transaction isolation

Kubernetes, Docker, Helm, resource limits, health checks, secrets

REST, versioning, pagination, error formats, idempotency keys

JWT, sessions, OAuth, RBAC, token rotation, confused deputy

Sagas, idempotency, exactly-once, circuit breakers, retry policies

Logs, metrics, traces, cardinality, structured events, correlation

Swift, Kotlin, React Native, lifecycle, memory, offline-first

React, state management, component boundaries, accessibility

Want a specific pack prioritized? Let us know.

Built from experience, not theory

AI Lint was built after years of using AI to ship paid products—inside legacy codebases
and production systems. It's the doctrine I wanted my agents to follow—and couldn't find.

What AI Lint optimizes for

Clarity over cleverness. Visible complexity over hidden magic. Causality you can debug.
Explicit risk when you decide to break a rule.

What it avoids

Style policing. Mechanical "best practices." Boilerplate. Anything that a linter,
formatter, or static analyzer can enforce.

FAQ

Is AI Lint a linter?

No. It's doctrine for judgment—semantics, architecture, and taste. If a machine can enforce it mechanically, it doesn't belong.

How do I install it?

Unzip ai-lint-core into your repo. Then unzip packs (Apps, Systems) over it. Files merge by directory.
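As a hypothetical sketch of what "files merge by directory" means (the actual pack layout and file names may differ), overlaying the packs could produce something like:

```text
repo/
└── ai-lint/                  # from ai-lint-core
    ├── doctrine/             # core frame and wiring prompts
    ├── apps/                 # added by the Apps Pack overlay
    │   ├── python/
    │   └── django/
    └── systems/              # added by the Systems Pack overlay
        ├── c/
        └── rust/
```

Each overlay only adds directories, so unzipping a pack never overwrites the core files.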

What if the doctrine conflicts with my codebase?

AI Lint expects conflicts. The override protocol makes tradeoffs explicit. A good agent surfaces the conflict and asks for a decision.

Can I modify it for my team?

Yes (internal use). The whole point is to encode your standards. The paid license permits internal modification and use in private repos.

Does the bundle include future packs?

No. New packs are sold separately as overlay ZIPs. The bundle is Apps + Systems.

Can I contribute a pack?

Yes. If you want to encode hard-won knowledge into doctrine packs (languages, frameworks, security), email us. This becomes a virtuous cycle.

Write doctrine. Get paid.

We're looking for engineers who've shipped in production and have opinions about what belongs.
If you've debugged the same AI mistakes repeatedly—in React, Kubernetes, PostgreSQL, or anywhere else—that's doctrine waiting to be written.

What we're looking for

Real scars. Patterns you've seen break in production. Framework fights you've lost.
The stuff that's obvious to you but invisible to AI.

What you get

Revenue share on packs you author. Your name on the doctrine.
A way to encode your judgment into something that scales.