
A Measured Approach to AI in Software Development: Monarch's Philosophy

Hacker News

This article outlines Monarch Money's internal philosophy on adopting AI in software development, advocating for a 'step behind the bleeding edge' approach to ensure maturity and battle-testing of new technologies while still understanding the frontier.


A Step Behind the Bleeding Edge: Monarch’s Philosophy on AI in Dev – Somehow Manage



Hi! I'm Ozzie. I'm an engineering manager / product builder. I am co-founder at Monarch Money and lead author of the Holloway Guide to Technical Recruiting and Hiring.

This is a memo I published internally to my team at Monarch. I’m sharing it more publicly in case it helps other software engineering teams that are managing the crazy times we’re experiencing.

There’s no question: AI is changing how we work as Software Engineers. There’s a lot of hype, excitement, anxiety, and uncertainty around these changes.

As an Engineering org, we’ve had a strong set of Engineering Values (How We Work Together) that have served us really well as we’ve grown. I wanted to drop a few thoughts on our philosophy on AI in Engineering, grounded in these values. For more details, you can see our AI in Engineering@Monarch [internal, redacted link] doc.

Here is my ask of our team as we explore and implement AI in Engineering:

Understand and explore the bleeding edge, but adopt a dampened one

We definitely believe in and want to leverage AI in our work to increase productivity and quality. That said, if we try to always be on the bleeding edge, we will suffer from:

So as an org, we may feel one step behind the bleeding edge, only adopting things once they are a bit more mature and battle-tested ("a step behind the blood").

That said, to know we are (only) a step behind, we must still understand the frontier. To do this, we will:

We need to understand the bleeding edge, but work at a step behind it.

Continue to own your work

Whether you use AI or not, if work has your name on it, you are accountable for it.

That means that you are responsible for the quality of the written documents or code that you put out. You should review everything before you ask others to take a look.

Likewise, work we put out collectively to our users has our company's name on it, and we are collectively accountable for it (its functionality, its quality, its security, etc.). AI has no accountability, no pride in its craft, no shame if it gets things wrong. The human (that's you) provides the accountability.

It’s much easier to generate code or documents, but if you generate a lot and don’t control for quality, you are shifting the burden onto your peers (who will review your work), or worse, our users (if it doesn’t get properly reviewed and tested).

As a side note, even teams at frontier AI labs don't blindly trust their AI. When we've asked friends there about how they use their own tech, they have said there is always human review. Apparently, claims otherwise are probably just one-offs (i.e., prototypes or non-critical systems) or just plain hype.

Do the deep thinking yourself (don’t get l-ai-zy)

Andy Grove argued that often, writing a deep report is more important than reading it: "Their (i.e., the document's) value stems from the discipline and the thinking the writer is forced to impose upon himself as [she] identifies and deals with trouble spots".

If you ask AI to write a document for you, you might get 80% of the deep quality you’d get if you wrote it yourself for 5% of the effort. But, now you’ve also only done 5% of the thinking. Delegate things that require time and toil to AI, but keep things that require thought, judgment, and rigor for yourself.

You can still use AI as a thought-partner, idea generator, editor, or synthesizer. You can (should) also use AI for toil (things that are time-consuming, repetitive, and menial). But you need to do the deep thinking yourself.

Continue to leave room for inspiration

When we wrote our Engineering values, and included “leave room for inspiration”, one thing we were guarding against was working so hard with so little slack that we don’t have room for inspiration, creativity and brilliance. AI changes that risk profile. With AI and increased productivity, you might have more time and slack, but, if you’re delegating too much to AI, you may not have the deep thought, context, and connectedness to the code and product that is required for inspiration.

People often worry about AI slop, but if you're owning your work and reviewing it (as requested above), you will catch the bad ideas that look like bad ideas. You'll need to be more careful about catching bad ideas that look like great ideas (since generative AI is notorious for producing those), but again, if you're owning your work, you should catch those, too.

I’m most worried about missing good ideas that sound like bad ideas (at first)—in other words, sins of omission. Those will never occur unless you own your work, do the deep thought—and create space for inspiration.

Carefully design validation/verification loops

We strongly believe in systems thinking, and one of the most important parts of systems thinking is feedback loops. When using AI, think about feedback and validation loops:

In other words, design that system (you + AI) while figuring out your role in it, since you will ultimately own the output.
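As a minimal sketch of what such a loop can look like in practice (the function and check names here are hypothetical illustrations, not part of any Monarch tooling): AI output only advances after passing automated checks you wrote yourself, and human review still follows.

```python
from typing import Callable, List

def verify_candidate(candidate: Callable, checks: List[Callable]) -> bool:
    """Run every automated check against an AI-generated function.

    Returns True only if all checks pass. This is the automated half
    of the feedback loop -- a human review step still comes after it.
    """
    return all(check(candidate) for check in checks)

# Example: an AI-drafted helper, plus checks we wrote ourselves.
def ai_drafted_slug(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

checks = [
    lambda f: f("Hello World") == "hello-world",  # basic behavior
    lambda f: f("  padded  ") == "padded",        # edge case: whitespace
]

if verify_candidate(ai_drafted_slug, checks):
    print("automated checks passed -- ready for human review")
else:
    print("rejected -- back to the AI (or fix it yourself)")
```

The point of the sketch is the shape of the system, not the specific checks: the human defines what "correct" means, the machine enforces it cheaply on every iteration, and the human still owns the final sign-off.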

Use AI more liberally in safe settings

We’ve found that there are a couple of areas where using AI more liberally (that is, more autonomous agents, less human-in-the-loop, etc.) makes a lot of sense, and we recommend you use these in your workflow:

Each of these may require more thought, polish, or verification later, but in the early stages, they can be great areas to “build-then-think” (rather than “think-then-build”).

Frequently Asked Questions

Will AI replace my job?

If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.

There is a lot that goes into building great software that AI isn’t going to replace (at least, any time soon). How we work will change, and we should be able to build faster and with better quality.

Am I falling behind if I’m not using AI constantly?

We know it can be stressful to feel like you’re not keeping up, but on the other hand, if we don’t change how we work at all, we will eventually fall behind. This has always been the case in software development, but things are moving a lot faster now.

That said, constantly worrying about falling behind only creates anxiety. Our philosophy (as described above) is to collectively explore the bleeding edge, but work an inch or two behind it. We also will walk that path together, so that no one feels like they are being left behind. You are expected to contribute to exploration and sharing learnings, but you aren’t expected to figure out our full strategy on how we use AI on your own.

Is the code AI writes actually good?

You should be the judge. With the right context and the right prompting, we’ve found that AI can write good code (at minimum, consistent with the code base it’s operating in). But since you’ll also be reviewing the code, you can and should decide when it has written good code or not.

Am I losing skills by relying on AI?

It depends on how you use it. If you abdicate your responsibility as a developer to AI, yes, your skills may atrophy. But if you do the deep work, and review/validate AI’s work, your skills shouldn’t atrophy. In fact, they should improve, since you’ll constantly and instantly have access to a somewhat knowledgeable collaborator.
