
AI Lazyslop and Personal Responsibility

Hacker News · about 1 month ago

The article discusses the issue of "AI Lazyslop," where AI-generated code is submitted without proper review, highlighting the importance of personal responsibility in the age of AI-assisted development and the need for clear rules of engagement.


   Daniel Sada Caraveo – AI Lazyslop, and Personal Responsibility – Software, Notes & Culture

AI Lazyslop, and Personal Responsibility

Once upon a time, I got a PR from a coworker; I'll call him Mike.

Mike sent me a 1600-line pull request with no tests, entirely written by AI, and expected me to approve it immediately so as not to block his deployment schedule.

When I asked for tests, I'd get pushback: "why do I need tests? It works already." Then I'd get a ping from his manager asking why I was blocking the review.

After I requested changes, he'd get frustrated, move all his changes into an already-approved PR, and sneak-merge them in.

I don’t blame Mike, I blame the system that forced him to do this.

But this is the love letter I wish I could have written to him.

AI and personal responsibility.

Dear Mike,

I know you wrote your PR entirely using AI, but you didn't review it at all. I'm not opposed to you using AI, but what I want to know is:

Others in the industry

As we find ourselves in this new world, there is still some shame in using AI. But there shouldn't be! Ghostty is asking people to disclose the use of AI upfront, and I think changes like this are positive.

Linus Torvalds recently mentioned playing and vibe-coding with AI in a language he hadn't mastered. We are experiencing a cultural shift in how this tool integrates into our daily lives. Even if you personally don't use AI, there is a high probability that a coworker or collaborator will, and then you have to decide the rules of engagement.

Given that, I want to define AI Lazyslop.

AI Lazyslop: AI-generated output that was not read by its creator, which shifts the burden of review onto the reader.

The anti-AI Lazyslop manifesto.

I will own my code, including the outputs of the LLM that I decide to accept. I will disclose my use of AI in my code.

I attest that:

What happened to Mike?

Well, I'd like to say there was a happy ending, but Mike evolved into semi-lazy-slop mode, in which he relayed all of the reviewers' comments on his PR straight to the LLM. Is that better? Is that worse? I'm not sure, but I'm guessing this is happening in a lot of places.

AI Disclosure

I used Claude to "help me review my blog post for style and grammar." It caught the following:
