Sierra: So, I was reading this thing, and it talked about this senior developer, right? And this junior dev writes some code, and the senior just instantly spots it. Like, the error handling, it's just all wrong. They just know.
Felix: Ja, this happens. The pattern recognition.
Sierra: Exactly! But then, the same junior developer, they use one of those AI coding assistants to write some code. And the AI, it makes that exact same mistake. And the junior? They don't even know enough to catch it. Like, the AI just replicated the bad practice, but made it look really polished, you know?
Felix: So the AI makes it look good, but the underlying flaw is still there, and the junior trusts the AI without understanding the fundamentals. That is... not good.
Felix: So, on this topic, the article I read was really focusing on this idea that our team's most valuable asset is actually the unwritten knowledge in the heads of our senior developers.
Sierra: Oh, totally. Like, the tribal knowledge.
Felix: Exactly. And the argument is, the only way to get consistent quality from AI, especially for code, is to take all that knowledge and turn it into shared, version-controlled instructions.
Sierra: Okay, so not just like, 'Hey, don't forget to do X,' but actually writing it down for the AI?
Felix: Precisely. It gave an example: you have a senior engineer, right? When they ask the AI to generate a new service, they instinctively tell it, 'Okay, use our specific logging utility, apply our error-handling middleware, put it in lib/services/.' All these little things that are part of the team's standard.
Sierra: Right, like the unwritten rules of the road.
Felix: Ja. But then you have a junior engineer, same team, same AI. They just ask it to 'write a new service.' And what do they get? Generic, non-compliant code. Code that then needs to be rewritten or fixed, creating rework for everyone.
Sierra: Ugh, that's such a pain. So the AI is just doing what it thinks is 'good code' in general, not 'good code for our team.'
Felix: Exactly. The key insight here is to stop thinking about AI prompting as some kind of individual skill, like typing fast. We should start treating our team's standards and the prompts themselves as a piece of shared infrastructure. As important as the codebase itself.
Sierra: That makes sense. Like, the boilerplate for the human developer. But for the AI.
Felix: Yes, exactly. So the takeaway for listeners is quite practical: create a shared, version-controlled file in your project repository. Call it something like `ai_standards.md`. And in there, you put all the specific instructions, the prompt snippets, for generating and reviewing code according to your team's conventions.
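[Show notes: a minimal sketch of what such a file might contain. The filename `ai_standards.md`, the logging utility, the error-handling middleware, and the `lib/services/` path come from the episode; the exact rules and prompt wording below are hypothetical examples, not a real team's standards.]

```markdown
# ai_standards.md — shared, version-controlled AI prompt standards

## When generating a new service
- Place the file in `lib/services/`.
- Route all logging through our shared logging utility; no ad-hoc loggers.
- Wrap every handler with our error-handling middleware.

## Prompt snippet to paste before any "write a new service" request
> Follow our team conventions: services live in `lib/services/`,
> all logging goes through our shared logging utility, and every
> handler must use our error-handling middleware. Match the
> structure of the existing services in that directory.
```

[The point is that this file lives in the repo and gets reviewed like code, so every developer, junior or senior, feeds the AI the same standards.]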
Sierra: Oh, so like, a little cheat sheet for the AI. For our AI. That's kinda clever. I mean, I guess. I always thought of AI as more... creative, you know? Like, it would figure out the best way. But you're saying we have to spoon-feed it our own rules? That feels a little... counter-intuitive to what I thought the promise of AI was.
Felix: It is. And this reminds me of how a high-end restaurant chain works. They don't just hire a bunch of amazing chefs and hope for the best, right?
Sierra: No, they have, like, consistency.
Felix: Ja. They have a 'master' recipe book, a very precise process. That's their 'encoded standard.' So the dish you get in one city is identical to the one you get in another. It's not about individual chef genius each time; it's about scaling that expertise.
Sierra: Oh, that's a good analogy. So you're basically giving your AI the corporate recipe book.
Felix: Exactly. So no matter who is doing the cooking, the end result is... predictable.
Sierra: Okay, so that's a lot of practical stuff about how we're actually using AI right now. But let's shift a little bit, from practice to theory.
Sierra: That restaurant thing, that makes a lot of sense. But it also makes me think about what happens after the code is made. Because I was reading this other article, and it was talking about how we're all so focused on how fast AI can write code, right? Like, 'Oh, it's so quick!'
Felix: Ja, the speed is impressive.
Sierra: But we're ignoring this whole other problem that might be way more dangerous: code that no one on the team actually understands or even knows the purpose of.
Felix: Hmm. This is interesting.
Sierra: So the article gave this scenario. A team uses AI to generate a huge new feature. And it works, like, perfectly. Everyone's happy. Six months later, they find a bug in that feature. But the original developer who implemented it has left the company.
Felix: Oh, the classic scenario.
Sierra: Right? So a new developer comes in, and they look at this code. And they have zero idea how it works. Like, zero. And even worse, they don't know what the original intent was. Because it was all generated by a machine.
Felix: So the code is there, it's functional, but the human understanding, the context, is completely missing.
Sierra: Exactly. And the article argues the biggest risk of AI in software isn't just bad code, like technical debt, which we've always talked about. It's this erosion of human understanding, which they call 'cognitive debt,' and the loss of documented purpose, which they call 'intent debt.'
Felix: Cognitive debt, intent debt. So it's not just about what the code does, but what the code means to the humans interacting with it.
Sierra: Yeah. Cognitive debt is what people understand, or don't understand. And intent debt is about what's explicitly captured, so that humans and AI agents can work with the code safely. Like, why was this choice made? What problem was this solving?
Felix: So how do you... fix that? Or prevent it?
Sierra: The takeaway, they said, is for any significant piece of code an AI generates, you have to force yourself to write a comment or a small piece of documentation. And it's not just explaining how the code works, but why it was needed. And the high-level logic. You're documenting the intent, not just the implementation.
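[Show notes: a hedged sketch of what intent-first documentation might look like on an AI-generated helper. The function, the refund scenario, and the 30-day window are all invented for illustration; the point is the shape of the comment, which records the "why" rather than the "how".]

```python
from datetime import datetime, timedelta

# Intent (the "why", not just the "how"):
#   Refunds older than 30 days must go to manual review because our
#   payment provider's dispute window is 30 days; auto-approving past
#   that window risks unrecoverable chargebacks. The implementation
#   below is trivial, but this comment is what lets a future developer
#   change the threshold safely instead of guessing at the reason.
def needs_manual_review(purchase_date: datetime, now: datetime) -> bool:
    """Return True if a refund request is past the auto-approval window."""
    return now - purchase_date > timedelta(days=30)
```

[Without the comment, the code only says "compare against 30 days"; with it, the business intent survives even after the original developer leaves.]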
Felix: That's a good point. Because the AI can tell you what it wrote, but not why it wrote it in a specific way, or what the business reason for it was.
Sierra: Exactly! It's like asking a really good chef to make a dish, and they do it perfectly, but you don't know why they chose that specific spice blend or that cooking method, you know?
Felix: This reminds me of my friend who bought a house. And the previous owner, he was a real DIY person. He had done all the electrical work himself.
Sierra: Oh no.
Felix: Ja, exactly. The lights worked. Everything seemed fine. But nothing was labeled. Nothing followed code. And then, when my friend had an issue and called an electrician, no professional would touch it. They said, 'We have no idea what's going on in these walls.'
Sierra: Oh my gosh! So he had a functioning system that no one understood.
Felix: Precisely. He had a functioning system with massive 'cognitive and intent debt.' No one knew how or why it was wired that way. And it was a huge problem.
Felix: So these ideas, 'cognitive debt' and 'intent debt'... are they really new categories of 'debt'? Or are we just creating new buzzwords for old problems?
Sierra: Hmm. I mean, we've definitely had poorly documented, hard-to-understand code for, like, fifty years, right? That's not new.
Felix: Exactly. The solution has always been discipline. Code reviews, documentation, pair programming. All these things we already know how to do. Maybe we just need to do the old things better, not invent new terms.
Sierra: But it feels different with AI, though, doesn't it? Like, before, a human wrote the confusing code. So another human, theoretically, could unravel it, even if it was painful. But if a machine just spat out this whole thing, and no human was ever really 'in the loop' for the why... it's a different kind of unknown.
Felix: I think that's just an excuse. The problem is still the lack of documentation, the lack of human review. If we have discipline, we prevent it. Whether it is a junior developer or an AI that writes the code, the standard should be the same. You just apply existing principles.
Sierra: Yeah, but... the scale is different. An AI can generate so much more, so much faster. The sheer volume of code that could accumulate this 'debt' is just... mind-boggling compared to what a human team could produce. You can't just throw more code reviews at it.
Felix: But if the code review process includes checking for this 'intent debt' you call it, then it is the same problem, just a different type of check. We just adapt the old discipline.
Sierra: I guess. But it's not just about checking for it afterwards, it's that the code gets created in the first place without any human context. It's like, a fundamental shift in the origin of the problem, you know?
Felix: Still, it is a problem that human processes can solve. We are not helpless.
Sierra: I just... I don't know. It feels like we're not just adding a new layer, but changing the whole substrate.
Sierra: Okay, but this brings up a bigger question, right? If we keep going down this road, where we're just letting AI generate more and more of our critical software infrastructure, are we heading towards a future where there's just... code that no living human being actually understands? What happens when that 'magic' breaks? Like your friend's electrical system, but for, like, the whole internet?
Felix: That is a very grim thought, Sierra.
Sierra: I know, I know! But it's like, if we lose that human intent, that human understanding, then who fixes it? Who innovates on it? It just... breaks.
Felix: So you think we are creating a house of cards?
Sierra: A really, really complex, beautifully coded house of cards. With no instructions on how it was built. I'm Sierra.
Felix: And I'm Felix. This has been Manish Chiniwalar's Station.
