Ethos
Culture-shaping mechanisms for the AI era
With every new technology, culture needs time to catch up. As our AI adoption grows, we have to be responsible not only in adoption and innovation, but in culture as well.
AI made many things dramatically cheaper. Code, docs, summaries, comments, plans, emails — all of it has gotten easier and faster to create. It is now possible to appear engaged, thoughtful, strategic, and helpful in seconds — even if the real work hasn't really happened.
Here are some of my favourite quotes on the topic:
The quality of our work gets easily mistaken for the quantity of our output.
It was never easier to be mediocre, and there has never been a worse time to be mediocre.
For all the things that I know nothing about, AI looks like a godsend.
For all the things that I know something about, AI is almost useless.
If you know the sources of these, please let me know so I can credit them.
So what do we do about it?
How do we stay on top of it, and how do we fight it?
There is no single answer to this, nor will every answer we come up with keep working as the technology evolves. Still, we can isolate some overarching questions and principles that will always apply.
The Questions
How do we keep the strength of the signal when noise becomes cheap?
How many times have you encountered a document or an email that was clearly written by AI, expanded to a few pages looking more like a legal contract than a human-written letter?
How often do you encounter AI-generated reports, tickets, or PR summaries and comments that take minutes to scroll through?
Like me, you are probably 90% certain that the creator of these has not really read them, and is just pasting them in to conform to the norm and look engaged and professional. We all know AI was used to generate them, and yet we still read them, because that is how it was always done.
The truth is, you are often expected to read something that its creator didn't bother to read themselves. Soon we will all come to the same conclusion: if AI didn't help you save time or bring clarity to the issue, don't use it!
Before AI, we measured performance to a large extent by the number of tickets solved, documents written, and meetings attended. It was never perfect, and looking busy was a way to cover up for not doing the work.
With AI, looking busy has never been easier, and has never been more useless. AI has enabled us to be even more counterproductive.
How do we ensure quality standards remain high?
Your machine right now has more power than anything that existed in a government lab thirty years ago. Dozens of cores, absurd amounts of RAM, storage that would have looked like magic. And yet it probably takes longer to open a chat window than a 486 took to boot and let you start working.
Old software was far from perfect — it crashed, it had gaping security holes, and the interfaces were ugly. But it had one thing going for it: constraints. Memory had a budget. Startup time had a budget. CPU cycles had a budget. If your feature blew the budget, it didn't ship. You stayed late and made it smaller, faster, or simpler until it fit. There was no assumption that hardware would bail you out.
Every line of code had weight. A page fault was something you could feel. A cache miss was the difference between smooth and broken. A bad allocation in the wrong loop meant a stutter on screen or a crashed process. Developers learned to care about locality, allocation patterns, parsing things only once, and not copying data around just because the architecture diagram looked cleaner that way.
When hardware got faster, we should have kept that discipline and used the headroom to make software richer, more responsive, more delightful. Instead, we used it as permission to be wasteful.
And this is not because developers got dumber. Today's engineers deal with bigger systems, nastier threat models, more platforms, more compliance, more of everything. The problem is that our incentives got worse, our constraints got looser, and our standards quietly slid from excellent to just acceptable.
You click a simple utility and what actually launches is an auto-updater, a telemetry pipeline, a web rendering engine, a JavaScript runtime, a package ecosystem, a sandbox layer, a GPU compositor, and a framework that depends on six other frameworks — because somebody needed a fancier dropdown in 2021. Every layer made sense to someone in isolation. Together they form a performance sediment, and we stand around wondering why the app needs 800 megabytes to display three text fields and a button.
It gets worse when business incentives shift. Old software lived or died on fit and finish — if it felt laggy, people noticed. Today, software is measured by subscriptions, engagement, feature velocity, and cross-sell opportunities. Nobody gets a standing ovation for cutting startup time in half. Feature work gets promotions, performance work gets a polite nod in the retro. When nobody owns end-to-end responsiveness, the user just experiences it as "the app feels heavy" — a complaint that doesn't map to any backlog item, and therefore gets ignored for years.
Then comes AI with a can of gasoline. AI is genuinely useful — scaffolding, test generation, navigating unfamiliar APIs, getting unstuck. But it reflects the median of its training data, and median code is not lean, disciplined code. It is plausible code. Verbose code. Code that solves a stated problem in a recognizable way, but not necessarily the right way. A mediocre developer can only produce mediocre code at human speed. AI can produce it at industrial scale.
How do we grow as individuals — together?
The only way to shape culture is to discuss it with each other. We need to challenge each other, express our opinions and genuinely track the AI tools used, how they impact performance, how they impact quality, and how they impact the team culture.
We need to be honest with each other, and we need to be honest with ourselves about AI. The AI written emails and documents are rarely useful, and often harmful.
Enforcing this is not easy, and it will not be accepted by everyone. What is needed is a critical mass of people who are willing to call each other out on this. People who will not tolerate BS, and who will persistently call it out.
This is not about punishing people, this is about helping each other grow.
This is about being honest with each other, and being honest with ourselves about AI.
This young technology is so focused on growth and adoption, that its negative effects are often completely neglected.
For individuals, this means that your managers will not hesitate to burn you out if that means better performance metrics for them. You will be expected to deliver more, despite the dropping quality of your work and your own well-being. Used irresponsibly, AI significantly decreases the ability to think critically and independently, reduces brain activity, increases cognitive debt, and can lead to a significant decrease in the performance and quality of your work, all just to bring up the numbers.
For responsible managers, this means that you will be expected to use AI responsibly, and to help your team use AI responsibly.
Research backs this up. Studies from MIT, Wharton, and Harvard show that repeated AI offloading leads to what researchers call cognitive debt — a gradual erosion of independent reasoning. In one experiment, students who used AI tutoring performed better while the AI was available, but significantly worse once it was removed. In another, consultants using GPT-4 were 19 percentage points less likely to produce correct answers on tasks just outside AI's competence — and they couldn't tell the difference. The pattern is consistent: AI can make us feel more capable while quietly making us less so.
This is exactly why growing together matters. Individual blind spots compound in silence, but they get caught in teams that talk honestly. Share what you learn — the prompt that worked, the shortcut that backfired, the moment AI gave you a confident wrong answer. Growth in the AI era is not about keeping up individually. It is about building a shared immune system: one where we catch each other's mistakes, challenge each other's assumptions, and make sure nobody quietly drifts into dependency.
Some of the research is collected in Saine, a curated timeline of the social costs of AI. It is not there to scare anyone — it is there so we know what we are up against, and so we can grow through it together.
Design Philosophy
Make the right behavior easy and visible.
Make fake engagement slightly annoying.
Not punishable. Not monitored aggressively. Just mildly inconvenient.
Foster the right culture by rewarding the right behaviour.
Over time, these tiny nudges shape culture.
Mechanisms
Here are some of the behaviours we often encounter at work or privately, and how to address them:
PR drive-by approvals
The shortcut: I approve PRs without reading the diff and understanding the changes.
The norm: Engage in projects and codebases.
The nudge: Test your colleagues on their recently approved PRs. Engage with them, ask questions, and help them understand the changes.
Here is an example of a nudge, a PR quiz: three questions making sure the approver actually read the code.
Tidewave PR Quiz
This might be an extreme example, but it is also a good example of a nudge that is not too intrusive, but still effective.
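A nudge like this doesn't need heavy tooling. Here is a minimal sketch of the idea in Python: it picks changed files out of a unified diff and fills in a few question templates for the reviewer. The function name, question templates, and diff format handling are all illustrative assumptions, not the actual Tidewave implementation.

```python
import random
import re


def quiz_from_diff(diff_text, n_questions=3, seed=None):
    """Build simple review-quiz prompts from a unified diff.

    Hypothetical sketch: picks changed files from the diff headers and
    fills in generic question templates for a human reviewer to answer.
    """
    rng = random.Random(seed)
    # Unified diffs mark the new version of each file with "+++ b/<path>".
    files = re.findall(r"^\+\+\+ b/(.+)$", diff_text, flags=re.MULTILINE)
    if not files:
        return []
    templates = [
        "What does the change in {f} do, in one sentence?",
        "Why was {f} modified in this PR?",
        "What could break if the change in {f} is wrong?",
    ]
    return [t.format(f=rng.choice(files)) for t in templates[:n_questions]]
```

Wired into a merge hook or a bot comment, three questions like these are enough friction to make a drive-by approval slightly more work than an honest read.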
Vibe coding
The shortcut: I push AI code and hope CI catches it.
The norm: My slop code is our bug. Be respectful to your colleagues.
The nudge: Read and follow through your code. Check the tests and coverage.
Prompt again with these questions:
"In which case will this solution not work?"
"What is the worst case scenario for this solution?"
"What is the best case scenario for this solution?"
"What is the edge case for this solution?"
Hallucinated AI certainty
The shortcut: I present AI guesses as facts.
The norm: Evidence before opinion — link one artifact before strong claims.
The nudge: After it gives its final solution, tell your AI companion it is wrong. Share its sycophancy with the team, and challenge it before concluding it is right.
AI should be used as a tool to help you, not as a replacement for your own thinking.
Treat AI outputs as drafts, and not as final solutions.
Request the chatbot to cite its sources, and ask it to explain its reasoning.
Compare results for the same query across different AI tools to check for consistency.
Instruct the model not to make things up. Use phrases like, "Only use the provided information," or "If you do not know the answer, say 'I don't know,' do not make it up".
Admitting AI mistakes
The shortcut: Admitting AI introduced mistakes makes me look bad.
The nudge: At every retro, weekly, or bi-weekly team meeting, pose the same questions:
"Share an example of something you could have done better with AI."
"Share an example where AI failed you."
AI Status theater
The shortcut: It's never been easier than now with AI — I can increase my visibility in the company with little to no effort!
The norm: Don't measure complexity in the number of tickets or meetings done.
The nudge: Challenge and confront meetings and Slack updates that serve no purpose.
Speak up when you see a beautiful but useless graph or report.
Speak up when you see AI generated answers and status updates.
AI generated docs
The shortcut: I publish plausible but outdated or just plain wrong docs.
The norm: Document work so someone new has a starting point to get up to speed.
The nudge: Remember that it has never been easier to generate documentation that is just plain wrong.
Wrong documentation is worse than no documentation
Documentation is a task that should be thoroughly reviewed by a human, not by AI.
Regularly run an AI prompt to check for inconsistencies and outdated info.
Hidden AI work
The shortcut: I present AI output as if it were pure thinking.
The norm: Show your prompt or workflow when sharing AI-generated work.
The nudge: Share the chat history or the prompt when sharing AI-generated or AI-assisted work.
Transparency is crucial for trust and accountability.
A number of services let you share the prompt or workflow alongside the work itself.
Try https://www.prompthub.us or https://promptdrive.ai, or find or build your own service for sharing prompts and workflows.
AI Test theater
The shortcut: I add many AI generated shallow tests.
The norm: Where possible, when one unit changes, only one test should need updating.
The nudge: Prompt for overlaps in your tests and how best to restructure them.
Check the tests for consistency and accuracy.
Check the coverage of the tests.
Check that the tests cover the edge cases.
Check the runtime cost of the tests.
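The shallowest generated tests can even be caught mechanically. Here is a minimal sketch, assuming Python test files: it uses the standard `ast` module to flag assert statements whose condition is a bare constant, like `assert True`, which pass no matter what the code does. The function name is a hypothetical example.

```python
import ast


def find_trivial_asserts(source):
    """Return line numbers of assert statements whose condition is a bare
    constant (e.g. `assert True`), a common smell in generated test files."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        # ast.Constant covers True, numbers, and strings used as conditions.
        if isinstance(node, ast.Assert) and isinstance(node.test, ast.Constant):
            hits.append(node.lineno)
    return sorted(hits)
```

A linter rule like this won't judge whether a test is meaningful, but it cheaply removes the tests that are provably not.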
Skipping retrospection
The shortcut: I don't look back on work after it looks done.
The norm: Planning is never perfect. Requirements change, constraints grow.
The nudge: Stop for a minute every sprint or big chunk of work to gather and present learnings and plan for improvements.
Ask the AI to help you with the retrospection.
What went wrong, how to plan for it better, and how to improve the process.
AI Weekly
The shortcut: I learn something useful with AI but keep it to myself.
The norm: Share what you learn — breakthroughs, dead ends, and everything in between.
The nudge: A short weekly meeting where everyone shares one AI learning from the past week: a prompt that worked, a workflow that saved time, a bottleneck that slowed you down, or a tool that surprised you.
No slides, no prep — just honest exchange. The goal is collective acceleration, not individual hoarding.
Useless AI summaries
The shortcut: I use AI meeting summaries although they are almost useless.
The norm: If we're not using it, or it's proving useless, kill it or figure out how to improve it.
The nudge: Test the summaries for their value; if they are useless, disable them.
What This Is Not
This is not surveillance, scoring, or aggressive monitoring. Systems like that create resentment, gaming, and distrust, and they rarely work. If we build any mechanism, it must be minimally invasive, transparent, respectful of trust, and ideally time-saving.
The Better Approach
The only way to change culture is to change the behaviour.
To change behaviour, we need to work together and put in conscious effort.
We can do this by introducing tiny friction in the right places. Not punishment. Just gentle nudges. Small tools. Simple rituals. Clear norms. Clear expectations. Human judgment.
All reinforcing one idea:
We have the technology, now we need to build the culture.
The Future We Want
The best outcome of the AI era is not producing more content. It is producing more meaningful content.
More real thinking. More real collaboration. More real progress.
AI can help us do that. But only if we design our culture carefully.
It has never been easier to produce convincing-looking nonsense. Let's build a culture that doesn't reward it.