The Wrong Efficiency: What AI Productivity Numbers Aren't Telling You

What 75% AI-generated code teaches nonprofit organizations about AI-generated anything

There are a couple of numbers making the rounds in tech right now that are getting a lot of attention for the wrong reasons.

Google announced that 75% of its new code is now AI-generated, reviewed by engineers but written by machines. That number was 25% eighteen months ago. Snap hit 65% and simultaneously laid off 1,000 employees, 16% of its global workforce, with CEO Evan Spiegel citing AI efficiencies in the memo he sent to staff. Snap's stock jumped 7% the same day.

The coverage has been relentless, but, as is often the case when mainstream media covers AI, the commentary has mostly treated these numbers as something to either sprint toward or panic about (or both).

Both reactions miss the point entirely. The truth is, most organizations are too focused on these numbers and, as a result, are not asking the right questions. 

The question everyone is skipping

The first question is usually, "How much of our work is AI doing?" Most organizations stop there.

The second question is harder to get a handle on: "Which work, specifically, is AI doing?"

At Google and Snap, AI is busy generating code, which is fine. The efficiency gain is real. But the work being automated is also some of the most trainable, most learnable, most foundational work in the organization. Junior developer employment for workers aged 22 to 25 has already dropped nearly 20% since 2022. The pipeline of people who learn by doing is narrowing in real time.

So is this an efficiency, or is it actually a trade? I think it is the latter; the real cost (the ratio of trained and experienced engineers to output) just does not show up on the same spreadsheet as the savings.

Introducing the Jester AI Adoption Diagnostic

Before you approve any AI adoption pitch, in any department, at any scale, run it through these four questions to see how your AI adoption strategies stack up. 

Question 1: What, specifically, is being made faster?

Not "what department." Not "what category of task." Get granular. If the answer is "content production," ask what kind, for which audiences, and reviewed by whom. 

AI moves fast. Sometimes too fast. Organizations rush to automate before they can articulate what, exactly, they are automating. The result is efficiency that looks good in a board deck and feels wrong six months later. Sure, headcount has been reduced, and output has been maintained or even increased, but nobody can quite explain why the grant application sounds slightly off or why the donor newsletter no longer sounds like anyone they know.

Vague efficiency is almost always someone else's shortcut landing in your lap.

Question 2: Who was learning by doing that work?

Every organization has work that looks like output but functions as training. A junior communications coordinator drafting your newsletter is not just producing a newsletter. She is learning your voice, your audiences, your instincts about what lands. If AI takes that task before she develops those instincts, you have not saved time. You have interrupted a learning and development cycle you will feel in three years.

At scale, this is how you build a workforce that can supervise AI but cannot evaluate it. That gap compounds quietly until it erupts loudly.

Question 3: Who is accountable for the output, and do they have enough context to catch what is wrong?

At Google, engineers review AI-generated code before it ships. That review layer is doing real work. It requires expertise, judgment, and domain knowledge. The moment your review layer becomes rubber-stamping, your AI efficiency becomes a big old AI liability.

Ask plainly: does the person approving the output actually understand it well enough to catch a confident, well-formatted inaccuracy? Do they have the experience to know whether the output is aligned with the organization's brand voice and values? If the answer is unclear, your efficiency is fragile.

Question 4: How will you know if this is working?

This is the question that separates a strategy from a hope. Efficiency is measurable. So is quality degradation. So is staff capability drift over time. If your AI adoption plan does not include a way to measure what you are gaining and what you might be losing, you are not managing a transition. You are just moving fast and calling it progress.

Define the signal before you need it, not after.

These four questions form what we call the Jester AI Adoption Diagnostic. It is the first tool in a broader framework we are developing to help mission-driven organizations build AI strategies that are sustainable, not just fast. More on that soon.

What this means if you are not a tech company

People in nonprofits, post-secondary institutions, and public sector organizations: if you are reading this and wondering whether it applies to you, it does, just differently.

Your 65% is not code. It might be internal reports, correspondence, funding applications, donor communications, or policy summaries. The instinct to automate those things is reasonable. The risk is identical.

The work being made "efficient" often carries the institutional memory, the relationship nuance, and the hard-earned voice that makes your communications actually land. AI can produce a version of it quickly. The question is whether anyone in your organization still knows enough to tell the difference between a good version and a fast one.

The actual lesson from Google and Snap

The headline is that AI is writing most of their code. The lesson is that efficiency at scale requires a sophisticated review layer, real investment in the people doing the reviewing, and a clear-eyed understanding of what you are trading when you automate foundational work.

Snap laid off 16% of its workforce the same week it announced its AI efficiency numbers. The stock went up. That may be the right call for Snap's business model. The question for your organization is whether your version of that trade serves your mission, your stakeholders, and your long-term capacity.

Efficiency is a tool. It is not a strategy.

Before you run toward the number, run the diagnostic. Which work is being made efficient? Who is learning from it? Can the person approving the output actually tell if it is wrong? How will you measure success? 

That is the difference between truly adopting AI and just adopting efficiency and speed.

-

Jester helps organizations build AI strategies grounded in capacity, not just efficiency. Start the conversation.

Susan Murphy

Co-Founder, Jester • Veteran Communicator • Not-for-Profit, B2B & Public Sector Strategist • Digital Media & AI Consultant
