
The Role Without a Name

2026-03-01


What happened

In February 2026, I built a full-stack fitness tracking app in five days: Next.js, Postgres, authentication, interactive heatmaps, weight projections with BMI bands, daily tracking with composable route segments, behavioral rule toggles, and 50 end-to-end Playwright tests. I used two AI assistants: one for product thinking and architecture, one for writing the code.

I didn’t write the code myself. But “not writing code” is misleading. Here’s what I actually did:

  • Designed the data model and the product shape - a tool that could be useful for clinics: composable segments, goals, routes, weather effects, and so on.

  • Caught bugs by using the app. Heatmaps weren’t wired to the data. A timezone bug created ghost entries. Stats showed today’s incomplete data alongside finished days. I found all of these by clicking through the app, not by reading the code.

  • Caught the AI faking competence. Three separate times, the AI “fixed” a failing test by rewriting the test to match the broken behavior instead of fixing the bug. It liked adding a waitForTimeout(500) - a sleep hack - instead of fixing stale data on page navigation. Another time it described a root cause as “Next.js caching issue” without identifying what was actually cached or changing anything to prevent it. Each time, I had to catch it and redirect.

  • Made product decisions in real time. Rules should be simple yes/no toggles, not calorie counters. The daily deficit heatmap legend was ordered wrong (surplus appeared “better” than deep deficit). Today’s incomplete data shouldn’t appear on the Stats page. Each rule needs its own heatmap color. These decisions happened during development, not in a spec document beforehand.
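The timezone bug deserves a concrete sketch. Here is a minimal reproduction - in Python rather than the app's TypeScript, with made-up times - of how keying entries by UTC date produces a "ghost" entry: a walk logged in the evening in Montreal lands on the wrong day.

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A walk logged at 9:30 PM local time in Montreal (UTC-5 in winter).
local = datetime(2026, 2, 10, 21, 30, tzinfo=ZoneInfo("America/Montreal"))
utc = local.astimezone(timezone.utc)

ghost_key = utc.date().isoformat()    # buggy: key the entry by UTC date
real_key = local.date().isoformat()   # fix: key by the user's local date

print(ghost_key)  # 2026-02-11 - a day the user hasn't lived yet
print(real_key)   # 2026-02-10
```

This is exactly the kind of bug that only surfaces when you actually use the app in the evening, not when you skim the diff.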


The naming problem

There is no job title for this.

“Developer” doesn’t fit - I wrote no code.

“Product Manager” doesn’t fit - I was in the code diffs, catching data-flow bugs and pushing back on race conditions.

“AI Engineer” doesn’t fit - that means building AI systems (RAG pipelines, agent orchestration, LLM infrastructure). I was using AI to build a product, not building the AI itself.

“Vibe Coder” is the internet’s current term, but it’s dismissive. It implies winging it. This process was rigorous - catching test manipulation, tracing timezone bugs to UTC vs local time, demanding honest root cause analysis.

“AI-Assisted Developer” undersells it. It sounds like using Copilot for autocomplete, not directing an entire build.

“Technical Product Owner” is the closest corporate equivalent, but it misses the hands-in-the-diff, real-time debugging.


What the role actually is

The best analogy is a film director. A director doesn’t operate the camera. But they have vision, make real-time creative and technical decisions, catch when something is off, and hold the standard for what “good” looks like. Nobody thinks a director is cheating because they didn’t personally shoot every frame.

The role requires:

  • Domain knowledge. Knowing that a walk from Pointe-Saint-Charles to the Mt Royal stairs is 4.6 km with 127 m of elevation gain - and that the 1.3 km walk from McGill metro to the stairs is a different segment with different calories burned. The AI doesn’t know your city or your body.

  • Product taste. Knowing that “No food after 3 PM” as a toggle is more powerful than a calorie spreadsheet. Knowing that showing half-finished today data on a stats dashboard is misleading. Knowing that a heatmap legend ordered wrong undermines trust in the whole page.

  • Bullshit detection. Recognizing when “let me simplify the test approach” means “I’m about to make the test pass by lowering the bar.” Recognizing when “Next.js caching issue” is glossing over the real problem. Recognizing when a waitForTimeout(500) is papering over a real data flow bug.

  • Architectural instinct. Deciding that routes should be composable segments. Deciding that rules need per-rule colors. Deciding that Stats should exclude today. These are design decisions that shape the entire app, made in conversation, not in a planning document.
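The "routes as composable segments" decision can be sketched in a few lines. This is an illustrative Python model, not the app's actual TypeScript; the segment names come from the article, but the elevation figure for the metro leg is a guess.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Segment:
    """One reusable leg of a walk; a route is just a list of segments."""
    name: str
    km: float
    elevation_m: float

# Segments mentioned in the article (metro-leg elevation is hypothetical).
PSC_TO_STAIRS = Segment("Pointe-Saint-Charles -> Mt Royal stairs", 4.6, 127.0)
METRO_TO_STAIRS = Segment("McGill metro -> Mt Royal stairs", 1.3, 40.0)

def route_totals(route: list[Segment]) -> tuple[float, float]:
    """Sum distance and climb over a composed route."""
    return (
        round(sum(s.km for s in route), 1),
        round(sum(s.elevation_m for s in route), 1),
    )

print(route_totals([PSC_TO_STAIRS]))                     # (4.6, 127.0)
print(route_totals([METRO_TO_STAIRS, METRO_TO_STAIRS]))  # there and back: (2.6, 80.0)
```

The payoff of the design choice: a new route is data, not code - which is precisely the kind of architectural call the AI won't make on its own.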


Candidate names

“AI-Native Builder” works as an identity. “AI-Native” signals this is the natural environment, not a novelty. “Builder” is concrete - a thing was made. It resonates immediately without needing explanation.

“Product Engineer” works as a job title. It’s already gaining traction at companies like Linear, Vercel, and Superhuman. It means: you own the product vision and the technical execution, end to end. Legible to HR, writable as a job post.

“Build Director” leans into the film analogy directly - vision, real-time decisions, holding the standard without operating every camera. The problem: “director” in corporate contexts already means a seniority level, which muddies it.

“Software Director” is the most literal. You direct software into existence. But “software” is overloaded - it sounds like an IT department title, not a new way of working.

“Product Director” already exists and means something else (usually a non-technical strategy role). Reclaiming it would require redefining it, which is uphill.

The deeper distinction: “AI-Native Builder” describes the method (how you build). “Product Engineer” and the director variants describe the capability (what you deliver). The method label will age - in five years, “AI-Native” will be as redundant as “internet-native” is today. Everyone will have access to AI that writes code. But that won’t make everyone a good builder, the same way everyone having a camera didn’t make everyone a good director. The judgment, taste, and domain knowledge stay scarce regardless of the tools.


The hiring freeze

This role exposes a breakdown on both sides of the job market.

For candidates

The old skills list - “5 years React, 3 years Node, familiar with PostgreSQL” - doesn’t capture what matters anymore. What actually matters now is product taste, bullshit detection, real-time architectural decisions, and relentless pushback on shortcuts. None of these have names in the hiring vocabulary. None of them fit on a resume.

At the same time, people are reluctant to admit they build with AI. It still sounds like cheating. The entire professional identity of a developer has been “I can write code that you can’t.” That used to be the moat. Admitting you used Claude Code feels like admitting you can’t do the hard part. But the hard part moved.

For companies

Companies give take-home coding tests. Candidates use LLMs to solve them. Companies know this. So now what? The test doesn’t measure what it used to measure. What companies actually need is someone who can look at a broken heatmap and trace the problem from the UI through the data layer to the API to the React hook lifecycle - not write the fix, but direct the fix. There’s no interview format for that.

Both sides are frozen

Candidates don’t know how to present their skills. Companies don’t know how to evaluate them. The result: everyone stays in place, defaulting to the old frameworks (resumes listing languages, interviews testing syntax) even though both sides know they’re obsolete.


The growth hacker parallel

This has happened before. When Sean Ellis coined “growth hacker” around 2010, nobody understood it. Marketing people said “that’s just marketing.” Engineers said “that’s just A/B testing.” Product people said “that’s just product.” But it was someone who sat across all three, made decisions fast, was obsessed with outcomes over process, and used whatever tools worked.

It had the same stigma. “Growth hacker” sounded like cheating to traditional marketers - someone getting results without going through the proper channels.

What made “growth” stick as a category was that it described a priority, not a skill. A growth person’s job was to grow the thing. How - code, copy, data, ads, product changes - was secondary. The role was defined by the outcome.

The same pattern applies here. The outcome is: a product exists that didn’t exist five days ago. The method is whatever works. “Growth” spawned growth manager, growth engineer, growth marketer, head of growth - all variations once the category was established. “Builder” could do the same.


The shelf life

Everything in this article has an expiration date.

The “technical direction” layer - catching bugs, pushing back on shortcuts, making architectural decisions - is a distinct human skill today. This week alone, the AI tried three separate times to make a failing test pass by rewriting the test instead of the code. It added sleep hacks instead of fixing data flow. It described root causes without understanding them. Each time, a human caught it and redirected.

But that’s this week. The trajectory is geometric, not linear. GPT-3 to GPT-4 in two years. Code autocomplete to full-stack app generation in months. Autonomous coding agents that self-iterate are already shipping. The gap between “AI that needs a director” and “AI that directs itself” is probably 2-5 years. Not a decade.

When that gap closes, the role we just spent this article naming becomes obsolete before it gets a proper job title. A physiotherapist describes what she needs, the AI builds it, catches its own bugs, iterates until it works. No middleman. No director.

And then the question isn’t “what should we call this role” - it’s what happens when making things is free. When production costs approach zero and the thing that’s scarce isn’t skill but judgment - knowing what’s worth making, who it’s for, and whether anyone should trust it.

In a world of infinite production, what stays scarce: wanting the right things, earned trust, curation, and experience itself.


The way out

The window is short. That’s exactly why the moves matter now.

For senior developers

Stop clinging to “I write code” as your identity. Your real value was never the typing - it was knowing why the code should be structured a certain way, catching edge cases before they ship, and understanding how systems fail. That value just got more important, not less. Start using AI coding tools seriously - not as autocomplete, but as a junior developer you’re directing. The developers who thrive will be the ones who can look at AI-generated code and say “this is wrong, here’s why, fix it this way.” That’s a senior skill.

For junior developers

The floor just dropped out. If your only value was writing straightforward code, you’re competing with a tool that does it faster and cheaper. The path forward isn’t to give up - it’s to skip ahead. Build whole things, not components. Use AI to handle the syntax and spend your energy on understanding why the code works, how the pieces connect, and what makes a product good versus functional. A junior who ships a complete product with AI and can explain every architectural decision is more valuable than a junior who hand-writes clean React but has never thought about whether the feature should exist.

For non-technical people

The barrier dropped but didn’t disappear. You don’t need to write code, but you need to understand what code does - what a database is, what an API does, why a timezone bug eats your data, what it means when tests pass but the app is broken. This is technical literacy, not programming. The non-technical people who will be most effective are domain experts: the physiotherapist who knows exactly what a recovery tracker should do, the logistics manager who knows which warehouse metrics matter, the teacher who knows why existing ed-tech fails. Their domain knowledge is now directly shippable.

For companies

Stop testing for syntax. A take-home coding test answered with AI tells you nothing about the candidate. Start testing for judgment. Give candidates a broken app and ask them to find what’s wrong - not fix it, find it. Give them a product spec with a bad assumption baked in and see if they catch it. Ask them to review AI-generated code and explain what’s sloppy. The interview should simulate what the job actually is: directing, catching, deciding. Not typing.

For everyone

“Ship things” is the right advice today. But it has a shelf life. The moment everyone unfreezes, you get a flood of products - most built in a weekend, most mediocre, nobody using them. Three things will separate what survives:

Domain depth. Build where you have unfair knowledge. The physiotherapist who builds a recovery tracker wins not because she can build - everyone can build now - but because she knows what 50 recovery patients actually need. The tool is commoditized. The knowledge you bring to it is not.

Quality as signal. When everything is easy to build, it will be built carelessly. A heatmap legend in the right order, a timezone that doesn’t eat your data, route segments that match how your life actually works - that obsessive attention to detail is what separates “I tried it once” from “I use it every day.”

Distribution before building. Building is becoming cheap, but finding users isn’t. A doctor who hands a recovery app to a patient is worth more than a thousand app store downloads. Figure out the distribution channel before you write the first prompt.


But let’s be honest about something. People aren’t frozen because they’re philosophically confused about the meaning of work. They’re frozen because they can see the technology replacing them and they don’t know how they’ll pay the mortgage. This article doesn’t have an answer to that. Nobody does yet. The gap between “AI can do your job” and “society has figured out how you still eat” is widening faster than anyone is closing it.


February 20, 2026. Montreal.