I want you to picture a very specific person, because this is not a philosophical debate. This is a career situation.
You’re a UX designer. You love Figma. You love the feeling of turning a messy problem into clean, tasteful UI. You love speed. You love craft. You love being the person who can crank out a polished flow while everyone else is still arguing about what the feature even is. Your portfolio is screens, screens, screens—beautiful, consistent, modern screens—and hiring managers love you for it. They barely read anything. They scroll. They nod. They go, “Yep. This person can ship.”
That person might be you. It might be your teammate. It might be half the industry.
Now here’s the moment that made me stop and stare at the wall for a while: I saw a job posting from PayPal that wasn’t shy about where this is going. It wasn’t “AI-assisted design tooling.” It wasn’t “copilot for designers.” It was basically: we want to automate the production of UI and connect it directly to live business inputs—revenue, conversion, telemetry, trends, real-time analysis, prediction—and then generate solutions continuously.
In plain English: the system sees a signal and changes the interface. Constantly. All day and all night.
And if PayPal is willing to say that in public—if they’re comfortable putting that vision in a job description—then you should assume everybody else is thinking it too, even if they’re being quieter about it. Because nobody wants to be the second company on earth to admit they’re trying to automate a whole profession. They want to be the first company to quietly succeed and act like it was “obvious.”
So if you’re sitting there thinking, “Yeah, but they can’t replace me, I have taste,” I need you to understand something, and I’m going to say it bluntly because it’s kinder than letting you keep believing it:
Taste is not a moat when your taste has already been turned into rules.
Most modern design teams spent the last decade doing something that was genuinely smart: standardizing. Tokens, components, pattern libraries, accessibility rules, spacing systems, interaction conventions. It made teams faster. It made products more consistent. It reduced chaos.
But it also did something else—something we didn’t want to think about because it ruins the vibe.
It made the work legible.
If your product has a design system that dictates what “good” looks like, then a lot of downstream UI design becomes: pick the correct component, apply the correct pattern, follow the rules, don’t break anything.
That’s not an insult. That’s how you scale.
But it also means the work is learnable in the way machines love: lots of examples, lots of constraints, lots of “approved vs rejected,” lots of history.
You don’t need a machine that understands beauty. You need a machine that predicts what will pass design review.
And we have built an entire industry around making that prediction easier.
Now, before you get mad at me, let me be fair to everyone involved, including the so-called “Figma farmers.”
A lot of designers didn’t choose to be trapped in UI polish. They were trained into it. They were rewarded for it. They were promoted for it. And they were hired in the first place because that’s what our hiring processes selected for.
This part matters, and it’s not comfortable: during the pandemic hiring boom—when everyone was hiring like drunk sailors—UX teams didn’t scale by carefully selecting for deep systems thinking. They scaled by selecting for what could be evaluated quickly.
Screens.
We did it. I did it. I sat in interview loops. I watched people scroll portfolios like they were browsing Zillow. “Look at the craft.” “Look at the polish.” “Look at the number of flows.” “Look how fast they can produce.”
And bootcamps, being rational businesses, trained people to win that game. They didn’t train “how to kill a feature with a principled argument.” They trained “how to present a case study with a gorgeous Figma flow.” Because that’s what got interviews.
So it’s not that product and engineering forced design into a corner and design heroically endured. The uglier truth is that design, under pressure and incentives, overselected for visible output. We trained ourselves to prove our worth with artifacts.
And now the artifact factory is being automated.
That’s the part that should piss you off—not at the designers, but at the incentive structure we all participated in, because it’s about to cash out.
Now let’s get to the real heart of it, because if this were just “AI makes pretty UI,” it’d be annoying but manageable.
The real thing PayPal is going after is latency.
Traditional UX is slow in a very specific way. Not because designers are slow. Not because teams are dumb. Because the loop is human.
A metric moves. Someone notices. Someone convinces others it matters. Research happens. A fix is designed. It gets reviewed. It gets built. It ships. The world changes again.
PayPal’s vision is: skip the human noticing-and-coordinating part. Wire the system directly into the signals. Let it propose and implement UI changes continuously.
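To make that concrete, here’s a deliberately simplified sketch of what a loop like that could look like. Every name in it is hypothetical; I’m illustrating the shape of the idea, not describing PayPal’s actual system.

```typescript
// A deliberately simplified sketch of a metric-driven UI loop.
// Every name here is invented for illustration; this is not PayPal's system.

type Signal = { metric: string; value: number; threshold: number };
type UiVariant = { componentId: string; change: string };

declare function readSignals(): Promise<Signal[]>;                 // revenue, conversion, telemetry
declare function generateUiVariant(s: Signal): Promise<UiVariant>; // model proposes a change
declare function deployBehindExperiment(v: UiVariant): Promise<void>;

const sleep = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));

async function optimizationLoop(): Promise<never> {
  while (true) {
    for (const signal of await readSignals()) {
      if (signal.value < signal.threshold) {
        // A metric dipped: generate a candidate fix and ship it as a live experiment.
        await deployBehindExperiment(await generateUiVariant(signal));
      }
    }
    await sleep(60_000); // re-check every minute, all day and all night
  }
}
```

Notice what’s missing: there is no step where a human notices, convinces, researches, or reviews. The latency of the whole loop is just however often it polls.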
That is a very different world. In that world, “being fast in Figma” is not a flex. It’s like bragging that you’re the fastest person alive at hand-washing dishes while the restaurant installs an industrial dishwasher.
You’re standing there, sleeves rolled up, like, “Guys, watch me go!” and management is like, “Yeah… cool… anyway…”
Now, this is where people either get defensive or go numb, so let me ground it in two very real anecdotes, because otherwise this stays abstract and you can keep comforting yourself.
When I worked at AWS, an old friend called me and said—exact words—“Why does setting up IAM roles make me want to commit murder?”
Now, was he being dramatic? Sure. But he wasn’t wrong. IAM is not confusing because the buttons are ugly. It’s confusing because the system’s mental model—the way it thinks about identity, permissions, relationships—does not map cleanly onto how humans think about responsibility and access. It’s architecture-first. You, the user, are being asked to understand the machine’s view of the world and behave accordingly.
And what does the organization do with that? It doesn’t say, “Let’s rethink how permissions should be modeled for human beings.” It says, “UX, make it clearer.”
Which often means: build explanatory UI around an unchanging architecture. Make it navigable. Make it survivable.
The second story: I did research with music students using SmartMusic. Teenagers, yes, moody, yes—but listen to what they said. “SmartMusic makes me want to slit my wrists.” And then, quieter but worse: “SmartMusic is what made my brother quit music.”
That isn’t about UI polish. That’s about the interaction contract: what the system demands from you, what it remembers, what it punishes, what it rewards, and how it makes you feel while you’re trying to learn. It’s cognitive and emotional architecture. The interface is just the messenger.
These two stories are extreme versions of something every designer has seen: the “UX problem” is often that the product was built around the system’s mental model, not the user’s. The UI is then asked to translate the system’s worldview into something humans can tolerate.
That’s explanatory UI. And yes, it is real work. It’s hard work. It takes skill. It is not trivial. But it is exactly the kind of work that an AI system—given enough examples—can start doing at scale, because it lives downstream of decisions that are already made.
And here’s the crucial point, the one that actually matters if we’re talking about a future where machines generate interfaces all day: the real design work is not just “should we build it or not.” It’s designing the dance between system and person.
It’s deciding what the system should know and remember, and what the human should know and remember.
That sounds subtle until you realize it’s basically the whole game.
If your product makes the user remember fifteen things the system could easily remember, you’re building stress. If your product hides state the user needs to understand, you’re building confusion. If your product demands the user maintain the system’s internal picture of the world in their head, you’re building anger. If your product pushes critical memory into tooltips and docs and “learn more,” you’re building failure.
Design at its best is a kind of cognitive engineering. It’s deciding where the burden goes, and making the burden land where humans are actually good at carrying it. Humans are good at recognizing patterns, forming habits, and navigating a consistent mental model. Humans are bad at holding lots of arbitrary state, tracking invisible rules, and recovering from unclear errors without feedback.
And here’s the problem: most organizations accidentally design products that require humans to do exactly what humans are bad at, because that’s what the architecture made easiest.
Then they hire a designer to paint over it.
Now let’s talk about the scary “third outcome,” because it’s not just PayPal building a continuous optimization machine. It’s everybody vibe-coding like lunatics. PMs, engineers, VPs—all of them generating screens, flows, features, filters, settings, clever little options. The tools are so helpful they can’t stop themselves. And humans, being easily seduced by possibility, keep saying, “Sure, add that too.”
In that world, UI becomes an all-you-can-eat buffet run by a robot chef who never sleeps and never gets tired of adding “one more thing.”
Your careful, tasteful Figma work doesn’t look valuable. It looks slow. It looks fussy. It looks like you’re polishing a spoon while the kitchen is flooding with pasta.
So the move is not “I will generate more UI too.” That’s suicide. You cannot win the output contest. Output will become infinite.
The move is to become the person who can look at infinite output and say, calmly and clearly, “Most of this is noise, and here is why.”
Not as an aesthetic judgment. As an interaction judgment. As a mental model judgment.
This is where I need to be skeptical, too, because I don’t want to turn PayPal into the boogeyman and pretend they’re guaranteed to succeed. They might not. Systems wired to business metrics tend to optimize what’s measurable, and what’s measurable is often short-term. They can overfit. They can drift. They can create interfaces that juice conversion while eroding trust, comprehension, or long-term satisfaction. They can make products feel like slot machines—always adjusting, always nudging, never letting the user build a stable understanding of what the system is and how it behaves.
And when that happens—when users start saying “I don’t trust this thing,” or “it keeps changing,” or “it feels manipulative,” or “I can’t predict what it will do”—that won’t show up cleanly in your dashboard until it’s already a problem.
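If you want to see why that blindness is structural, look at what the scoring function inside such a loop would plausibly contain. This is a toy of my own invention, not anyone’s real objective:

```typescript
// A toy scoring function for a metric-driven loop. My invention, not anyone's
// real objective. Note what gets a vote, and what doesn't.

type Experiment = { conversionLift: number; revenueLift: number };

function score(e: Experiment): number {
  // Only the measurable, short-term signals appear here.
  return 0.6 * e.conversionLift + 0.4 * e.revenueLift;
  // Trust, comprehension, predictability: none of it is in the type,
  // so the optimizer is free to spend all of it to move the two numbers above.
}
```

Everything those users are complaining about lives outside that function. The loop isn’t ignoring the damage; it can’t see it.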
Which means there is still human work here. But it’s not the work most Figma-first designers have been trained for, and that’s the part we have to say without being cruel about it.
If you’ve been rewarded for speed and polish, it doesn’t mean you’re dumb. It means you played the game in front of you. If you got hired because your portfolio showed pixel-perfect flows, that’s not because you’re shallow. It’s because that’s what the market selected for. We did that. Our teams did that. Our interview loops did that. Our bootcamps responded to that. The whole ecosystem reinforced it.
But now the ecosystem is changing, and the old signal of competence—screens—won’t mean what it used to mean.
So what do you do, practically, if you’re that gung-ho designer who loves craft and doesn’t want to become an “AI policy person,” and also doesn’t want to be automated?
You don’t have to become a philosopher. You don’t have to become a PM. You don’t have to become “strategic” in a buzzword way. You have to move one layer earlier than the screen. You have to start designing interaction contracts: what the system remembers, what the user remembers, what feedback is given when things go wrong, what state is visible, what is hidden, and why.
You have to start caring about architecture—not the backend details, but the human-facing shape of it. What is this thing? What does it believe about the world? What does it require the user to believe? Can a normal person form an accurate mental model without needing a wiki?
And yes, you can start doing this even if your org is messy. You can do it in small slices. You can do it in the way you frame problems. You can do it by asking better questions before you produce UI. You can do it by writing clearer system stories: “Here’s what the system knows at this point; here’s what the user thinks it knows; here’s the gap; here’s where confusion happens.” You can do it by designing for recoverability instead of perfection. You can do it by treating “the user must remember X” as a design smell, not a requirement.
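To show what I mean by a system story, here’s one possible shape for the artifact, using the IAM situation from earlier. The structure and the field names are mine, invented for illustration; this isn’t a standard format.

```typescript
// One possible shape for a "system story." The structure and field names
// are invented for illustration; this is not a standard format.

type SystemStory = {
  step: string;
  systemKnows: string[];   // state the system actually holds at this point
  userBelieves: string[];  // what a reasonable user assumes it holds
  gap: string;             // the mismatch between those two pictures
  failureMode: string;     // where the mismatch surfaces as confusion
};

const iamRoleSetup: SystemStory = {
  step: "User attaches a policy to a new role",
  systemKnows: ["policy JSON", "trust relationships", "evaluation order"],
  userBelieves: ["I just gave this person access to that thing"],
  gap: "The system models permissions as policy evaluation; the user models them as people and responsibilities.",
  failureMode: "An access-denied error with no hint of which rule produced it.",
};
```

The format doesn’t matter. What matters is that the gap between what the system knows and what the user believes is the actual design problem, and no amount of UI polish closes it.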
I’m not saying it’s easy. I’m saying it’s possible. Over the last year, as generative AI got serious, I found myself spending more and more time exactly there—because it’s the only place where the work doesn’t collapse into “just generate another screen.” It’s the place where you can still create clarity that isn’t cosmetic. It’s also the place where you can still make a system feel honest, stable, and learnable instead of twitchy and optimized.
Now, I’m not going to end this with some smug “so the future belongs to…” speech. That’s corny. Also, nobody knows. There are too many variables, too many organizational politics, too many ways this could go sideways.
But I do think a few things are likely.
It’s likely that UI production gets cheap enough that “fast in Figma” stops being rare. It’s likely that companies will try to wire optimization loops directly into interfaces, because it’s the obvious move if your goal is moving metrics. It’s likely that some of these systems will work well enough to change hiring immediately. It’s also likely that some will fail in ways that create new kinds of UX disasters—products that are constantly “improving” and yet increasingly incomprehensible.
And it’s likely that the designers who keep their leverage won’t be the ones who can generate the cleanest screens fastest. They’ll be the ones who can make the system-user dance make sense: who can decide what the system should carry, what the user should carry, and how to make that trade visible and humane.
So if you’re reading this as a Figma-loving, craft-proud designer, I’m not here to dunk on you. I’m here to tell you the truth I wish someone had told me earlier: a beautiful explanatory interface is still explanatory. It can still be a mask. And in a world where machines can generate masks all day, your job is not to become a faster mask-maker.
Your job is to stop building things that need masks in the first place, and when that’s not possible, to redesign the interaction contract so the user isn’t forced to carry the system’s architecture in their head like a punishment.
You can be mad at PayPal. You can be mad at AI. You can be mad at the hiring boom and the bootcamps and the “screens-only” portfolio culture. But after you’re done being mad, you still have to do something with that information.
Look, this isn’t about humiliating you for loving craft.
It’s about not letting you shrink your entire identity to a tool.
You’ve got thirty, maybe forty years left in this career. Forty years. If you think you’re going to be lovingly adjusting auto-layout constraints in 2065 like some digital watchmaker, that’s about as likely as someone making a living today hand-coloring black-and-white photographs for the newspaper.
Tools change. Entire mediums change. Nobody sits around crying that they’re not typesetting by hand anymore. The people who survived didn’t cling to the tool. They followed the work.
And the work is not “making screens.”
The work is shaping how humans and systems deal with each other.
That’s bigger than Figma. That’s bigger than AI. That’s bigger than whatever tool gets hot next year.
You’ve already proven you can learn tools. You did it once. You can do it again. That’s not the impressive part.
The impressive part—the part that actually makes you dangerous in a good way—is whether you can look at a system and say, “No. This is the wrong way for a human to have to think about this.”
Whether you can design the memory of a system so people don’t have to carry it around like a backpack full of bricks.
Whether you can see where automation should stop, not because it’s unethical in a hand-wringing way, but because it makes the interaction worse.
That’s not pixel pushing. That’s not vibe coding. That’s not aesthetic judgment.
That’s grown-up design.
And if you lean into that—if you start practicing that instead of obsessing over how fast you can produce variants—you’re not shrinking. You’re leveling up.
Yeah, the machine is coming for the easy stuff. Good. Let it. Why are you fighting to keep the repetitive part of your job? Let it have that. You’ve got better things to do.
You’ve got decades ahead of you. Decades to build things that don’t make people want to commit murder setting up permissions. Decades to build systems that don’t make kids quit music. Decades to shape how automation actually behaves in the real world.
That’s not doom. That’s an opportunity hiding inside an uncomfortable truth.
So don’t walk away from this sulking. Walk away thinking, “Alright. Fine. If the game is changing, then I’m changing with it.”
The world moves. So move with it.