In this episode, we explore UICoder, a new research project that teaches large language models to generate user interface code—without human supervision. Traditionally, building a functional app interface requires developers, designers, and countless hours of testing. But UICoder flips this process on its head: instead of relying on expensive human feedback, it learns from its own mistakes through a fully automated feedback loop.
Here’s how it works. The system generates huge amounts of SwiftUI code, then automatically checks whether that code actually compiles and whether the resulting interface matches the request. Compilers act as strict teachers, catching errors, while vision-language models judge whether the design looks right. Programs that fail are filtered out, the survivors are scored and deduplicated, and the model is fine-tuned on this cleaner, higher-quality data before the whole cycle repeats.
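For listeners who want a concrete picture of that loop, here is a minimal sketch in Python. It is our illustration of the idea, not the paper’s actual pipeline: generate_candidates, vlm_score, the scoring threshold, and the example prompt are all invented placeholders, and the compile check simply asks swiftc to type-check each program.

```python
import random
import shutil
import subprocess
import tempfile

def generate_candidates(prompt: str, n: int) -> list[str]:
    # Placeholder: the real system samples many programs from the current LLM.
    stub = (
        "import SwiftUI\n"
        'struct Demo: View { var body: some View { Text("Hello") } }'
    )
    return [stub] * n

def compiles(source: str) -> bool:
    # The "strict teacher": keep only programs the Swift compiler accepts.
    # Requires the Swift toolchain (swiftc) to be installed.
    if shutil.which("swiftc") is None:
        raise RuntimeError("swiftc not found; install the Swift toolchain")
    with tempfile.NamedTemporaryFile(suffix=".swift", delete=False) as f:
        f.write(source.encode())
        path = f.name
    result = subprocess.run(["swiftc", "-typecheck", path], capture_output=True)
    return result.returncode == 0

def vlm_score(prompt: str, source: str) -> float:
    # Placeholder: the real pipeline renders the UI and has a
    # vision-language model rate how well it matches the prompt.
    return random.random()

def curate(prompts: list[str], per_prompt: int = 8,
           threshold: float = 0.7) -> list[dict]:
    dataset = []
    for prompt in prompts:
        for source in generate_candidates(prompt, per_prompt):
            if not compiles(source):      # filter out code that doesn't build
                continue
            if vlm_score(prompt, source) >= threshold:  # keep judged-good UIs
                dataset.append({"prompt": prompt, "completion": source})
    return dataset                        # fine-tune on this, then repeat

if __name__ == "__main__":
    data = curate(["a settings screen with a dark-mode toggle"])
    print(f"kept {len(data)} examples for the next fine-tuning round")
```

The key design point the episode highlights survives even in this toy version: neither check involves a human, so the filter can run over hundreds of thousands of generated programs at a time.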
The results are impressive. Starting from StarChat-Beta, a model with virtually no knowledge of SwiftUI, UICoder created nearly one million synthetic programs in just a few iterations. After training on this self-curated dataset, it reached performance levels close to GPT-4—and even outperformed GPT-4 in compilation success rates. In other words, it doesn’t just write more code, it writes code that actually works.
We’ll break down what this means for developers, designers, and anyone building digital products. Is this the beginning of AI systems that can autonomously prototype and refine interfaces? Could this reshape how apps are built, lowering the barrier for solo creators and startups? And what happens when machines become their own best teachers?