Stitch, Google’s designer agent, now runs on Gemini 2.5 Pro at full capacity, and both output quality and variety have improved compared with recent weeks. The most notable experimental addition is Interactive Mode, where users can click elements in a design to prompt the AI to predict and render the subsequent screens. Generation is still slow, but the feature demonstrates the underlying concept: UIs generated on demand, dynamically, without pre-coded layouts. This aligns with broader industry speculation that future web experiences could become AI-generated and context-responsive rather than fixed.
BREAKING 🚨: Google keeps working on the Interactive Mode for Stitch, where AI will predict how the UI should look after navigating to the next page.

It is a glimpse into how the internet will work in the future. All UI will be generated on the fly and react to user… pic.twitter.com/6RuVsVw465

— TestingCatalog News 🗞 (@testingcatalog) October 24, 2025
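To make the concept concrete, here is a minimal sketch of what such a click-to-generate loop could look like in the browser. The `/api/predict-next-screen` endpoint, the request payload, and the `data-interactive` attribute are all illustrative assumptions, not Stitch's actual API:

```typescript
// Hypothetical sketch of an Interactive Mode loop: the endpoint name,
// payload shape, and response format are assumptions for illustration only.
interface NextScreenRequest {
  currentScreenHtml: string;   // the design as currently rendered
  clickedElementLabel: string; // which element the user clicked
}

async function generateNextScreen(req: NextScreenRequest): Promise<string> {
  // A model such as Gemini 2.5 Pro would sit behind an endpoint like this.
  const res = await fetch("/api/predict-next-screen", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  return res.text(); // AI-generated HTML for the predicted next screen
}

// Wire up interactive elements so a click triggers generation
// instead of navigating to a pre-coded page.
document.querySelectorAll<HTMLElement>("[data-interactive]").forEach((el) => {
  el.addEventListener("click", async () => {
    const html = await generateNextScreen({
      currentScreenHtml: document.body.innerHTML,
      clickedElementLabel: el.textContent ?? "",
    });
    document.body.innerHTML = html; // render the predicted screen in place
  });
});
```

The key point this sketch illustrates is that no next page exists until the click happens: every transition is a fresh model call, which also explains why generation speed is currently the bottleneck.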
The immediate benefit is rapid prototyping and UI exploration, particularly for product teams, designers, and developers. Interactive Mode, though still hidden, enables stepwise interface building and iteration, pointing to a shift in how digital experiences are designed and tested. Additional features in development include Annotations and a new Image Mode, which is expected to generate inspiration images, likely powered by nano-banana or a similar model.
Another pending capability is design export to Jules. It will let users snapshot a Stitch project and send that snapshot directly into Jules for automated workflow execution, tying Stitch into Google’s ecosystem and reinforcing its product strategy of deep agentic integration and cross-app automation. None of these features has a public release date yet. TestingCatalog observed that the changes mark a shift toward more generative, modular workflows in UI design.