A Conversation on Building Smarter AI Agents with Neon and Wordware
Fast iteration is key, from the IDE all the way down to the infra
A few weeks ago, we joined Wordware for a panel conversation about AI agents. For those who prefer reading to watching, here’s a summary of our conversation, plus a Wordware demo.
The speakers in this conversation were Raouf Chebri, Senior Developer Advocate at Neon (asking the questions), and Robert Chandler, Co-founder and CTO of Wordware.
What are the challenges teams are facing when building AI agents?
One of the biggest challenges we see among teams building AI agents is a slow feedback loop. This is especially common when engineers aren’t domain experts: without a clear sense of what “good” output looks like for the agent, they have to rely on someone else for feedback, which slows development significantly. Constant iteration is key for AI agents, so it’s essential to put the person who knows what “good” means, the domain expert, in the driver’s seat. That’s how teams achieve a much faster iteration cycle.
This challenge is what inspired us to build Wordware. Wordware allows engineers and product teams to collaborate seamlessly within a prompt-first environment that’s both intuitive and powerful. This way, teams can experiment, adapt, and refine agents in real time, drastically improving development efficiency and agent quality.
My co-founder and I met over 10 years ago while studying machine learning at Cambridge, so we’ve been around AI for a while. Before Wordware, we took different paths in industry: he built early memory-augmentation tech on top of GPT-2, and I worked on self-driving cars. In both cases, we saw how crucial tight feedback loops were to success.
What sets Wordware apart from other agent frameworks out there?
Something different about Wordware is that it’s designed from first principles for prompt engineering, blending the flexibility of natural language with the structure of programming. Wordware has a prompt-first experience that allows both engineers and non-technical team members (e.g. the domain experts) to collaborate via an intuitive, web-based IDE. This allows the team to iterate on prompts directly, move much faster, and build more effective agents as a result.
Another key differentiator of Wordware is its modularity. Users can create specialized, reusable components that make up narrow agents, each designed for a specific task, and then connect these components to build more comprehensive solutions. Wordware also supports multiple LLMs, so teams can optimize their agents around the strengths of each model without reworking their entire system.
This adaptability with different LLMs is interesting.
Yeah, we built Wordware to be completely model-agnostic so users can leverage the best LLMs for their specific needs without any fuss. Sometimes, you want a model that’s fast and cost-effective—something like Llama or Mistral is perfect for that. Other times, you need deeper reasoning or creativity, and that’s where GPT-4 or Claude shine.
What’s cool about this approach is that it’s as easy as switching tabs. You’re not rewriting code or changing SDKs; you’re just picking the model that best fits your task. For example, GPT models are sharp for logic and precision, while Claude has a subtler, more tasteful style for writing or summarizing content. So if you’re crafting a blog post, you might go with Claude; for intense reasoning or math-heavy tasks, you’d choose GPT.
You can even combine these models within a single workflow. Need GPT to handle some reasoning and then Claude to polish the language? No problem, Wordware lets you chain these capabilities together seamlessly, so you get the best of all worlds in one place. It’s all about letting the models play to their strengths and building the most effective AI workflows without being locked into a single option.
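To make the idea concrete, here is roughly what that kind of two-model chain looks like when you wire it up by hand in Python with the OpenAI and Anthropic SDKs directly. This is only a minimal sketch of the pattern Wordware abstracts away in its prompt-first IDE; the model names, prompts, and environment variables are illustrative assumptions, not Wordware’s implementation or API.

```python
# Minimal sketch: chain one model for reasoning, another for polish.
# Assumes OPENAI_API_KEY and ANTHROPIC_API_KEY are set in the environment;
# model names and prompts are placeholders.
from openai import OpenAI
import anthropic

openai_client = OpenAI()
anthropic_client = anthropic.Anthropic()

question = "Compare three database scaling strategies for a spiky-traffic app."

# Step 1: use a GPT model for the structured reasoning pass.
reasoning = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": f"Think step by step: {question}"}],
).choices[0].message.content

# Step 2: hand the draft to Claude to polish the language for a blog audience.
polished = anthropic_client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": f"Rewrite this as a clear, friendly blog paragraph:\n\n{reasoning}",
    }],
).content[0].text

print(polished)
```

In Wordware, the equivalent is simply choosing a model per step in the prompt editor rather than juggling two SDKs and stitching their outputs together yourself.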
I heard that Wordware had a viral moment recently…
We launched an agent that analyzed Twitter feeds and offered personality insights, including a roast feature that people absolutely loved.
Overnight it became a hit we weren’t expecting. We saw a big traffic spike that turned out to be a real stress test for our infrastructure. That’s when Neon’s serverless architecture and autoscaling truly proved useful.
Tell us more about how Neon supported you!
Because we’re using Neon, we didn’t have to scramble to scale up manually or worry about provisioning additional resources; it all happened automatically. Neon provisions new databases very quickly, and they scale up just as fast, which was critical given how rapidly our load was climbing.
This whole experience made clear how important dynamic database autoscaling is, especially for applications where traffic can fluctuate wildly. We also love Neon’s branching, which lets us create isolated, on-demand environments that mirror production data without the overhead of duplicating it. That makes it easy to test and iterate on new features and schema changes quickly, so we ship faster and with fewer mistakes.
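For readers curious what that workflow looks like in practice, here is a rough sketch of creating a throwaway branch through Neon’s API, assuming the v2 branch-creation endpoint; the project ID and branch name are placeholders, and the exact request fields should be checked against the Neon API reference.

```python
import os
import requests

# Placeholders for illustration; substitute your own project ID and API key.
NEON_API_KEY = os.environ["NEON_API_KEY"]
PROJECT_ID = "my-project-id"

# Create an isolated branch that starts from the current state of the parent branch.
# Endpoint shape assumed from Neon's public v2 API; verify fields in the API reference.
resp = requests.post(
    f"https://console.neon.tech/api/v2/projects/{PROJECT_ID}/branches",
    headers={
        "Authorization": f"Bearer {NEON_API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "branch": {"name": "test-schema-change"},
        "endpoints": [{"type": "read_write"}],  # give the branch its own compute endpoint
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["branch"]["id"])
```

Run a migration or feature test against that branch, and delete it when you’re done; production stays untouched.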
To learn more about Neon and how it powers startups like Wordware, explore our case studies. Neon has a Free Plan – you can get started right away, no credit card required and no questions asked.