Engineering with AI is no longer about proving capability. It is about ensuring reliability. While LLMs offer immense potential, their non-deterministic nature creates a significant gap between a successful PoC and an enterprise-grade application.
This track focuses on the engineering fundamentals required to bridge that gap. We move past the hype to explore the tools, techniques, and architectural patterns needed to design, build, and maintain scalable AI-native systems.
What you will learn
- Production Patterns for AI Agents: Practical methods to move agents from promising automation to reliable enterprise tools.
- Evaluation & Assessment Frameworks: How to measure and validate non-deterministic systems in production environments.
- Scaling AI-Native Architecture: Blueprints for integrating AI into the full software lifecycle without compromising system stability.
- Strategic Investment Guidance: Frameworks for technical leaders to decide when, where, and how to invest in emerging AI technologies.
Why this matters now
Product roadmaps are rapidly filling with AI-assisted features, but delivery velocity is often throttled by concerns over safety and reliability. This track provides practitioner-led patterns to de-risk your AI implementations and turn experimental models into durable, production-ready systems.
From this track
Reliable Retrieval for Production AI Systems
Tuesday Mar 17 / 10:35AM GMT
Search is central to many AI systems. Everyone is building RAG and agents right now, but few are building reliable retrieval systems.
Lan Chu
AI Tech Lead and Senior Data Scientist
Rewriting All of Spotify's Code Base, All the Time
Tuesday Mar 17 / 11:45AM GMT
We don't need LLMs to write new code; we need them to clean up the mess we already made. In mature organizations, engineers must maintain and migrate existing codebases, constantly balancing new feature development with endless software upkeep.
Jo Kelly-Fenton
Engineer @Spotify
Aleksandar Mitic
Software Engineer @Spotify
Refreshing Stale Code Intelligence
Tuesday Mar 17 / 01:35PM GMT
Coding models are helping software developers move faster than ever, but weirdly, the models themselves are not keeping up. They are trained on months-old snapshots of open source code. They have never seen your internal codebase, let alone the code you wrote yesterday.
Jeff Smith
CEO & Co-Founder @Neoteny AI, AI Engineer, Researcher, Author, Ex-Meta/FAIR
Beyond Context Windows: Building Cognitive Memory for AI Agents
Tuesday Mar 17 / 02:45PM GMT
AI agents are rapidly changing how users interact with software, yet most agentic systems today operate with little to no intelligent memory, relying instead on brittle context-window heuristics or short-term state.
Karthik Ramgopal
Distinguished Engineer & Tech Lead of the Product Engineering Team @LinkedIn, 15+ Years of Experience in Full-Stack Software Development
Building an AI Gateway Without Frameworks: One Platform, Many Agents
Tuesday Mar 17 / 03:55PM GMT
Early AI integrations often start small: wrap an inference API, add a prompt, ship a feature. At Zoox, that approach grew into Cortex, a production AI gateway supporting multiple model providers, multiple modalities, and agentic workflows with dozens of tools, serving over 100 internal clients.
Amit Navindgi
Staff Software Engineer @Zoox
Async Agents in Production: Failure Modes and Fixes
Tuesday Mar 17 / 05:05PM GMT
As models improve, we are starting to build long-running, asynchronous agents such as deep research agents and browser agents that can execute multi-step workflows autonomously. These systems unlock new use cases, but they fail in ways that short-lived agents do not.
Meryem Arik
Co-Founder and CEO @Doubleword (Previously TitanML), Recognized as a Technology Leader in Forbes 30 Under 30, Recovering Physicist