In the age of AI, the definition of great software architecture is being rewritten. While Large Language Models capture the headlines, the real challenge for architects lies in the infrastructure that surrounds them. It is no longer enough to simply "plug in" an LLM; designing for the next generation of software requires a fundamental rethink of how we manage context, ensure data safety, and evaluate non-deterministic systems. The shift from static applications to dynamic, autonomous agents demands a move from simple prompt engineering to robust Context Engineering.
The hurdles are significant: How do you prepare a data platform to serve the unique needs of Agentic Systems? How do you leverage Semantic Knowledge Layers—like Knowledge Graphs and Ontologies—to reduce hallucinations and give agents high-precision reasoning? As we move toward modular ecosystems, mastering standardized protocols like the Model Context Protocol (MCP) becomes essential for building maintainable and efficient agent-data integrations. Furthermore, the traditional testing playbook is being replaced by AI Evals, requiring new techniques to monitor and improve application behavior in the wild.
Join us for the Architectures in the Age of AI track, where we move beyond the hype to explore the actionable blueprints of the AI era. From mastering context to scaling production-grade evaluations, you will gain the insights and strategies needed to build reliable, safe, and sophisticated AI-driven systems. This is your chance to discover how to architect a future where AI doesn't just exist in your app—it excels.
From this track
The Right 300 Tokens Beat 100k Noisy Ones: The Architecture of Context Engineering
Wednesday Mar 18 / 10:35AM GMT
Your agent has 100k tokens of context. It still forgets what you told it two messages ago.
Patrick Debois
AI Product Engineer @Tessl, Co-Author of the "DevOps Handbook", Content Curator at AI Native Developer Community
Baruch Sadogursky
DevRel Team and Context Engineering Management @Tessl AI, Co-Author of #LiquidSoftware and #DevOps Tools for #Java Developers, Java Champion, Microsoft MVP
Beyond Benchmarks: How Evaluations Ensure Safety at Scale in LLM Applications
Wednesday Mar 18 / 11:45AM GMT
As LLM systems move from prototypes to production, the gap between benchmark performance and real-world reliability becomes impossible to ignore. Models that score well on benchmarks can still fail unpredictably when facing the complexity, ambiguity, and edge cases of real users.
Clara Matos
Director of Applied AI @Sword Health, Focused on Building and Scaling Machine Learning Systems
Building an AI-Ready, Global-Scale Data Platform
Wednesday Mar 18 / 01:35PM GMT
As organizations move from single-cloud setups to hybrid and multi-cloud strategies, they are under pressure to build data platforms that are both globally available and AI-ready.
George Peter Hantzaras
Engineering Director, Core Platforms @MongoDB, Open Source Ambassador, Published Author
Your Agent Sandbox Doesn't Know My Authz Model: A Standard-Shaped Hole
Wednesday Mar 18 / 02:45PM GMT
Sandboxes are the first line of defence for agentic systems: restrict the bash commands, filter the URLs, lock down the filesystem. But sandboxes operate on the syntax of requests, not the semantics of your authorization model.
Paul Carleton
Member of Technical Staff @Anthropic, Core Maintainer of MCP
Explicit Semantics for AI Applications: Ontologies in Practice
Wednesday Mar 18 / 03:55PM GMT
Modern AI applications struggle not because of a lack of models, but because meaning is implicit, fragmented, and brittle. In this talk, we’ll explore how making semantics explicit (using ontologies and knowledge graphs) changes how we design, build, and operate AI systems.
Jesus Barrasa
Field CTO for AI @Neo4j