At Devoxx Belgium speakers discussed three technical themes: the current status and performance improvements around Maven/Java runtimes; the realities, limits and computational cost of generative AI (including the simple math behind neural nets); and how graph databases, vector embeddings and agent/MCP-based architectures improve retrieval-augmented generation and production deployment of AI applications.
Speakers at Devoxx Belgium compared several agentic AI coding tools using the same prompt to show how model quality, tooling, and prompt/methodology affect results. They discussed mitigation strategies (spec-driven and test-driven approaches), reducing hallucinations by supplying authoritative docs via Context7/MCP or spec registries, and operational concerns like guardrails, observability, and local vs hosted inference. Java-focused tooling (LangChain4j, Quarkus, Project Panama/TornadoVM) and enterprise integration were highlighted as practical paths to production-ready agentic systems.
Robin Marx demonstrates how different browsers (Chrome, Safari, Firefox) load and prioritize page resources, how HTTP/2/3 priority signals and server behavior affect ordering, and how heuristics like Chrome’s two-phase loading and fetchpriority/preload interact with metrics such as LCP. He shows that servers often ignore priority signals, causing divergent page-load behavior, and outlines developer mitigations and the importance of cross-browser measurement.
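The fetchpriority attribute and preload links the talk refers to are standard web-platform hints; a minimal illustrative fragment (file paths are hypothetical) might look like:

```html
<!-- Illustrative only: hint the browser that the LCP hero image is critical
     and that a needed font should be fetched early, while a below-the-fold
     image is deprioritized. -->
<link rel="preload" href="/fonts/inter.woff2" as="font" type="font/woff2" crossorigin>
<img src="/hero.jpg" fetchpriority="high" alt="Hero banner">
<img src="/footer-logo.png" fetchpriority="low" loading="lazy" alt="Logo">
```

As the talk notes, these hints only shape the priority signals the browser emits; whether the server honors them is a separate question worth measuring per browser.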
Barry van Someren demonstrates using Claude (an LLM coding/ops agent) to perform a wide range of DevOps tasks: provisioning VMs, deploying a sample app (Pet Clinic), configuring TLS and databases, generating test data, analyzing heap dumps, and even automating hardware. He highlights safety guardrails (plan mode, backups, console access, limited credentials), shows live demos and failure modes, and recommends initial use-cases like log analysis and one-off automation.
Neal Ford presents "Architecture as Code": expressing architectural intent via a lightweight ADL and executable architectural fitness functions (monitors, assertions, scripts) to provide fast feedback across implementation, infrastructure, data, team and enterprise intersections. He shows examples (ArcUnit-style checks, data-consistency hashes across databases, dependency constraints in monorepos), and explains how generative AI and MCP agents can translate ADL into concrete fitness functions to scale governance and reduce brittleness.
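The executable fitness functions Ford describes are typically written with a library such as ArchUnit; the sketch below shows the underlying idea with no dependencies — architectural intent encoded as data, checked by a build-failing assertion. Module names and the dependency map are hypothetical; a real project would derive them from bytecode analysis.

```java
import java.util.Map;
import java.util.Set;

// Minimal sketch of an executable architectural fitness function.
// Hypothetical module names; real checks would use ArchUnit or jdeps
// to observe actual dependencies.
public class LayeringFitnessFunction {
    // Observed dependencies: module -> modules it depends on.
    static final Map<String, Set<String>> OBSERVED = Map.of(
            "web",     Set.of("service"),
            "service", Set.of("domain"),
            "domain",  Set.of());

    // Architectural intent: which dependencies are allowed.
    static final Map<String, Set<String>> ALLOWED = Map.of(
            "web",     Set.of("service"),
            "service", Set.of("domain"),
            "domain",  Set.of());

    static boolean check() {
        return OBSERVED.entrySet().stream()
                .allMatch(e -> ALLOWED.getOrDefault(e.getKey(), Set.of())
                                      .containsAll(e.getValue()));
    }

    public static void main(String[] args) {
        // Fails the build (non-zero exit) when intent is violated.
        if (!check()) {
            System.err.println("Architecture violation: disallowed dependency");
            System.exit(1);
        }
        System.out.println("architecture ok");
    }
}
```

Run as part of CI, such a check gives the fast feedback loop the talk emphasizes: governance as an assertion, not a document.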
Alex Bunardzic presents Test-Driven Navigation (TDN): a TDD-inspired workflow for working with AI code generators where the developer supplies failing unit tests as precise prompts and an LLM/agent implements code to make those tests pass. He demos the approach with JavaScript tests and an LLM (Claude), uses mutation testing to validate quality, and argues TDN helps discover domain rules, reduce accidental complexity, and keep systems maintainable as AI becomes a coding partner.
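Bunardzic's demo uses JavaScript; a language-neutral sketch of the TDN idea in Java might look like the following, where the executable expectations are the "prompt" and the implementation is what the agent produces to satisfy them. The discount rule is a hypothetical domain example, not from the talk.

```java
// Sketch of Test-Driven Navigation: spec() is handed to the AI agent as
// the prompt; discount() is what the agent implements until spec() passes.
// (Hypothetical domain rule, for illustration only.)
public class TdnSketch {
    // The "prompt": precise, executable expectations.
    static void spec() {
        assert discount(100.0, 0)  == 100.0 : "no discount for small orders";
        assert discount(100.0, 10) == 95.0  : "5% off from 10 items";
        assert discount(100.0, 50) == 90.0  : "10% off from 50 items";
    }

    // The implementation the agent generates to make the tests pass.
    static double discount(double total, int items) {
        if (items >= 50) return total * 0.90;
        if (items >= 10) return total * 0.95;
        return total;
    }

    public static void main(String[] args) {
        spec(); // run with -ea so assertions are enabled
        System.out.println("all expectations pass");
    }
}
```

Mutation testing, as in the talk, would then deliberately break discount() to verify the spec actually catches regressions.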
Roberto Cortez demonstrates Quarkus internals and how to build extensions. He explains Quarkus’s build-time augmentation model (moving work from runtime to build time) to reduce startup time and memory, enable hot reload, dev services and continuous testing, and support GraalVM native images. He details the extension architecture (deployment vs runtime modules, build steps/build items), jandex indexing, bytecode recorders for capturing build-time state, and shows demos (a Saiyan example and a DataFaker extension) including CDI producers, hot-reload of resource files, and GraalVM substitutions/reflective registrations to make libraries native-image compatible.
A Devoxx demo showing how to upgrade a Spring Web MVC matchmaking app to Spring Boot 4 / Spring Framework 7: modularized auto-configuration and starter changes, RestTemplate→RestClient migration, JSpecify null-safety support with build checks, new API versioning features, OpenTelemetry/Micrometer autoconfiguration for tracing and metrics, a new resilience/retry API, HTTP client/testing improvements, and a preview demo of Java structured concurrency to parallelize remote calls.
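The exact shape of Spring Framework 7's resilience API is not reproduced here; as an illustration of the pattern it covers, a dependency-free retry-with-backoff helper might look like this (all names hypothetical):

```java
import java.util.concurrent.Callable;

// Generic retry-with-exponential-backoff sketch. This is NOT Spring's
// actual resilience API, only an illustration of the behavior such an
// API provides declaratively.
public class RetrySketch {
    static <T> T retry(Callable<T> call, int maxAttempts, long initialBackoffMs)
            throws Exception {
        Exception last = null;
        long backoff = initialBackoffMs;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(backoff);
                    backoff *= 2; // exponential backoff between attempts
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        int[] calls = {0};
        // Simulated flaky remote call: fails twice, succeeds on attempt 3.
        String result = retry(() -> {
            if (++calls[0] < 3) throw new IllegalStateException("flaky");
            return "ok after " + calls[0] + " attempts";
        }, 5, 10);
        System.out.println(result); // prints "ok after 3 attempts"
    }
}
```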
Scott Sosna likens a messy codebase to a crime scene and walks through how external pressures (customers, cloud vendors, OSS), internal actors (sales, product, security, execs), and common developer anti-patterns (copy/paste, tribal knowledge, giant PRs, rushed fixes) accumulate technical debt. He gives real-world anecdotes of breakages and hacks and closes with practical advice for engineers: own your work, pick battles, cultivate allies, document, improve skills, and have an exit strategy.
Per Minborg's Devoxx talk is a deep technical dive into Java field immutability. It explains how final fields interact with the Java Memory Model and safe publication, describes historical issues (reflection/serialization allowed final-field mutation), shows how HotSpot decides which fields it can trust for optimizations (why records/value classes can be constant-folded but plain final fields sometimes cannot), and presents planned improvements: “final means final” enforcement, strict final initialization for safe construction (Valhalla-related), and a new lazy-constant API to support safe, at-most-once lazy initialization while preserving VM optimizations. The talk closes with trade-offs and guidance on when to use final, strict final, or lazy constants.
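The planned lazy-constant API is not final, so its exact shape is not reproduced here; the idioms it aims to subsume are well established and can be sketched with the stdlib alone — the class-holder idiom and double-checked locking, both giving safe, at-most-once initialization but (as the talk explains) not always constant-foldable by the VM:

```java
import java.util.function.Supplier;

// Today's idioms for at-most-once lazy initialization, which the planned
// lazy-constant API aims to replace while preserving VM optimizations.
public class LazyInitSketch {

    // 1) Class-holder idiom: the JVM guarantees HEAVY is computed once,
    //    on first access, with safe publication.
    static class Holder {
        static final String HEAVY = expensiveComputation();
    }

    // 2) Double-checked locking behind a Supplier, for non-static cases.
    static final class Memoized<T> implements Supplier<T> {
        private final Supplier<T> delegate;
        private volatile T value;
        Memoized(Supplier<T> delegate) { this.delegate = delegate; }
        public T get() {
            T v = value;
            if (v == null) {
                synchronized (this) {
                    if (value == null) value = delegate.get();
                    v = value;
                }
            }
            return v;
        }
    }

    static String expensiveComputation() { return "computed"; }

    public static void main(String[] args) {
        System.out.println(Holder.HEAVY);
        Memoized<String> lazy = new Memoized<>(LazyInitSketch::expensiveComputation);
        System.out.println(lazy.get());
    }
}
```

The volatile field in Memoized is exactly the kind of non-final state HotSpot cannot trust for constant folding, which motivates the new API.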
Live-coding talk demonstrating the "hive" pattern: turn a hexagonal monolith into a microservices-ready modular monolith by vertical slicing, enforcing ports & adapters (SPI/API), introducing anti-corruption layers, splitting data and databases per module, and using in-process adapters that can later be swapped for HTTP clients. The session shows a concrete Spring Boot + Flyway refactor extracting a "nuclear reactor" module into its own microservice and discusses metrics-driven extraction and modular testing practices.
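The in-process-adapter-swapped-for-HTTP idea at the heart of the refactor can be sketched in a few lines (type and method names here are illustrative, not from the talk's codebase):

```java
// Ports & adapters sketch from the "hive" refactor: callers depend only on
// the port; the in-process adapter is later swapped for an HTTP client
// without touching callers. (Names are hypothetical.)
public class HivePatternSketch {

    // API port exposed by the reactor module.
    interface ReactorStatusPort {
        String status(String reactorId);
    }

    // In-process adapter: a direct call while the module still lives
    // inside the modular monolith.
    static class InProcessReactorAdapter implements ReactorStatusPort {
        public String status(String reactorId) {
            return "reactor " + reactorId + ": NOMINAL";
        }
    }

    // Remote adapter: same port, backed by HTTP once the module is
    // extracted into its own microservice.
    static class HttpReactorAdapter implements ReactorStatusPort {
        private final String baseUrl;
        HttpReactorAdapter(String baseUrl) { this.baseUrl = baseUrl; }
        public String status(String reactorId) {
            // Real code would use java.net.http.HttpClient here.
            throw new UnsupportedOperationException(
                "GET " + baseUrl + "/reactors/" + reactorId + "/status");
        }
    }

    public static void main(String[] args) {
        ReactorStatusPort port = new InProcessReactorAdapter();
        System.out.println(port.status("r-42"));
    }
}
```

Because callers only see ReactorStatusPort, switching the wiring from the in-process to the HTTP adapter is a configuration change, which is what makes the modular monolith "microservices-ready".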
A Devoxx talk demonstrating Java Project Loom (virtual threads, structured concurrency, and scoped values) and how they work together in a Spring Boot web app. The speaker explains JVM thread behavior, shows code using StructuredTaskScope.fork/join and joiners for concurrent service calls, replaces ThreadLocal with ScopedValue for request IDs, and highlights benefits: clearer, safer concurrent code, cancellation propagation, better observability, and improved scalability with lightweight virtual threads.
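StructuredTaskScope and ScopedValue have been preview APIs across recent JDKs, so the sketch below sticks to stable Java 21 virtual threads to show the concurrency win the talk demonstrates — two simulated service calls running in parallel (method names are illustrative):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Concurrent "service calls" on virtual threads (stable since Java 21).
// StructuredTaskScope, shown in the talk as a preview API, layers scoped
// lifetimes and cancellation propagation on top of this basic pattern.
public class VirtualThreadsSketch {
    static String fetchUser() throws InterruptedException {
        Thread.sleep(50); // simulate a remote call
        return "user:alice";
    }
    static String fetchOrders() throws InterruptedException {
        Thread.sleep(50); // simulate a second, independent remote call
        return "orders:3";
    }

    public static void main(String[] args) throws Exception {
        try (ExecutorService exec = Executors.newVirtualThreadPerTaskExecutor()) {
            Future<String> user   = exec.submit(VirtualThreadsSketch::fetchUser);
            Future<String> orders = exec.submit(VirtualThreadsSketch::fetchOrders);
            // Both calls run concurrently; total latency ~ max, not sum.
            System.out.println(user.get() + " " + orders.get());
        }
    }
}
```

Each submitted task gets its own cheap virtual thread, which is what makes thread-per-request scale without pooling gymnastics.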
Practical, production-focused talk on LangChain4j by Lize Raes. Covers advanced RAG techniques (query transformers, routing, retrievers, re-ranking), tool-calling patterns (return-immediate, dynamic tool providers, AI services as tools), MCP server tradeoffs, guardrails/permissions, nondeterministic testing and observability (model snapshots, drift), and agentic architecture patterns with demos (HR scheduler and AI drug discovery). Emphasizes design, safety, and testing needed to run LLM-based apps reliably in production.
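LangChain4j ships real scoring models for the re-ranking step mentioned above; to make the idea concrete, here is a dependency-free toy sketch that re-ranks retrieved segments by cosine similarity to the query embedding (vectors and segment names are invented for illustration):

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;

// Toy re-ranking stage of a RAG pipeline: score retrieved segments against
// the query embedding, keep the top-k. (LangChain4j provides production
// re-rankers; the 2-d vectors here are purely illustrative.)
public class RerankSketch {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    static List<String> rerank(double[] query, Map<String, double[]> segments, int k) {
        return segments.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> cosine(query, e.getValue()))
                        .reversed())
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        double[] query = {1.0, 0.0};
        Map<String, double[]> segments = Map.of(
                "dosage guidelines", new double[]{0.9, 0.1},
                "office hours",      new double[]{0.1, 0.9});
        System.out.println(rerank(query, segments, 1)); // prints [dosage guidelines]
    }
}
```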
Mario Fusco applies behavioral-economics biases to software engineering, explaining how pattern recognition, anchoring, sunk-cost, hyperbolic discounting and framing lead teams to poor decisions (bad benchmarks, technical debt, delayed refactors). He offers mitigations—measure rigorously, use prototypes and TDD, rotate roles, make future costs visible, shorten feedback loops and encourage a culture that admits uncertainty.
Florian Enner presents engineering work on modular robotics controlled via Java and GraalVM native-image. He explains why the team uses Java (MATLAB integration, JavaFX UIs), how they compile Java to native shared libraries, and how they generate stable C APIs and multi-language bindings so C++/Rust/Python code can call into Java components. The talk covers real-time and latency considerations (jitter measurements on Windows/Mac/Linux/RTOS), networking choices (UDP, zero-copy protobuf), tooling (annotation processor to generate reflect-config, API JSON generation), and deployment to embedded and mobile platforms, with practical performance and safety observations.
Conference talk recounting a complex migration of a Kubernetes-based microservices platform between AWS and GCP. Covers landing-zone and cluster setup, CI/CD/YAML conversions, managed vs self-managed monitoring, PostgreSQL bi-directional replication (pglogical) and its caveats, object-store replication and DataSync/Storage Transfer pitfalls, messaging harmonization (SQS/Kafka/PubSub), DNS traffic-shifting strategies, and operational lessons (observability, resource sizing, CIDR/pod IP limits, and secrets).
A Devoxx talk about running and experimenting with LLMs locally: it surveys local model runtimes and UIs (Ollama, RamaLama/Podman AI Lab, LM Studio, etc.), explains model selection (Hugging Face landscape, families, quantization, licensing and safety), and demonstrates integrating local models into Java/Quarkus applications using MCP servers/clients and local code assistants (Continue, DevOps Genie). The speaker highlights trade-offs (performance, hardware, security) and gives practical tips for local AI development workflows.
Opening keynote at Devoxx Belgium 2025 featuring personal anecdotes about the speaker's early programming experiences, the origin of Devoxx for Kids, his career path, and a handoff to his father on stage.
Alex Gavrilescu demonstrates backlog.md, a git-backed CLI/TUI/web task manager designed to enable spec-driven development with AI agents (Codex). He describes his evolution from ad-hoc prompting to a deterministic process: store tasks as markdown with acceptance criteria and implementation plans, split features to fit a single context window, use explicit agent instructions, and enforce review checkpoints (task spec → implementation plan → code). The workflow and backlog.md tooling substantially improved his AI-agent task success rate while noting limitations (context windows, hallucinations, need for human-in-the-loop).
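A task file in the shape the talk describes might look like the fragment below; the exact fields and conventions backlog.md uses may differ, and the content is invented:

```markdown
<!-- Hypothetical task file in the spec-driven shape the talk describes;
     backlog.md's actual schema may differ. -->
# task-042: Add password reset endpoint

## Acceptance Criteria
- [ ] POST /auth/reset sends a single-use, time-limited token by email
- [ ] Token expires after 15 minutes
- [ ] Unit tests cover expired and reused tokens

## Implementation Plan
1. Add ResetToken entity and repository
2. Implement token issuance and validation service
3. Wire the endpoint and write tests
```

Keeping each task this small is what lets the spec, plan, and resulting diff fit into one agent context window for review.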
A Devoxx talk that explores how generative AI is disrupting developers' professional identities and offers a reflective ROS framework (What? So what? Now what?) to process that change. Speakers combine occupational science and practical advice — small experiments with AI tools, journaling, community and mentorship, and deliberate practice — and propose the 'AI shepherd' role to orchestrate AI in complex systems. The talk cites survey data and a Stanford study showing variable AI productivity gains and highlights AI limitations in legacy architectures.