AI Expo 2026 Day 2 Highlights: From Pilot Projects to Production
The second day of the combined AI & Big Data Expo and Digital Transformation Week in London showcased a clear shift in the AI landscape. After the initial excitement around generative models, enterprise leaders are now confronting the practical challenges of integrating these tools into existing systems. Sessions moved beyond the large language models themselves to the infrastructure needed to support them: data management, observability, and compliance.
Data Quality and Maturity Are Key to AI Success
Experts highlighted that the reliability of AI depends heavily on data quality. DP Indetkar from Northern Trust warned against letting AI become a “B-movie robot,” his shorthand for algorithms that fail because they are fed poor inputs. He stressed that organizations must develop analytics maturity before deploying AI solutions: without a solid data foundation, automated decision-making amplifies errors instead of reducing them.
Supporting this view, Eric Bobek from Just Eat explained how data and machine learning are guiding decisions at a global level. He argued that investing in AI layers is futile if the underlying data remains fragmented. Mohsen Ghasempour from Kingfisher added that transforming raw data into real-time insights is essential for retail and logistics firms: shrinking the delay between data collection and insight generation is what turns AI spending into a return on investment.
Scaling AI Safely in Regulated Industries
In sectors like finance, healthcare, and legal services, the tolerance for errors is extremely low. Pascal Hetzscholdt from Wiley emphasized that responsible AI in these fields must prioritize accuracy, clear attribution, and data integrity, and that enterprise systems in regulated industries need audit trails to meet compliance requirements. Opaque, black-box AI systems are considered too risky, as they could lead to regulatory fines or reputational damage.
Konstantina Kapetanidi from Visa discussed the challenges of building multilingual, scalable generative AI that can use tools such as databases or external APIs. These models increasingly act as agents rather than passive text generators, which introduces new security concerns. Parinita Kothari from Lloyds Banking Group highlighted the importance of ongoing oversight for AI systems, challenging the outdated “deploy-and-forget” mentality and advocating for continuous monitoring similar to traditional software infrastructure.
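The continuous monitoring Kothari described can be sketched in a few lines. The snippet below is a minimal illustration of the idea, not anything presented at the expo: a deployed model's spot-check quality scores are tracked over a rolling window, and a flag is raised when the rolling average degrades past a threshold. All class names, window sizes, and thresholds here are hypothetical.

```python
from collections import deque


class ModelMonitor:
    """Illustrative sketch of post-deployment model oversight:
    track a rolling quality metric and flag degradation.
    Names and thresholds are assumptions, not a real API."""

    def __init__(self, window: int = 100, threshold: float = 0.9):
        # Keep only the most recent `window` quality scores.
        self.scores: deque[float] = deque(maxlen=window)
        self.threshold = threshold

    def record(self, score: float) -> None:
        """Log one evaluation score in [0.0, 1.0], e.g. from a
        human spot-check or an automated regression test."""
        self.scores.append(score)

    def degraded(self) -> bool:
        """True when the rolling mean falls below the threshold,
        signalling the model needs review rather than silent operation."""
        if not self.scores:
            return False
        return sum(self.scores) / len(self.scores) < self.threshold
```

In a production system the same rolling check would typically feed an existing observability stack (dashboards, alerts, on-call paging) rather than a boolean flag, which is the "treat AI like traditional software infrastructure" point.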
Transforming Software Development with AI
AI is also revolutionizing how developers write code. A panel featuring representatives from Valae, Charles River Labs, and Knight Frank explored how AI copilots are changing software creation. While these tools can speed up coding, they also shift developers’ roles toward reviewing and designing architecture. This shift requires new skills and training to keep up with AI-assisted workflows.
A separate panel with experts from Microsoft, Lloyds, and Mastercard examined the current skills gap: many developers lack the experience needed to work effectively in an AI-enhanced environment. To close that gap, companies need to invest in training programs that prepare their workforce for AI-driven software development.
Overall, the second day of the expo made it clear that moving from experimental pilots to full-scale AI production requires more than just advanced models. Building reliable, compliant, and scalable AI systems depends on robust data management and continuous oversight. As AI becomes more integrated into enterprise operations, organizations must adapt their infrastructure, skills, and processes to ensure success.