Artificial Intelligence and ESG are often discussed in sweeping, visionary terms. In practice, however, most organisations are navigating something far messier: fragmented data, nervous workforces, tightening budgets, shifting regulation and a growing gap between ambition and execution.
DelTra recently convened a peer-led roundtable on AI & Sustainability. The discussion was deliberately exploratory, focusing less on polished success stories and more on the realities leaders face as they attempt to balance innovation, responsibility and commercial pressure. What emerged was a refreshingly honest picture of where organisations really are on their AI and ESG journeys, and the tensions shaping the next phase of adoption.
Despite the global hype, most organisations represented at the table described AI adoption as tentative and tightly controlled. For many, progress has begun with low-risk, familiar tools, most notably Microsoft Copilot, rather than bespoke AI solutions or advanced automation.
The appeal is clear: Copilot sits within existing enterprise ecosystems and offers perceived data safety. A representative of a large European cinema group noted that their primary "win" was simply getting the executive team to take a small step forward. "I've managed to get them to realise that there was a bigger risk by them doing nothing than taking a very small step forward," he explained. "We now use Copilot...it’s in the safe Microsoft system that we already have...it kept our data safe".
However, this caution comes with trade-offs. Leaders acknowledged that AI use is often shallow and difficult to justify at scale due to licensing costs. A representative of a broadcast television channel mentioned they are closely monitoring usage data to decide "what level of Copilot we will actually continue to roll out," adding that "if there's not much to pick up on this, we'll probably roll it back just for terms of cost savings".
If there was one point of near-universal agreement, it was this: poor data is the biggest blocker to meaningful AI adoption. Across sectors, leaders described fragmented systems and legacy platforms that make the implementation of advanced AI nearly impossible today.
"We tested out a platform for the people team...but we just didn't have the data," one panellist admitted. "The quality of our data just wasn't there". This sentiment was echoed by others who are now doubling down on data consolidation over flashy AI pilots. Richard noted that his organisation's systems are "all fragmented" due to various mergers, making data the immediate priority: "If we're going to put money into anything, it's the data piece at the moment... getting our data well looked after, organised in a way that we can then step into AI".
This pragmatic mindset reflects a shift toward progress that is commercially defensible. Leaders are increasingly looking at automation as the "massive win" that doesn't necessarily require AI. As Archer put it, "I think people forget that's just as big a win as you don't need AI, automation will do half the stuff already".
While technology dominated much of the discussion, the human dimension of AI adoption surfaced repeatedly. In people-centric organisations, AI is still viewed with significant suspicion.
Archer highlighted a specific phenomenon he called "job-hugging," where employees stay in roles they might otherwise leave because they are "petrified of what's going on out there". In industries where people are the brand differentiator, there is vocal resistance: "There’s a lot of, 'you can't take our jobs away, and you can't automate what we do'".
At the same time, there is a disconnect between how employees use technology personally versus professionally. While younger workers may be "digital natives," there is a "freezing effect" in the workplace due to a lack of governance or clear permissions. One participant noted the irony of employees being "fascinated with AI" at home but "nervous" in a professional context where they fear for their expertise.
Few topics generated as much candour as the shifting status of ESG. While most organisations have invested in sustainability, many leaders described a noticeable cooling of momentum, with ESG often being reframed from a strategic differentiator to a "tick the box" compliance exercise.
A panellist shared a candid view of the global divide: "All they care about in America is... just ticking the box, what have we got to do to make legislation requirements and nothing else. They don't want to do anything green or sustainable at all". However, he noted that in Europe, the pressure remains high because "employing very young people, they want to know what our statement is on these things".
Rather than large-scale "green" programmes, ESG activity is increasingly driven by commercial necessity. Sustainability initiatives are often progressing because they reduce energy bills or improve asset efficiency. As the roundtable discussion progressed, it became clear that while the "S" and "G" (Social and Governance) are becoming highly regulated, the "E" (Environmental) is often driven by a "cost conversation".
When the conversation turned to "responsible AI," the tone became notably realistic. Most leaders acknowledged that while ethical frameworks and steering groups are important, they are often outpaced by real-world behaviour.
The EU AI Act featured prominently, with participants discussing the need for a "fair, governed, safe" way to implement technology. However, there was scepticism about the maturity of these frameworks. "We are highly regulated...so they have just put in place a steering group for AI...they were in early stages," one leader mentioned, adding that even with policies in place, it "doesn't stop people" from using personal tools like ChatGPT.
One of the most energising themes was the role of employee-led innovation. Rather than relying solely on external vendors, some organisations are giving employees the funding to build their own tools.
One participant described a "Dragon's Den"-style competition in which employee groups produce technology presentations for the global board. "The top three get the funding... and that's our technology, the people inside are doing it," they explained. This approach has led to practical wins, such as an AI tool that allows employees to simply "pick up our phone, click it, and then AI reads it and puts it in the expense system". This model not only reduces licensing costs but also boosts staff engagement by addressing real "pain points".
Perhaps the most reassuring takeaway from the roundtable was the admission that no one has it figured out. Despite headlines suggesting rapid, transformative AI adoption, most organisations are still in the testing phase. Archer noted that for more complex AI forecasting models, it would likely "take us a good 18 months...to set up and test," and they would need to run it alongside human teams "for a very long time".
There is pressure to move faster, but also a growing recognition that rushing ahead without foundations creates more risk than value.
What this roundtable revealed was not a lack of ambition, but a recalibration of realism. AI is moving fast, but organisations are moving carefully. ESG still matters, but its drivers are evolving toward commercial and employee-led needs.
For senior leaders, the challenge is to navigate the "grey space" where progress is incremental and human factors dominate. As DelTra continues to facilitate these peer-led discussions, our goal is to turn this honest dialogue into a credible, grounded transformation.
If you’d be interested in attending one of our future events, either as an attendee or a potential speaker, you can register your interest here.