When I first began studying Law at university in London, I did not imagine that I would one day be building artificial intelligence (AI) products. Yet, even during my undergraduate studies, I found myself drawn to the intersection of AI and law. My dissertation explored how the fair dealing exception in UK copyright law could be expanded to allow the training of large language models (LLMs) on UK data in ways that both protect rightsholders and promote innovation.
I am now the founder of Brilliant AI, a UK-based research and development company with a mission to build the most advanced AI agents for developers and a long-term vision of building artificial general intelligence (AGI). This journey has been both exhilarating and challenging. I have learned firsthand how difficult it is for a founder in the UK to compete at the frontier of generative AI.
This is my firsthand account of that journey—what I have seen, what I have learned, and what I believe the UK must do if it truly wants to produce its own OpenAI or Anthropic.
In November 2022, when ChatGPT was released, I was a second-year Law student and a self-taught programmer. The moment I interacted with GPT-3.5, I understood that my career path had changed. These models were not just tools—they were digital brains, capable of learning and processing information orders of magnitude faster than humans.
It was clear to me then, and remains clear today, that any country unable to produce these digital brains for itself risks being left behind economically, technologically and geopolitically. The requirements for building such systems can be boiled down to three things: data, compute and talent.
The United States has become the undisputed leader in frontier AI by bringing these three together at scale. OpenAI's models dominate benchmarks, and its consumer product, ChatGPT, has around 700 million weekly active users. Even the global Stargate project—a buildout of new data centres with projected costs in the $400-500 billion range—is a direct result of demand for OpenAI's models.
The UK certainly has the talent to play at this level. We have done it before. DeepMind, founded in London, pioneered reinforcement learning breakthroughs such as AlphaGo. For years, it was regarded as the leading AI lab globally until the launch of ChatGPT shifted the spotlight to OpenAI.
When I started Brilliant AI, I wanted to build frontier models straight away. But I quickly ran into a wall. The funding requirements were astronomical, and the UK ecosystem lacked mechanisms for supporting early-stage researchers who wanted to train models at scale. By contrast, US venture capital firms like Andreessen Horowitz provide grants to independent labs to support open-source model training. In the UK, there are no equivalent grassroots opportunities.
I pivoted to building AI products that relied on existing frontier models. Using GPT-4 and later reasoning models like OpenAI's o1-mini, I built Bril AI, an agent designed to help students learn more effectively. But this made me, like many other UK founders, an ‘AI taker’ rather than an ‘AI maker’.
I also launched LlamaCloud, a platform to help developers build with open-source models such as Llama 3 and Mistral. The idea was to create a sovereign alternative for UK developers. But I soon found that most developers and businesses wanted accuracy and reliability above all, and the open-source models still lagged behind proprietary US models in performance.
In short, I discovered the hard way that the UK has talent and ambition but lacks the structural support to turn those into frontier companies. These days, I am developing BrilliantCode, the world's most advanced autonomous AI software engineer, built for real-world, production-grade software engineering. It is currently powered by the GPT-5 family of models, but my goal is for future versions to run on coding models we have trained ourselves, much as Cursor and Windsurf in the US started with models from OpenAI and Anthropic and are now training their own models for use in their agentic IDEs.
Funding gaps. Training frontier models requires billions of pounds in capital expenditure. The UK government has invested around £1 billion in compute and AI research initiatives between 2022 and 2023. Compare this with OpenAI's projected spend of $500 billion by 2030. Without attracting external investments on this scale, UK startups are structurally disadvantaged.
Copyright uncertainty. The UK's copyright regime is restrictive compared to the US fair use doctrine. Investors hesitate to fund model training when there is uncertainty about whether training datasets could lead to litigation. The EU AI Act has compounded this by introducing compute thresholds for 'systemic risk' models, triggering costly transparency and compliance obligations.
Lack of sovereign models. While Mistral in France represents Europe's strongest open-source effort, its models remain behind the frontier. The UK currently has no widely adopted sovereign model. This leaves startups dependent on US labs or, increasingly, Chinese-trained models that may not align with British values.
Regulatory fragmentation. Meta's multimodal Llama models, for example, cannot be legally deployed in Europe due to licensing restrictions. UK founders face a fragmented compliance landscape that makes it hard to build scalable businesses.
Beyond funding and regulation, there is a more fundamental issue: the UK's data centre and energy infrastructure is simply not competitive for large-scale AI training workloads.
As Kao Data's recent research highlights, the UK has some of the highest industrial electricity prices in the G7. At current prices, 1GW of power across 12 months costs approximately £1.8 billion in the UK, compared to just £438 million in the United States. This pricing disparity alone makes the UK an unattractive location for the energy-intensive compute required to train frontier AI models.
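As a rough sanity check (my own back-of-the-envelope arithmetic, using only the annual totals quoted above, not additional figures from the Kao Data research), those costs imply per-kWh electricity rates of roughly 21p in the UK versus 5p in the US, a gap of about four times:

```python
# Back-of-the-envelope check of the electricity prices implied by the
# quoted annual costs for 1GW of continuous power.

HOURS_PER_YEAR = 365 * 24             # 8,760 hours
kwh_per_year = 1e6 * HOURS_PER_YEAR   # 1 GW = 1,000,000 kW, run for a year

uk_annual_cost = 1.8e9   # £1.8 billion (UK figure quoted above)
us_annual_cost = 438e6   # £438 million (US figure quoted above)

uk_price = uk_annual_cost / kwh_per_year  # ~£0.205 per kWh (~20.5p)
us_price = us_annual_cost / kwh_per_year  # ~£0.050 per kWh (~5p)

print(f"UK: £{uk_price:.3f}/kWh, US: £{us_price:.3f}/kWh, "
      f"ratio: {uk_annual_cost / us_annual_cost:.1f}x")
```

For a frontier training run drawing hundreds of megawatts for months, that fourfold gap in the unit price of energy translates directly into hundreds of millions of pounds of additional cost.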
Grid connection delays of up to 15 years in some regions further compound the problem, as does the lack of data centre eligibility for Energy Intensive Industries (EII) relief—despite data centres being designated as Critical National Infrastructure.
Encouragingly, there are signs of progress. Companies like NVIDIA, Microsoft, OpenAI, Nscale, and Kao Data are investing heavily in UK compute infrastructure; however, the success of these efforts depends on UK power pricing being tackled. Kao Data, in particular, has a proven track record deploying AI startup and hyperscale cloud infrastructure within the UK and understands the critical connection between sovereign compute capacity and AI sovereignty.
But compute infrastructure alone is not enough. The UK needs regulatory clarity around training data use, competitive energy pricing, a culture of risk-tolerant venture investment, and the ability to retain and attract top talent that currently gravitates toward Silicon Valley.
For now, my focus at Brilliant AI is on building advanced agents powered by existing frontier models. The goal is to create the best possible user experience, achieve adoption at scale, and then use that demand to justify training our own models in the future.
The roadmap is clear: build advanced reasoning models that can autonomously work on complex tasks, develop computer-use agents that can safely automate any digital workflow, and ultimately become a full-stack UK AI company that not only builds products but also trains and deploys world-class frontier models.
The UK has every reason to want a frontier AI company of its own. These digital brains will define the next era of human progress, from healthcare to defence to education. We cannot afford to be perpetual takers of someone else's models.
The ingredients of talent, ambition and early research excellence are well and truly here. But without bold investment, regulatory clarity, competitive energy pricing, and a long-term commitment to sovereignty, UK founders will continue to face the same hard reality I have faced: the path of least resistance is to build on US models.
If the UK is serious about producing its own OpenAI or Anthropic, then now is the time to act. Otherwise, we will remain takers of others' AI innovation, rather than makers shaping it.
*To hear more from Jennifer Umoke, please download her Critical Careers Podcast.