Oxford Semantic Technologies is an example of what Matt Clifford, chair of the Advanced Research and Invention Agency (ARIA), was thinking about when he drafted the proposals for the government’s 50-point AI opportunities action plan.
In 2017, three University of Oxford professors – Ian Horrocks, Boris Motik and Bernardo Cuenca Grau – formed Oxford Semantic Technologies to take to market a novel approach to data discovery based on knowledge representation and reasoning (KRR).
KRR is a branch of artificial intelligence (AI) that takes a logical, knowledge-based approach. Unlike machine learning, which finds statistical patterns in vast datasets, KRR aims to improve the accuracy of AI inference by drawing logical, explainable conclusions from data combined with expert knowledge.
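To make the distinction concrete, here is a minimal sketch of rule-based inference over a knowledge graph, in the general spirit of KRR. The facts, the rule and the names are invented for illustration; they do not reflect RDFox’s actual syntax or engine.

```python
# Facts stored as (subject, predicate, object) triples, as in a knowledge graph.
facts = {
    ("alice", "parent_of", "bob"),
    ("bob", "parent_of", "carol"),
}

def apply_grandparent_rule(triples):
    """Expert knowledge encoded as a rule: if X is a parent of Y
    and Y is a parent of Z, then X is a grandparent of Z."""
    derived = set()
    for (x, p1, y) in triples:
        for (y2, p2, z) in triples:
            if p1 == "parent_of" and p2 == "parent_of" and y == y2:
                derived.add((x, "grandparent_of", z))
    return derived

print(apply_grandparent_rule(facts))
# {('alice', 'grandparent_of', 'carol')}
```

Unlike a statistical model, every conclusion here can be traced back to explicit facts and an explicit rule, which is what makes the inference explainable.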
The company’s technology caught the attention of Samsung Electronics, which acquired the firm in July last year. Co-founder Horrocks was among the speakers at last month’s Galaxy Unpacked event, where Samsung unveiled its latest flagship smartphone, the Galaxy S25.
In a recent podcast interview with Computer Weekly, recorded after Galaxy Unpacked, Horrocks described the experience as “pretty amazing”, the culmination, he says, of many years of research.
The DeepSeek effect
What is interesting about the technology developed by Oxford Semantic Technologies is that it does not require vast amounts of compute to run AI.
“One of the reasons why Samsung was so excited about our knowledge graph system is the fact that it can run on the phone,” says Horrocks. “You can build it with a relatively small footprint and relatively small compute requirement.”
One of the benefits of using on-device AI, as Horrocks points out, is there is no need to move potentially sensitive personal data into the cloud. “You can do everything on your own device, so you’re in control. The AI on the phone can’t use what it can’t see and isn’t sharing your sensitive personal data,” he says.
The idea that AI does not require vast farms of hugely expensive high-performance compute is a departure from the approach generally accepted across the industry. Indeed, one direct effect of China’s DeepSeek has been to demonstrate to the AI world that AI can be done on the cheap.
That demonstration triggered turmoil in the financial markets, not least because server manufacturers and the likes of Nvidia have modelled their projected growth on exponentially rising demand for AI servers powered by the latest, and most expensive, generation of graphics processing units (GPUs).
When asked about DeepSeek, Horrocks says: “I still think it’s very interesting how it challenges the orthodox view that generative AI [GenAI] is just all about compute power for training and inference.”
In fact, anyone can download the open source distribution of DeepSeek and run it locally on a personal computer. For many users, though, the remarkably low price of $0.14 per million tokens to query the cloud-based version undercuts rival large language models (LLMs).
The US has banned the export of high-end chips from Nvidia to China in a bid to stifle Chinese AI research and development.
As more details of DeepSeek emerge, it is becoming apparent that its R1 model was built on “inferior” AI processor technology. Unlike the US AI firms, which have access to the latest Nvidia GPUs, DeepSeek has reportedly used Huawei Ascend 910C AI chips for inference.
According to Huawei, the Ascend 910C is slightly inferior to the Nvidia H100 at basic learning tasks, but the company claims the chip offers lower energy costs and higher operational efficiency than its more powerful rival. Yet even with that lower processing performance, benchmarks show DeepSeek’s models holding their own against OpenAI’s.
On-device AI
Irrespective of what political instruments are used to target DeepSeek – which, at the time of writing, is being restricted by a growing number of countries – its existence demonstrates that an LLM can run on relatively low-spec hardware. The model can be downloaded from an open source repository on GitHub, and there are small, distilled versions that can run entirely disconnected from a network on a PC or Mac.
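For readers who want to try this, the sketch below loads one of the small, openly released distilled R1 checkpoints with the Hugging Face transformers library. It assumes the published DeepSeek-R1-Distill-Qwen-1.5B model and should be treated as a starting point rather than a tuned setup; once the weights are cached locally, generation needs no network connection.

```python
# A minimal local-inference sketch using Hugging Face transformers.
# Assumes `pip install transformers torch` and that the model weights
# have been downloaded (after which this runs fully offline).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Explain what a knowledge graph is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt")

# Generation happens entirely on local hardware: no cloud API call,
# and no data leaves the machine.
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```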
But Horrocks and the team at Oxford Semantic Technologies have been able to get AI running on even smaller devices. RDFox, the company’s knowledge graph and reasoning engine, has a very small footprint in terms of processing requirements – small enough, as Horrocks notes, to run directly on the phone.
He adds that one of the challenges the team was able to tackle was how to achieve what he calls “logical reasoning” in a small footprint, work that ultimately led to the RDFox product.
“I was working with a brilliant colleague at Oxford called Boris Motik, who had this idea to address this problem using a combination of modern computer architecture with some very clever, novel data structures and algorithms,” he adds.
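The technique RDFox is best known for is materialisation: applying rules to the stored facts repeatedly until no new facts can be derived, with carefully indexed data structures keeping each step fast. The toy sketch below illustrates that general idea only; the rule format and index here are invented and say nothing about RDFox’s actual, highly optimised internals.

```python
from collections import defaultdict

# Toy rule: reports_to(X, Y) and reports_to(Y, Z) => reports_to(X, Z)
facts = {("ana", "reports_to", "ben"), ("ben", "reports_to", "cara")}

def materialise(triples):
    """Forward-chain the transitivity rule to a fixpoint."""
    triples = set(triples)
    changed = True
    while changed:
        changed = False
        # Index each subject's direct targets so the join below is a
        # lookup rather than a full scan of every pair of triples.
        targets = defaultdict(set)
        for s, _, o in triples:
            targets[s].add(o)
        for s, _, y in list(triples):
            for z in targets.get(y, ()):
                fact = (s, "reports_to", z)
                if fact not in triples:
                    triples.add(fact)
                    changed = True
    return triples

print(materialise(facts))
# Adds ('ana', 'reports_to', 'cara'): every derived fact traces back
# to explicit data and an explicit rule.
```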
According to Horrocks, the main benefit – which RDFox shares with DeepSeek – is that potentially sensitive personal data never needs to be uploaded to the public cloud for processing: everything runs on the user’s own device, and the user controls what the AI can and cannot see.
It is too early to tell how significant on-device AI will become. Already it is delivering enhanced and richer internet search results, but this only scratches the surface of what becomes possible once millions of people have AI on a device they carry everywhere. The Now Brief feature, which on the Galaxy S25 summarises the day’s activities, is one such capability people may find useful.
“One of the things that’s important is that it integrates across multiple devices,” says Horrocks. “So, when you go to the gym or you’re walking the dog, maybe you have a wearable device, the device’s AI can learn a user’s daily routine.”
And, as Horrocks points out, this represents a big opportunity for Samsung.