Ubitium tackles edge AI and more with new universal processor
As enterprises continue to explore ways to optimize how they handle workloads in the data center and at the edge, a new startup, Ubitium, has emerged from stealth with an interesting, cost-saving computing approach: universal processing.
Led by semiconductor industry veterans, the startup has developed a microprocessor architecture that consolidates all processing tasks – whether AI inference or general-purpose compute – into a single versatile chip.
This, Ubitium says, has the potential to transform how enterprises approach computing, saving them the hassle of relying on different types of processors and processor cores for specialized workloads. The company also announced $3.7 million in funding from multiple venture capital firms.
Ubitium said it is currently focused on developing universal chips that could optimize computing for edge or embedded devices, helping enterprises cut deployment costs by a factor of up to 100. However, it emphasized that the architecture is highly scalable and could also be used in data centers in the future.
It is going up against some established names in the edge AI compute space, such as Nvidia with its Jetson line of chips and Sima.AI with its Modalix family, showing how the race to create AI-specific processors is moving downmarket from large data centers to more discrete devices and workloads.
Why an all-in-one chip?
Today, when it comes to powering an edge or embedded system, organizations rely on system-on-chips (SoCs) that integrate multiple specialized processing units — CPUs for general tasks, GPUs for graphics and parallel processing, NPUs for accelerated AI workloads, DSPs for signal processing and FPGAs for customizable hardware functions. These units work together to ensure the device delivers the expected performance. Smartphones are a good example: they often pair NPUs with other processors to run AI workloads on-device efficiently while keeping power consumption low.
While the approach does the job, it comes at the expense of increased hardware and software complexity and higher manufacturing costs — making adoption difficult for enterprises. On top of that, when there’s a patchwork of components on the stack, underutilization of resources can become a major issue. Essentially, when the device is not running an AI function, the NPU dedicated to AI workloads just sits idle, taking up silicon area (and consuming energy).
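The underutilization argument above can be sketched with a toy model: in a heterogeneous SoC, each specialized unit is busy only when its own task type runs, while a universal processor reuses the same silicon for every task. The workload trace, task names and timings below are entirely hypothetical, chosen only to illustrate the idea — they are not Ubitium benchmarks.

```python
# Toy model of per-unit utilization in a heterogeneous SoC versus a
# single universal processor. All numbers are illustrative.

# A mixed workload trace: (task_type, duration in ms)
workload = [("control", 4), ("ai", 2), ("signal", 3), ("control", 5), ("ai", 6)]

def heterogeneous_utilization(trace):
    """Each specialized unit (CPU for control, NPU for AI, DSP for signal)
    runs only its own task type; the rest of the time it idles."""
    total = sum(duration for _, duration in trace)
    busy = {"control": 0, "ai": 0, "signal": 0}
    for kind, duration in trace:
        busy[kind] += duration
    # Fraction of total trace time each unit is actually active.
    return {unit: t / total for unit, t in busy.items()}

def universal_utilization(trace):
    """A universal processor reuses the same transistors for every task,
    so the one unit is busy for the whole trace."""
    return 1.0

print(heterogeneous_utilization(workload))
# e.g. {'control': 0.45, 'ai': 0.4, 'signal': 0.15} — each unit idle most of the time
print(universal_utilization(workload))   # 1.0
```

The point of the sketch is not the exact fractions but the structural difference: in the heterogeneous case the per-unit utilizations sum to one across three separate blocks of silicon, whereas the universal processor keeps a single block fully occupied.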
To address this, Martin Vorbach, who holds over 200 semiconductor patents licensed by major American chip companies, came up with the universal processing architecture. He spent 15 years developing the technology and eventually teamed up with CEO Hyun Shin Cho and former Intel exec Peter Weber to commercialize it.
At its core, Cho explained, the microprocessor architecture allows the same transistors on the chip to be reused for different processing tasks, enabling a single processor to dynamically adapt to different workloads — from the general-purpose computing required for simple control logic to massively parallel data-flow processing and AI inference.
“As we reuse the same transistors for various workloads, replacing an array of chips and reducing complexity, we lower the overall cost of the system. Depending on the baseline, this is a performance/cost ratio of 10x to 100x…The reuse of transistors for different workloads drastically reduces the overall transistor count in the processor — further saving energy and silicon area,” the CEO added.
Goal to make advanced computing accessible
With the homogeneous, workload-agnostic microprocessor architecture, Ubitium hopes to replace conventional processors – CPUs, NPUs, GPUs, DSPs and FPGAs – with a single, versatile chip. The consolidation (leading to simplified system design and lower costs) would make advanced computing more accessible, enabling faster development cycles for applications across consumer electronics, industrial automation, home automation, healthcare, automotive, space and defense.
The architecture is also fully compliant with RISC-V, the open instruction set architecture for processor development. This makes it easy to adopt for applications like IoT, human-machine interfaces and robotics.
“By lowering the barrier for high-performance compute deployment and AI capabilities, our technology allows IoT devices to process data locally and make intelligent decisions in real-time. This will also help solve interoperability issues by enabling devices to adapt and communicate seamlessly with diverse systems,” Cho explained.
At this stage, the company has 18 patents on the technology and an FPGA-based emulation prototype, and is moving to develop a portfolio of chips that vary in array size but share the same underlying universal architecture and software stack. It plans to launch a multi-project wafer prototype with a development kit in the coming months and ship the first edge computing chips to customers in 2026.
Ultimately, Cho said, the work will allow them to offer scalable computing solutions for different (and evolving) performance needs, from embedded devices to large-scale edge computing systems.
“Our workload-agnostic processor will also be able to adapt to new AI developments without hardware modifications. This will enable developers to implement the latest AI models on existing devices, reducing costs and complexity associated with hardware changes.… By separating the hardware and software layers, we aim to establish our processor as a standard computing platform that simplifies development and accelerates innovation across diverse industries,” he added.