Robots Atlas
May 4, 2026 · 5 min read · Eka Robotics · Vision-Force-Action · VFA model

Eka Robotics Exits Stealth with VFA Model — Betting on Force, Not Imitation

On April 29, 2026, Eka Robotics — a Cambridge, Massachusetts startup co-founded by MIT professor Pulkit Agrawal and former DeepMind researcher Tuomas Haarnoja — officially launched, unveiling its Vision-Force-Action (VFA) foundation model. Rather than teaching robots by imitating human movements, VFA relies on simulation and force sensing, aiming for superhuman capability, not merely human-level performance.

Force: The Native Language of Physics

The name "Eka" comes from Sanskrit — meaning "one" or "unity" — and from Finnish, where it means "first." The symbolism is intentional: Agrawal described the company's mission on X as building "intelligence for the physical world in its native language: forces."

The dominant paradigm in robotics in 2025–2026 is the Vision-Language-Action (VLA) model, used by Physical Intelligence in its π0 model and by a growing cohort of startups that connect text instructions to robot actions through visual perception. Eka argues that language in this context is a "helpful crutch" that bypasses fundamental physical reality: a robot must feel how an object moves, how its mass shifts, how its grip begins to fail, not merely see and verbally understand it.

VFA adds force as a third channel alongside vision and action. In practice, this means tactile sensors in the grippers and a computational model that processes force data in real time, enabling reactive corrections. The company designed its own tactile grippers — it does not rely on off-the-shelf components.
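
Eka has not published its control stack, but the reactive correction it describes can be illustrated with a textbook force-feedback loop: compare the measured fingertip force against a target and adjust the gripper proportionally. Everything below (the target force, the gain, the function name) is an illustrative assumption, not Eka's implementation.

```python
# Hypothetical sketch of a reactive force-feedback grip loop.
# GRIP_TARGET_N, K_P, and the simple proportional law are
# illustrative assumptions, not Eka's actual controller.

GRIP_TARGET_N = 2.0   # desired normal force at the fingertip, newtons
K_P = 0.5             # proportional gain: closure adjustment per newton of error

def grip_correction(measured_force_n: float) -> float:
    """Return a gripper aperture adjustment from one force reading.

    Positive output closes the gripper (grip too weak);
    negative output opens it (grip too strong).
    """
    error = GRIP_TARGET_N - measured_force_n
    return K_P * error

# One step of the loop: the sensor reports 1.2 N, below target,
# so the controller closes the gripper slightly.
adjustment = grip_correction(1.2)
```

In a real system this loop would run at the sensor's full rate, which is what makes mid-motion correction possible at all.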

Simulation Over Imitation

The key methodological distinction from competitors is the source of training data. Companies such as Rhoda AI and portions of Physical Intelligence's work rely on large datasets collected by humans wearing teleoperation gloves or captured by motion-capture cameras. Eka bets on simulation.

In a high-fidelity physics simulator, a robot can practice tasks for thousands of compute hours, autonomously generating solutions without human involvement. The company claims it has developed proprietary algorithms that effectively close the sim-to-real gap — one of the hardest problems in robotics over the past decade, in which behaviors learned in a virtual environment fail to transfer to real, unpredictable physical conditions.
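
The company has not disclosed how its proprietary algorithms close the sim-to-real gap. The standard published technique is domain randomization: resampling physical parameters every training episode so the policy cannot overfit to one simulator configuration. A minimal sketch, with parameter names and ranges chosen purely for illustration:

```python
import random

# Illustrative domain-randomization sketch: before each episode,
# physical parameters are resampled so a policy trained in simulation
# must cope with a whole distribution of physics, not one fixed setup.
# Parameter names and ranges are assumptions, not Eka's values.

PARAM_RANGES = {
    "object_mass_kg": (0.05, 2.0),
    "friction_coeff": (0.2, 1.2),
    "sensor_noise_std": (0.0, 0.05),
}

def sample_episode_physics(rng: random.Random) -> dict:
    """Draw one randomized physics configuration for a training episode."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

rng = random.Random(0)  # seeded for reproducible training runs
cfg = sample_episode_physics(rng)
```

A policy that succeeds across the whole sampled distribution is more likely to treat the real world as just one more draw from it.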

The analogy the startup reaches for is AlphaZero from Google DeepMind — the system that learned chess and Go at superhuman levels purely by playing against itself, without historical data. Haarnoja, co-creator of the Soft Actor-Critic (SAC) algorithm, a cornerstone of modern reinforcement learning in robotics, brings exactly this expertise to the project.
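
SAC itself is public work (Haarnoja et al., 2018). Its core idea is an entropy-regularized value target: the agent is rewarded not only for expected return but for keeping its policy stochastic, which sustains exploration. A toy estimate of the soft state value, with numbers chosen purely for illustration:

```python
# Core idea of Soft Actor-Critic: the value target adds an entropy
# bonus, rewarding policies that stay stochastic. This toy computes a
# Monte-Carlo estimate of the soft state value from sampled actions;
# the inputs are illustrative, not from any real robot task.

ALPHA = 0.2  # temperature: weight of the entropy bonus

def soft_value(q_values, log_probs, alpha=ALPHA):
    """Estimate V(s) = E_a[ Q(s,a) - alpha * log pi(a|s) ] from samples."""
    terms = [q - alpha * lp for q, lp in zip(q_values, log_probs)]
    return sum(terms) / len(terms)

# Two sampled actions with identical Q-values; the second is less
# probable (lower log-prob), so it earns a larger entropy bonus.
v = soft_value(q_values=[1.0, 1.0], log_probs=[-0.5, -2.0])
```

That entropy bonus is what lets a SAC agent keep trying unlikely actions in simulation, the same self-generated exploration the AlphaZero analogy points at.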

Demonstrations: What the Robot Actually Does

In videos published by the company (shown both slowed to 1/25 speed and in real time), three task categories are visible:

Precision assembly: The robot grasps a light bulb and screws it into a socket — a task requiring sub-millimeter precision and continuous force regulation. Too strong a grip shatters the bulb; too weak a grip drops it.

Improvisational sorting: The robot packs chicken nuggets into containers on a moving conveyor, including "tossing" items when the belt moves too fast. This represents a level of speed and adaptability previously reserved for human workers.

Tactile recovery: The robot detects that an object (a hairbrush, a plush keyring) is beginning to slip and corrects its grip in real time — without interrupting the motion.
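
The videos do not reveal how slip is detected. A common baseline uses the Coulomb friction cone: slip becomes imminent when the tangential (shear) force at the fingertip approaches the friction coefficient times the normal force. A sketch of that check, with the coefficient and safety margin as assumptions:

```python
# Illustrative incipient-slip check based on the Coulomb friction cone:
# an object starts to slip when tangential (shear) force approaches
# mu * normal force. MU and SLIP_MARGIN are assumed values, not
# anything Eka has published.

MU = 0.6              # assumed friction coefficient of the grasped object
SLIP_MARGIN = 0.8     # trigger a regrip at 80% of the friction limit

def slip_imminent(tangential_n: float, normal_n: float) -> bool:
    """True when shear load is close enough to the friction limit to regrip."""
    if normal_n <= 0.0:
        return True  # no contact force at all: the object is already going
    return tangential_n >= SLIP_MARGIN * MU * normal_n

# Shear of 1.0 N against 3.0 N of grip: the limit is 0.6 * 3.0 = 1.8 N
# and the trigger is 1.44 N, so no regrip is needed yet.
ok = slip_imminent(1.0, 3.0)
```

Triggering below the true friction limit is the point: the correction must start before the object actually moves, which is why it can happen without interrupting the motion.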

None of these tasks were demonstrated with specially prepared, uniform objects. The company emphasizes generalization: the model is designed to work on unknown objects in unknown environments.

Context: The "Physical AI" Battlefield in 2026

Eka enters a well-populated segment. Physical Intelligence has raised over $400 million and published the π0 and π0.7 models, demonstrating "compositional generalization" — the ability to combine learned skills in new configurations. Generalist AI, Sunday Robotics, and over a dozen other startups are similarly scaling imitation data or real-world reinforcement learning.

Eka distinguishes itself on three axes: an explicit counterposition to VLA (not "we extend VLA" but "we replace it with VFA"), exclusive reliance on simulation as a data source, and a declared aspiration to superhuman capability rather than human imitation.

The absence of funding information is a conspicuous gap. The startup disclosed neither investors nor round size. Given the founders' profiles — Agrawal runs the Improbable AI Lab at MIT; Haarnoja authored SAC, which underpins many commercial robotics systems — a seed round from a leading VC is highly probable, but unpublished.

Why This Matters

Eka Robotics is not simply another startup claiming to have "solved dexterity." Its differentiator is methodological, not merely rhetorical. The bet on simulation as the primary data source has profound scaling implications: the cost of compute hours falls year over year, while the cost of collecting an hour of teleoperation data remains flat or rises. If the sim-to-real gap has genuinely been closed, Eka can generate training data orders of magnitude more cheaply than competitors relying on human demonstrations.

Adding force as a third perceptual channel attacks a genuine bottleneck in robotics: most current systems handle "pick and place" tasks well with uniform objects but fail with variable physical properties — different weights, textures, elasticities. VFA, if it performs as the demos suggest, shifts the competence point from "identify and grasp" to "feel and adjust" — a fundamentally different class of problem.

Whether the startup delivers on its promise will become visible at the first commercial deployments, or at the next funding round with disclosed performance data.

What's Next?

Eka has not announced a commercialization timeline or industrial partners. Observers expect a funding announcement within 3–6 months.

The key test will be external benchmarks: whether VFA outperforms VLA systems on standardized manipulation and dexterity tasks.

The company is actively recruiting — job listings on Ashby cover robotics, ML, and simulation engineers.
