
Mar 9, 2026
MoDE-VLA
Sharp's vision-language-action (VLA) robotics model, designed for contact-rich, bimanual manipulation tasks using vision, language, force, and touch.
Last updated: Mar 12, 2026
Modalities

Input:
- Text
- Robot vision
- Robot sensors
- Robot state data

Output:
- Robot actions
- Robot commands
- Manipulator control
- Motion trajectories
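The input and output modalities above imply a control-loop interface: an observation bundling text, vision, and sensor data goes in, and a chunk of manipulator actions comes out. The sketch below illustrates that contract with hypothetical types; every name here is illustrative (Sharp's actual MoDE-VLA API is not specified in this document), and the policy is a placeholder that holds the current pose rather than real model inference.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical types sketching the model-card I/O contract above.
# All names are illustrative assumptions, not Sharp's real API.

@dataclass
class Observation:
    instruction: str                  # Text input
    camera_frames: List[bytes]        # Robot vision
    force_torque: List[float]         # Robot sensors (force/touch)
    joint_positions: List[float]      # Robot state data

@dataclass
class ActionChunk:
    joint_targets: List[List[float]]  # Motion trajectories (per-step joint targets)
    gripper_commands: List[float]     # Manipulator control

def control_step(obs: Observation, n_steps: int = 4) -> ActionChunk:
    """Placeholder policy: hold the current pose for n_steps.
    A real VLA policy would replace this with model inference."""
    return ActionChunk(
        joint_targets=[list(obs.joint_positions) for _ in range(n_steps)],
        gripper_commands=[0.0] * n_steps,
    )

obs = Observation("wipe the table", [b""], [0.0] * 6, [0.1] * 7)
chunk = control_step(obs)
print(len(chunk.joint_targets))  # 4
```

In practice such policies emit short action chunks at a fixed control rate, re-querying the model as new observations arrive.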
Capabilities
- Reasoning
- Planning
- Image understanding
- Multimodal understanding