Founding AI Research Engineer - Robot Learning

Origin

San Francisco, CA

JOB DETAILS
SALARY
$100,000–$300,000
SKILLS
Algorithms, Artificial Intelligence (AI), CUDA (Compute Unified Device Architecture), Construction, Data Collection, Debugging, GPU (Graphics Processing Unit), Imitation Learning, Manufacturing, Open Source, Prototyping, Python, Robotics, Simulation, TensorRT
LOCATION
San Francisco, CA
POSTED
30+ days ago

About Origin

Origin is building Physical AI for the built world, starting with autonomous robots for interior construction. We are building Construction Action Models, which allow our modular robots to learn, adapt, and work on unstructured construction job sites.

Our robots are already deployed on live sites in New York City, helping accelerate schedules for large-scale commercial projects while improving safety and predictability on the job site. Backed by Tier-1 investors, Origin is working to close the gap between America’s surging demand for housing, data centers, and manufacturing infrastructure, and the construction industry’s growing labor shortage.

The Role

Our system runs a Multi-Agent Action Expert architecture: classical precision algorithms orchestrated alongside learned policies. Your job is to systematically expand the learned components while keeping the system production-safe. You will own the full lifecycle of learned components on OG-1, from data collection and model training through edge deployment on Jetson AGX Orin. Every research project has a deployment milestone. This is not a lab position.

What You Will Do

  • Train and deploy VLA models for contact-rich manipulation using our imitation learning infrastructure.
  • Build the data flywheel: teleoperation pipelines (GELLO, SpaceMouse, VR), DAgger-style online correction, demonstration curation.
  • Research and prototype world models for surface state prediction, spray dynamics, and anomaly detection.
  • Design offline evaluation metrics that predict real-world finishing quality before deployment.
  • Optimize models for edge: TensorRT compilation, latency profiling, memory budgeting on dual Jetson AGX Orin.
  • Design the interface where learned policies propose actions and deterministic safety layers enforce constraints.

Requirements

  • BS/MS/PhD in CS, Robotics, ML, or equivalent experience shipping learned systems on physical robots.
  • Strong Python and PyTorch; comfort modifying research codebases (you'll work directly with open-source VLA implementations).
  • Experience in at least two of: imitation learning, RL, vision-language models, robot learning from demonstration, sim-to-real.
  • Track record deploying ML on real hardware: not just training to convergence, but debugging why the policy fails on the actual robot.
  • Working knowledge of ROS2 or equivalent robotics middleware.
  • Experience with simulation platforms such as NVIDIA Isaac Sim.
  • GPU profiling and optimization (TensorRT, ONNX, CUDA); you understand why 200ms policy latency kills contact control.

Strong Plus

  • Hands-on with VLA architectures (π0/π0.5, OpenVLA, RT-2, Octo) or foundation model fine-tuning for robotics.
  • Teleoperation data collection and DAgger/HG-DAgger pipelines.
  • World model architectures (DreamerV3, V-JEPA, latent dynamics models).
  • Construction, manufacturing, or contact-rich industrial domains.
  • Publications at CoRL, RSS, ICRA, or NeurIPS are valued, but equivalent shipped work counts.
