Environmental Perception Engineer - Edge AI Specialist
At the forefront of edge intelligence, we're pioneering innovative sensing systems that revolutionize how machines interact with and understand the physical world. From embedded algorithms to scalable AI applications, our team works across the full stack to fuse advanced hardware with real-time intelligence.
We're seeking an exceptional Environmental Perception Engineer to lead the development of cutting-edge localization and mapping solutions. You'll be working at the intersection of robotics, computer vision, and machine learning, helping create systems that can navigate, adapt, and understand diverse environments autonomously.
As a key member of our high-impact team, you'll drive technical roadmaps, mentor engineers, and collaborate cross-functionally to bring scalable solutions from research to production. Key responsibilities include:
- Lead the design and deployment of SLAM, VIO, and multi-sensor fusion systems for robust, real-time mapping and localization.
- Drive algorithm development and evaluation for perception tasks, including pose estimation, depth reconstruction, loop closure, map optimization, and semantic understanding.
- Architect AI pipelines combining classical and learned methods for environmental understanding at the edge.
- Develop datasets for benchmarking and validation; guide sensor selection and system integration.
- Collaborate with engineers to bring scalable solutions from research to production.
- Stay current with advancements in robotics, SLAM, Edge AI, and self-supervised learning, guiding the team in adopting innovative technologies.
Key qualifications include:
- 10+ years of experience in robotics, computer vision, or AI, with 5+ years in SLAM, VIO, or sensor fusion, and 3+ years in a technical leadership role.
- M.S. or Ph.D. in Robotics, Computer Science, Electrical Engineering, or related field.
- Demonstrated expertise in visual-inertial odometry (VIO), multi-sensor fusion (camera, LiDAR, IMU, encoders), 3D SLAM and mapping in dynamic environments, and AI-based perception models for real-time localization, along with a proven track record of deploying perception algorithms in real-world systems.
- Strong programming skills in Python and C++, with experience in frameworks like PyTorch, ROS, g2o, or Ceres, and familiarity with real-time systems, embedded platforms, or simulators like Gazebo, Isaac Sim, or Unreal Engine.
- Experience with self-supervised learning, foundation models for robotics, or transformer-based perception architectures is a plus.
Join us in shaping the future of edge intelligence.