Location: Edinburgh, Scotland
Team: Edge AI Group, Analog Devices Inc. (ADI)
Analog Devices' Edge AI team is building the next generation of intelligent sensing systems that reshape how machines perceive and interact with the physical world. From deeply embedded algorithms to scalable AI applications, we work across the full stack to fuse advanced hardware with real-time intelligence. Our mission: bring AI to the Edge — where decisions meet the real world.
We are looking for a Principal Engineer with deep domain expertise in visual odometry, sensor fusion, SLAM, and AI-based perception to lead the development of real-world localization and mapping solutions. You will work at the intersection of embedded sensing, robotics, and machine learning, helping create systems that can understand, navigate, and adapt to their environments autonomously — in factories, warehouses, vehicles, and beyond.
You will join a high-impact team developing foundational components for ADI's next wave of perception-enabled solutions. This is a hands-on leadership role for someone who thrives on solving open-ended challenges, defining technical roadmaps, and mentoring other engineers.
Responsibilities
Lead the design and deployment of SLAM, VIO, and multi-sensor fusion systems that enable robust, real-time mapping and localization across diverse environments.
Drive algorithm development and evaluation for perception tasks, including pose estimation, depth reconstruction, loop closure, map optimization, and semantic understanding.
Architect AI pipelines that combine classical and learned methods (e.g., CNNs, transformers, foundation models) for environmental understanding at the edge.
Develop real-world and simulation-based datasets for benchmarking and validation; guide sensor selection and system integration.
Collaborate cross-functionally with embedded, hardware, and systems engineers to bring scalable solutions from research to production.
Stay current with the state of the art in robotics, SLAM, Edge AI, and self-supervised learning — and guide the team in adopting innovative technologies.
Contribute to the broader AI platform architecture, CI/CD pipelines, and MLOps practices.
Qualifications
10+ years of experience in robotics, computer vision, or AI, including 5+ years in SLAM, VO, or sensor fusion and 3+ years in a technical leadership role.
M.S. or Ph.D. in Robotics, Computer Science, Electrical Engineering, or related field.
Demonstrated expertise in at least one of the following:
Visual-inertial odometry (VIO)
Multi-sensor fusion (camera, LiDAR, IMU, encoders)
3D SLAM and mapping in dynamic environments
AI-based perception models for real-time localization
Proven track record of deploying perception algorithms in real-world systems (e.g., autonomous robots, drones, AR/VR, self-driving platforms).
Strong programming skills in Python and C++, with experience in frameworks like PyTorch, ROS, g2o, or Ceres.
Comfortable working with real-time systems, embedded platforms, or simulators such as Gazebo, Isaac Sim, or Unreal Engine.
Familiarity with MLOps and software engineering best practices: CI/CD, containerization (Docker), orchestration (Kubernetes), etc.
Excellent communicator and collaborator — capable of leading multi-disciplinary teams and influencing stakeholders across hardware, software, and business domains.
Bonus Experience (Nice to Have)
Experience with self-supervised learning, foundation models for robotics, or transformer-based perception architectures.
Familiarity with industrial domains: factory automation, autonomous mobile robots (AMRs), predictive maintenance, etc.
Prior contributions to open-source projects (e.g., OpenVINS, ORB-SLAM3, DROID-SLAM, RTAB-Map, Cartographer).
Understanding of sensor design, signal processing, or embedded AI hardware platforms.
Why Join ADI?
Join us in shaping the future of edge intelligence — where your work bridges the gap between sensing and understanding, and where you'll help machines not just gather data, but act meaningfully in the world.