Job Description
The ideal candidate will work with our Perception team to develop algorithms for the detection, recognition, tracking, and understanding of all relevant objects in the vehicle's surroundings. By combining data from multiple perception sensors, we aim to ensure the perception system produces a precise, temporally consistent representation of the environment.
Responsibilities:
- Develop sensor fusion algorithms for Autonomous Driving
- Develop 2D/3D object detection/tracking algorithms by fusing Lidar/Vision data
- Work with large datasets; curate tests and metrics to continuously iterate on and improve performance
Requirements:
- M.S. or Ph.D. in a related field (e.g., Signal Processing, Controls, Computer Science)
- 3+ years of experience developing multi-sensor fusion systems for AD/ADAS; familiarity with Camera/Radar/Lidar object detection and tracking
- Background in state estimation theory, e.g., linear/nonlinear filtering
- Understanding of data association and tracking algorithms using Bayesian filters (e.g., KF, EKF)
- Prior experience with deep learning models such as CNNs/R-CNNs
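For candidates unfamiliar with the Bayesian filtering mentioned above, the following is a minimal, illustrative sketch of a one-dimensional Kalman filter (predict/update cycle); the variable names and noise values are assumptions for illustration, not part of this role's actual codebase.

```python
# Minimal 1D Kalman filter sketch -- illustrative only; names and
# noise parameters (q, r) are assumed values, not from any real system.

def kf_predict(x, p, q):
    """Predict step: constant-state model, process noise variance q."""
    return x, p + q

def kf_update(x, p, z, r):
    """Update step: fuse measurement z with measurement noise variance r."""
    k = p / (p + r)          # Kalman gain
    x_new = x + k * (z - x)  # posterior mean
    p_new = (1 - k) * p      # posterior variance
    return x_new, p_new

# Track a stationary target from noisy measurements.
x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:
    x, p = kf_predict(x, p, q=0.01)
    x, p = kf_update(x, p, z, r=0.5)
```

The same predict/update structure underlies the multi-sensor trackers this role works on, with vectors and covariance matrices in place of scalars and an EKF linearization for nonlinear measurement models.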