– Improved the Object Detection network for photon count video streams and retrained all parameters with the latest auto-labeled datasets (with special emphasis on low-visibility scenarios). Improved the architecture for better accuracy and latency, higher recall for far-away vehicles, 20% lower velocity error for crossing vehicles, and 20% better VRU precision.
– Converted the VRU Velocity network to a two-stage network, which reduced latency and improved crossing-pedestrian velocity error by 6%.
– Converted the Non-VRU Attributes network to a two-stage network, which reduced latency, reduced incorrect lane assignments for crossing vehicles by 45%, and reduced incorrect parked predictions by 15%.
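The two-stage conversion described in the last two items can be sketched as follows. This is a hypothetical, illustrative pipeline (the actual network internals are not public): a cheap first stage proposes objects, and the expensive attribute head runs only on those proposals rather than over the whole frame, which is where the latency saving comes from.

```python
# Sketch of a two-stage attribute pipeline (hypothetical, illustrative
# only -- not the actual production architecture).

def detect_objects(frame):
    """Stage 1: cheap proposal step; returns confident candidate objects."""
    return [region for region in frame if region["score"] > 0.5]

def attribute_head(crop):
    """Stage 2: expensive per-object head (lane assignment, parked flag)."""
    return {"lane": crop["x"] // 10, "parked": crop["speed"] == 0}

def two_stage(frame):
    # The attribute head runs only on confident proposals, so its cost
    # scales with object count rather than with frame size.
    return [attribute_head(c) for c in detect_objects(frame)]

frame = [
    {"score": 0.9, "x": 25, "speed": 0},
    {"score": 0.2, "x": 40, "speed": 5},   # filtered out by stage 1
    {"score": 0.7, "x": 31, "speed": 12},
]
print(two_stage(frame))
# [{'lane': 2, 'parked': True}, {'lane': 3, 'parked': False}]
```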
– Reformulated the autoregressive Vector Lanes grammar to improve lane precision by 9.2%, lane recall by 18.7%, and fork recall by 51.1%. Includes a full network update in which all components were retrained with 3.8 times the amount of data.
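An autoregressive lane "grammar" can be pictured as decoding a lane graph token by token, with a grammar table masking which token kinds may legally follow each other. The token set and decoder below are entirely hypothetical (the real grammar is not public); the sketch only shows why grammar constraints keep decoded lane graphs well-formed.

```python
# Toy autoregressive lane-grammar decoder (hypothetical token set).
# A grammar table restricts which tokens may follow each other, so the
# decoder can only emit well-formed lane sequences.

GRAMMAR = {
    "START":    {"NODE"},
    "NODE":     {"CONTINUE", "FORK", "END"},
    "CONTINUE": {"NODE"},
    "FORK":     {"NODE"},
}

def decode(scores_per_step):
    """Greedy decode: at each step pick the highest-scoring LEGAL token."""
    tokens, prev = [], "START"
    for scores in scores_per_step:
        legal = GRAMMAR.get(prev, set())
        token = max(legal, key=lambda t: scores.get(t, float("-inf")))
        tokens.append(token)
        if token == "END":
            break
        prev = token
    return tokens

steps = [
    {"END": 0.9, "NODE": 0.5},       # END illegal after START -> NODE
    {"NODE": 0.9, "CONTINUE": 0.4},  # NODE illegal after NODE -> CONTINUE
    {"NODE": 0.8},
    {"END": 0.7, "FORK": 0.2},
]
print(decode(steps))
# ['NODE', 'CONTINUE', 'NODE', 'END']
```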
– Added a new “road markings” module to the Vector Lanes neural network, which improves lane topology error at intersections by 38.9%.
– Improved the occupancy mesh to align with the road surface instead of the ego vehicle, for improved detection stability and better recall at hill crests.
– Reduced the generation time of candidate trajectories by about 80% and improved their smoothness by distilling the expensive trajectory optimization procedure into a lightweight planner neural network.
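The distillation idea above can be illustrated with a toy one-dimensional example: run the slow iterative optimizer offline to produce targets, then replace it at runtime with a cheap model that maps the same inputs straight to the optimized output. Everything here is a minimal sketch, not the production system; the closed-form "student" weights stand in for weights that would normally be fit by regression on many optimizer runs.

```python
# Toy illustration of distilling an iterative optimizer into a one-shot
# model (illustrative only; not the production planner).

def slow_optimizer(start, goal, iters=200, lr=0.1):
    """Iteratively refine a waypoint toward a compromise between staying
    near `start` and reaching `goal` (two quadratic cost terms)."""
    x = start
    for _ in range(iters):
        grad = 2 * (x - start) + 2 * (x - goal)  # gradient of the two costs
        x -= lr * grad
    return x  # converges to the midpoint (start + goal) / 2

def distilled(start, goal, w):
    """One multiply-add instead of hundreds of optimizer iterations."""
    return w[0] * start + w[1] * goal

# Here the imitation target is known in closed form (the midpoint), so the
# distilled weights are simply (0.5, 0.5); in practice they would be
# learned from the teacher's outputs.
w = (0.5, 0.5)

teacher = slow_optimizer(0.0, 10.0)
student = distilled(0.0, 10.0, w)
print(round(teacher, 6), student)   # 5.0 5.0
```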
– Improved decision-making for short-deadline lane changes around gores by more richly modeling the trade-off between going off-route and the trajectory required to drive through the gore region.
– Reduced false decelerations for pedestrians by using a better model of pedestrian kinematics.
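One way a better kinematics model cuts false decelerations: a naive trigger treats any nearby moving pedestrian as a potential crosser, while a model that extrapolates the pedestrian's actual velocity vector only brakes when the predicted path enters the lane. The geometry and thresholds below are hypothetical, purely to illustrate the distinction.

```python
# Sketch of naive vs. kinematic pedestrian crossing checks
# (hypothetical numbers and thresholds; illustrative only).

def will_cross_naive(px, py, speed):
    """Naive trigger: any nearby moving pedestrian might step into the lane."""
    return speed > 0 and abs(py) < 3.0

def will_cross_kinematic(px, py, vx, vy, horizon=3.0, lane_half_width=1.5):
    """Kinematic check: extrapolate the velocity vector and test whether
    the pedestrian's lateral position actually enters the lane in time."""
    y_future = py + vy * horizon
    entered = abs(y_future) <= lane_half_width
    crossed_over = (py > lane_half_width) != (y_future > lane_half_width)
    return entered or crossed_over

# Pedestrian 2.5 m to the side, walking PARALLEL to the road (vy = 0):
print(will_cross_naive(10.0, 2.5, 1.4))             # True  -> false slowdown
print(will_cross_kinematic(10.0, 2.5, -1.4, 0.0))   # False -> no decel
# Same pedestrian actually walking toward the lane (vy = -1.0):
print(will_cross_kinematic(10.0, 2.5, -1.4, -1.0))  # True  -> decelerate
```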
– Added control for more accurate object geometry as detected by the general occupancy network.
– Improved handling of vehicles that deviate from our desired path by better modeling their turning and lateral maneuvers, thus avoiding unnatural decelerations.
– Improved longitudinal control while offsetting around static obstacles by searching over feasible vehicle motion profiles.
– Improved longitudinal control for crossing vehicles in high-relative-speed scenarios by also taking relative acceleration into account in trajectory optimization.
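The value of including relative acceleration can be shown with a toy gap prediction: a velocity-only model can report a safe minimum gap while a constant-relative-acceleration model over the same horizon reveals the gap actually closing. The numbers below are hypothetical, chosen only to make the contrast visible.

```python
# Toy gap prediction for a crossing vehicle, with and without relative
# acceleration (hypothetical numbers; not the production optimizer).

def min_gap(gap0, rel_v, rel_a, horizon=5.0, dt=0.1):
    """Smallest predicted gap over the horizon under constant relative
    acceleration; negative rel_v / rel_a mean the gap is closing."""
    steps = round(horizon / dt)
    return min(
        gap0 + rel_v * (i * dt) + 0.5 * rel_a * (i * dt) ** 2
        for i in range(steps + 1)
    )

# Velocity-only prediction: a 30 m gap still looks safe after 5 s.
print(min_gap(30.0, -4.0, 0.0))    # 10.0
# With relative acceleration included, the same gap actually closes,
# so the optimizer should brake earlier.
print(min_gap(30.0, -4.0, -1.6))   # -10.0 -> collision course
```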
– Reduced the best-case object photon-to-control system latency by 26% through an adaptive scheduler, restructuring of trajectory selection, and parallelization of perception compute. This allows us to make quicker decisions and improves response time.
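The "parallelization of perception compute" part can be sketched with a simple thread pool: independent per-camera workloads that would take the sum of their runtimes sequentially instead complete in roughly the time of the slowest one. The camera names and timings here are hypothetical stand-ins for real inference tasks.

```python
# Sketch of parallelizing independent perception workloads (illustrative
# only; camera names and durations are hypothetical).
import time
from concurrent.futures import ThreadPoolExecutor

def run_camera(task):
    name, seconds = task
    time.sleep(seconds)   # stand-in for per-camera inference work
    return name

tasks = [("front", 0.2), ("left", 0.2), ("right", 0.2)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
    results = list(pool.map(run_camera, tasks))  # preserves task order
elapsed = time.perf_counter() - start

print(results)   # ['front', 'left', 'right']
# Three 0.2 s tasks run concurrently in ~0.2 s instead of 0.6 s sequential.
print(elapsed < 0.6)   # True
```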
– Introduced fundamental support for model-parallel neural network inference by exchanging intermediate tensors between SoCs, improving the consistency of road edge and road line predictions, through changes to the TRIP compiler, inference runtime, and inter-processor communication layer.
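Model-parallel inference of this kind splits one network across two compute units: the first half of the layers runs on one SoC, the intermediate tensor is shipped over a communication link, and the second half runs on the other. The structure below is a minimal toy sketch using threads and a queue as the "link"; the real TRIP compiler and runtime details are not public.

```python
# Toy model-parallel split: the first half of a network runs on "SoC A",
# the intermediate tensor is shipped to "SoC B" for the second half
# (hypothetical structure; illustrative only).
from queue import Queue
from threading import Thread

def soc_a(x, link):
    hidden = [2.0 * v for v in x]         # first half of the layers
    link.put(hidden)                      # exchange the intermediate tensor

def soc_b(link, out):
    hidden = link.get()                   # receive from the other SoC
    out.put([v + 1.0 for v in hidden])    # second half of the layers

link, out = Queue(), Queue()
a = Thread(target=soc_a, args=([1.0, 2.0, 3.0], link))
b = Thread(target=soc_b, args=(link, out))
a.start(); b.start()
a.join(); b.join()

result = out.get()
print(result)   # [3.0, 5.0, 7.0]
```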
– Improved control behavior at busy intersections by improving the association logic between traffic lights and intersections.
Click the “Record Video” button on the top bar interface to share your feedback. When clicked, your vehicle’s exterior cameras will share a VIN-linked Autopilot Snapshot with Tesla’s engineering team to help improve FSD. You will not be able to view the clip.