Apple has developed a new method that uses machine learning to convert the raw point cloud data obtained through LiDAR arrays into the detection of 3D objects, including bicycles and pedestrians, without additional sensor data. The research was published in Cornell University’s arXiv open directory of scientific research.
The research paper gives some insight into the progress Apple has made on self-driving technology. Apple has already obtained a self-driving test permit from the California Department of Motor Vehicles, and its test car has been spotted on the roads.
The technology company has made considerable progress in machine learning and has published papers on its own blog focusing on this research, sharing its findings with the broader research community. The articles reveal how Apple’s researchers, including the authors of the new machine learning paper, Yin Zhou and Oncel Tuzel, created something called VoxelNet, which can extrapolate and infer objects from a collection of points captured by a LiDAR array. LiDAR works by emitting lasers at its surroundings and registering the reflected results, creating a high-resolution map of individual points.
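To give a rough sense of the idea, the first step of an approach like VoxelNet is to partition the raw point cloud into a 3D grid of voxels before any learning happens. The sketch below is an illustrative simplification, not Apple's implementation: the `voxelize` function and its voxel sizes are assumptions for demonstration, and the real network goes on to learn a feature encoding per voxel and detect objects from the resulting grid.

```python
import numpy as np

def voxelize(points, voxel_size=(0.4, 0.2, 0.2)):
    """Group 3D points (shape N x 3) into voxels by grid index.

    Returns a dict mapping a voxel index (ix, iy, iz) to the array of
    points that fall inside it. This mirrors only the grouping step;
    a VoxelNet-style model would then learn features for each voxel.
    """
    indices = np.floor(points / np.asarray(voxel_size)).astype(int)
    voxels = {}
    for idx, pt in zip(map(tuple, indices), points):
        voxels.setdefault(idx, []).append(pt)
    return {k: np.array(v) for k, v in voxels.items()}

# A toy "point cloud": two nearby points and one distant point.
cloud = np.array([
    [0.10, 0.05, 0.05],
    [0.15, 0.10, 0.08],
    [2.00, 1.00, 0.50],
])
voxels = voxelize(cloud)
print(len(voxels))  # the two nearby points share a voxel -> 2
```

Grouping points this way turns an unordered, variable-length point cloud into a regular grid that standard convolutional layers can process.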
Based on this research, LiDAR could become much more effective on its own in self-driving systems. Generally, LiDAR sensor data is combined with information from optical cameras, radar, and other sensors to create a complete picture and perform object detection. Relying on LiDAR alone with a high level of confidence could lead to production and computing efficiencies in actual self-driving cars in the long run.