A bicycle or pedestrian crossing the road, stopped vehicles, and debris are among the navigational challenges driverless autonomous vehicles must be able to sense and respond to, day or night, and in any weather condition. Software company Nodar is developing a solution that it says provides better target identification at greater distances than much of the technology on the market today.
Advanced 3D sensing is at the core of the future of autonomous vehicles, both passenger cars and commercial trucks. At Nodar, the founders say they have developed a camera-based solution that has more resolution and provides greater range and insight than lidar.
Even though Nodar currently is focused on working with OEMs and Tier 1 suppliers in the automotive industry, the company already sees how the same technology can benefit heavy-duty truck manufacturers and fleets.
Leaf Jiang, Nodar’s CEO and co-founder, started working with lidar at MIT’s Lincoln Laboratory more than 13 years ago. But he now thinks a two-camera solution provides better “vision” for autonomous driving, as well as for ADAS (advanced driver assistance systems).
Brad Rosen, Nodar COO and co-founder, says he expects a quarter of a billion self-driving Level 3 autonomous vehicles on the road over the next decade.
“At the heart of all of those vehicles is the perception system, and really at the core of that is always going to be vision systems. Cameras are going to be a part of this,” says Rosen. “So, we've doubled down on cameras, and we think that cameras are the way to deliver self-driving cars into the future.”
Hammerhead Software for Long-Range Vision
Nodar’s software solution, the Hammerhead product line, provides any-object detection with long-range, high-resolution 3D data. It offers long-range vision at more than 1,000 meters (3,280 feet) and can detect an object as small as 10 cm (3.9 inches) at 150 meters (492 feet), or something like an overturned motorcycle at 350 meters (1,148 feet).
Jiang says Nodar provides a combination of untethered stereo cameras, auto-calibration, and object detection with precise and reliable depth sensing and scene analysis — even at night and in low-visibility weather conditions.
One way to triangulate the distance to everything in a scene is to compare the left and right images. However, that triangulation is very sensitive to the cameras' relative alignment.
Rosen points out that there are two-camera systems that are already in the market, using Subaru’s EyeSight driver assist technology as an example. But the cameras are close together. Wider camera placement, he says, is more advantageous — but with that comes the challenge of keeping them perfectly aligned.
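The trade-off Rosen describes follows from the standard pinhole stereo model: depth error grows with the square of range and shrinks as the baseline widens. The sketch below illustrates this with assumed numbers (focal length, baselines, and sub-pixel matching error are illustrative, not Nodar's actual parameters).

```python
# Pinhole stereo: a point at depth Z shifts between the left and right
# images by disparity d = f * B / Z (pixels), where f is the focal
# length in pixels and B the baseline (camera separation, meters).
# All numeric values below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulated distance (m) to a point matched in both images."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_err_px: float = 0.25) -> float:
    """Depth uncertainty (m) from a sub-pixel matching error.

    Differentiating Z = f * B / d gives dZ = Z**2 / (f * B) * dd:
    error grows quadratically with range but shrinks linearly as the
    baseline widens -- the motivation for untethered, wide-set cameras.
    """
    return depth_m ** 2 * disparity_err_px / (focal_px * baseline_m)

f = 3000.0                 # focal length in pixels (assumed)
narrow, wide = 0.35, 1.2   # m: windshield-width pair vs. wide-set pair

print(depth_error(f, narrow, 150.0))  # ~5.36 m error at 150 m
print(depth_error(f, wide, 150.0))    # ~1.56 m error at 150 m
```

With the same cameras, widening the baseline from 0.35 m to 1.2 m cuts depth uncertainty at 150 m by the same ratio, which is why keeping widely spaced cameras aligned is worth the software effort.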
That is where Nodar comes into play.
"What we've done at Nodar is to untether the cameras [and] do the alignment of the cameras in software, which is enabled by these amazing Nvidia processors and the incredible camera technology that has evolved and our patented algorithms," Rosen explains. "We untethered the cameras and we can mount them pretty much anywhere on the car."
The software is compatible with off-the-shelf cameras that can be mounted on vehicles in a variety of configurations.
Nodar does not provide the cameras; instead, it offers a software solution that keeps the cameras properly aligned and measuring distances accurately. The recent launch of its GridDetect product completes the package and makes the point cloud of data more presentable to the end user.
Algorithms, not AI
GridDetect uses signal processing and algorithms instead of artificial intelligence.
Why is that method better than AI?
Jiang says an AI approach requires training a system: you collect a large amount of data, feed it in, and train your network.
Algorithms do not have to be trained. That, he says, means the system can better recognize unique objects, versus an AI system that has been trained to detect only specific objects.
“The reason why we do a signal processing approach is it's basically algorithms and we know how they work. It's not trained on data. It's based on physics,” Jiang says. “We know how it's going to behave, even if it sees an image it's never seen before. So, if there's something in the middle of the road, an inflatable duck, I don't know, we can deal with that.”
The output is a point cloud, similar to what lidar produces, he explains. Objects that are close are displayed in red, while objects farther away are displayed in blue. GridDetect also indicates whether objects are moving toward or away from the vehicle.
“Our whole safety thesis is that the farther you can see the more reaction time you can have,” Jiang says.
“Our system is 30 times more effective at night than lidar. So, we're getting two to four times the range, and 30 times the amount of data, as compared with the lidar at roughly five to 10 times lower cost,” Rosen adds.
Extended Long-Range Sensing for Autonomous Vehicles
Both Rosen and Jiang tout the extended range as a vital safety feature.
Rosen says GridDetect provides ultra-precise, long-range sensing that is able to detect a 10-cm object at 150 meters (492 feet), a 12-cm object at 172 meters (564 feet), and a tire at 250 meters (820 feet).
“These are kind of unheard-of measurements,” Rosen says.
Nodar rented an airport for testing to make sure the roadway, or pavement, was flat, then checked whether the system could pick up 10 different objects at varying distances. The smallest, a 12-cm target, was recognized by GridDetect at 172 meters (564 feet). GridDetect picked up larger objects, like traffic cones and cars, at 500 meters (1,640 feet).
UN's Challenging AV Standards for Europe
In June 2022, the United Nations adopted an amendment to Regulation No. 157, which set the standard for autonomous driving in certain traffic environments in Europe. Previously, the standard expectation was that sensing systems needed to detect objects at range while traveling at 60 km/h (37.3 mph). The amendment increased the speed to 130 km/h (80.8 mph).
“But the problem is, no one's been able to deliver that," Jiang says. "The reason being, once you're going that fast, you have to see at least 150 meters ahead to deal with any sort of obstacles that might be in your way.
“So, we've built the software and it allows us to have this bird's eye view to see up to 250 meters away for these type of objects and tells us where they are. And also tells us where they're going.”
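Jiang's 150-meter figure is consistent with a back-of-envelope stopping-distance calculation: reaction distance plus braking distance at highway speed. The sketch below uses assumed values for reaction time and deceleration (not figures from the article or the regulation):

```python
def sight_distance_needed(speed_kmh: float, reaction_s: float = 1.5,
                          decel_mps2: float = 6.0) -> float:
    """Distance (m) covered while reacting plus braking to a stop.

    reaction_s and decel_mps2 are assumed typical values for this
    sketch; real requirements depend on vehicle, road, and weather.
    """
    v = speed_kmh / 3.6                 # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * decel_mps2)

print(sight_distance_needed(60))    # ~48 m at the old 60 km/h standard
print(sight_distance_needed(130))   # ~163 m at the amended 130 km/h
```

At 130 km/h the total comes out above 150 m, illustrating why the amended regulation pushes sensing range well beyond what the 60 km/h standard demanded.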
Nodar for Heavy-Duty Trucks?
“We love the truck, because it's so big,” Jiang says. That's because when the cameras are placed farther apart, the detection range increases. A truck would allow a 3-meter baseline for spacing the cameras wider, he says.
“That allows us to take the numbers that we showed here and multiply by about 1.5, and our system gets better and better as the baseline gets longer,” Jiang explains.
Nodar is not overlooking the heavy-duty truck market, and Jiang points out the company’s board of directors includes some from the trucking industry, including Roger Nielsen, former CEO of Daimler Trucks North America.