Testing aerodynamic add-ons on a track allows testers to monitor and control for a number of variables such as wind, driver performance, equipment conditions and more. Photo: Richard Wood

Fleets wanting to invest in aerodynamic devices to save fuel are faced with a problem: Should they rely on manufacturers’ published test results? Is the answer the Environmental Protection Agency’s SmartWay verification process? Or should they conduct their own testing to determine how well these products perform?

Several large and sophisticated fleets tell HDT that they routinely discount manufacturers’ fuel-savings claims by 50%, then base their ROI expectations on the revised number.

When Bruce Stockton was the vice president of maintenance at Con-way Truckload, he was an aggressive tester of aero technologies. Not all of what he tested made its way into fleet service.

“Even if the manufacturer promised 5% or 6%, we looked at it and made our own assumptions,” he says. “We’d install several devices and then run some tests and make our final decisions based on our own data. I’ll say that a lot of product failed to meet expectations, but much of what we did eventually install was still something less than what the manufacturer had claimed. Fifty percent would not be an inaccurate number.”

Stockton is not alone. Equipment executives with several large fleets, who did not wish to be quoted, confirm that their in-house testing produced results that were usually about half of what the devices’ manufacturers had claimed.  

Real-world evaluations conducted by fleets will generally produce different results than tightly controlled tests because of the various uncontrolled conditions encountered out on the road. But theoretically speaking, tests such as those set up by the Society of Automotive Engineers, the Technology & Maintenance Council of the American Trucking Associations, and the EPA should provide a certain level of standardized comparison.

The average buyer of aerodynamic add-ons can be fooled by visual impressions. What may appear to the untrained eye to be an effective tool in reducing aerodynamic drag may not be very effective after all. Testing should help fleets address that issue, but test results can be misleading if you don’t understand things like the uncertainty factor – and can be over-reported or even fudged by knowledgeable experts.

Aerodynamicist Richard Wood, past chairman of the Society of Automotive Engineers’ Truck and Bus Aerodynamic and Fuel Economy Committee, claims there are items on the EPA’s SmartWay list that should not be there – though HDT has no data to confirm this is actually the case.

“There are several products on the EPA SmartWay verification site that – on aerodynamic fundamentals alone – simply don’t work,” he says. “EPA says they can’t do anything about it because the manufacturers supplied the agency with test data that says they do work. I have even done some computer modeling for them. I showed in one case that a particular device simply cannot cause the kind of improvement that it claimed it did. They are not valid products, but EPA will not take them off the list.”

Responding to that point and others, Jennifer Colaizzi, press officer for the EPA’s Office of Media Relations, says:

“SmartWay continues to receive strong feedback from fleets and manufacturers that the verification programs help fleets filter performance claims and find technologies that save them fuel.

“In most cases,” she continues, “for each description of a technology performing at a lower level than expected, there is another fleet that describes getting better than expected fuel savings. Manufacturers claiming inflated fuel savings levels above the level described on the SmartWay technology page are not verified by EPA. If EPA receives supportable data on a verified device achieving questionable performance, EPA will review the data and engage the manufacturer. After review with the manufacturer, EPA could pursue removing a product from the list.”

However, apparently no “under-performing” product has yet been excised from the list.

In the interests of full disclosure, Wood says he has no skin in the manufacturing and sales game, just an abiding interest in seeing good product put on the market that will benefit both the industry and the environment.

“I don’t manufacture anything and I never have,” he states. “I invented several aero technologies under grants from the State of Virginia, licensed them, and put them into SmartWay. My interest in this testing question as a taxpayer is that you have EPA doing bad engineering and science, or modifying what should be done to appease some test sites or manufacturers. It’s not serving industry or the public well.”

It should be noted that SmartWay does not conduct any of its own testing as part of its verification process. Test results are submitted by manufacturers in order to get their products onto the SmartWay-Verified list, an obvious marketing advantage.

“EPA’s SmartWay program conducts research on fuel-saving technologies to evaluate performance and test methods,” Colaizzi says. “Due to diversity of operating conditions and tractor-trailer combinations, EPA testing focuses on performance and evaluating potential sources of variability in operation and test methods.”

Although SmartWay is part of the EPA, it is strictly voluntary; its procedures do not go through the usual regulatory approval process.

Smaller devices, like this air dam on a Freightliner Cascadia, provide smaller percentages of improvement that can be difficult to detect in tests that are not carefully controlled. Photo: Jim Park

In the real world

Testing the effectiveness of an aerodynamic drag-reduction device is not as straightforward as testing the rolling resistance of a tire. To begin with, a number of factors can complicate the interpretation of the data, including the testing process itself.

There are four types of aerodynamic evaluation tools in use today:

  • Wind tunnels
  • Coast-down testing
  • Computational fluid dynamics (CFD)
  • On-road or track testing, usually done using an SAE J1321 test protocol.

We also see fleets doing in-service testing following various accepted test protocols, such as the Technology & Maintenance Council of the American Trucking Associations’ Recommended Practice 1102 (Type II), RP 1103 (Type III) or RP 1109 (Type IV), to evaluate products before making a big investment in such technology.

In-service testing can provide very usable results for a particular fleet’s operation, because the evaluation is done using its actual trucks, often on revenue runs. It is the real world. However, it may take several months to do a proper in-service evaluation.

Bob Wessels, a retired engineer and fuel economy testing expert with Caterpillar who now works independently as a consultant, says a host of factors can influence fleet evaluations, but their influence can be marginalized when the test is conducted over an extended period rather than a week or two.

“If you can run several trucks in side-by-side comparison for several months, the daily inconsistencies in wind and weather, fuel top-ups, driver skill, miles run, terrain, etc., diminish,” Wessels explains. “If you’re diligent in your data collection you’ll get a fair idea how a device will perform for you.”
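Wessels’ point can be illustrated with a back-of-the-envelope simulation. The Python sketch below uses invented numbers – a true 4% gain buried under 3% of day-to-day noise – to show why the scatter in a side-by-side comparison shrinks roughly with the square root of the number of days run. None of the figures come from an actual fleet test.

    import random
    import statistics

    def simulated_fleet_test(days, true_gain=0.04, daily_noise=0.03):
        """Simulate a side-by-side comparison: each day's measured
        test/control fuel-economy ratio is the true gain plus random noise
        standing in for wind, weather, fuel top-ups, driver skill, terrain."""
        ratios = [1.0 + true_gain + random.gauss(0.0, daily_noise) for _ in range(days)]
        mean_gain = (statistics.mean(ratios) - 1.0) * 100.0
        # The standard error of the mean falls with sqrt(days):
        # more days of data, less residual noise in the estimate.
        std_err = statistics.stdev(ratios) / days ** 0.5 * 100.0
        return mean_gain, std_err

    random.seed(1)
    for days in (10, 60, 180):  # two weeks vs. several months of operating days
        gain, err = simulated_fleet_test(days)
        print(f"{days:3d} days: estimated gain {gain:4.1f}% +/- {err:.1f}%")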

That opinion is shared by Marius-Dorin Surcel, technical leader of Performance Innovation Transport at FPInnovations, a consulting firm near Montreal, Canada, that tests and evaluates fuel-saving technologies for its fleet members and various industry sectors.

“Type III or Type IV, done correctly, will provide a useful result for that operation,” he says. “The longer you run an in-operation test, the more accurate you should be. But there’s also more opportunity for variation to creep into the test. Different drivers, different loads, etc. Technically, you should run the test for as long as possible, but at the same time, the shorter the better to reduce the introduction of variation.”

Fleets looking for validation that a chosen product will work in their applications can rest assured that when tested as above, the results will be valid and applicable to their operation – at least under similar equipment, load and environmental conditions.

Aerodynamic add-ons come in many sizes and shapes. Not all devices work as well as others. Precision testing helps to separate what might provide a good ROI from what might not. Photo: Jim Park

Variability and uncertainty

If you can’t wait several months to confirm a product’s effectiveness, you can always rely on the results of an SAE J1321 fuel economy test – or can you?

Some manufacturers base their testing on SAE’s J1321 protocol, but the results of those tests may not tell you everything you need to know.

For instance, you, as a consumer, probably won’t know how precisely the test was controlled for variability. Without proper controls, published results could be off by several percentage points one way or the other. A device promising, say, 5% improvement in fuel economy with a “margin for error” of up to 3% does not inspire a great deal of confidence.

When the manufacturer of an aerodynamic add-on device claims that using the product will get you a 5% improvement in fuel economy, can you trust that number? Based solely on that claim, no. In the first place, you don’t know what starting figure the percent improvement is based on. Secondly, you don’t know the uncertainty value of the test results.

For example, if the test results showed a 5% improvement with a variation in the test-truck-to-control-truck ratio (T/C ratio) of +/- 2%, your actual improvement could be as low as 3% or as high as 7%. The lower the variation in the T/C ratio, the better – but you don’t often see those numbers noted in marketing literature. Sometimes they are absent from the test results, too. Sometimes they are presented along the lines of “such and such results are said to be accurate to plus or minus 2%, 95% of the time.” Engineers call that an uncertainty value.
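To see how that arithmetic works, here is a minimal Python sketch of the T/C ratio calculation. It is a simplified illustration built on assumed run data, not the official SAE J1321 uncertainty formula (which uses Student’s t statistics and the protocol’s own degrees of freedom); every number in it is invented.

    import statistics
    from math import sqrt

    def percent_improvement(baseline_tc, test_tc):
        """Simplified J1321-style arithmetic. Each list holds per-run T/C
        ratios (fuel consumed by the test truck divided by fuel consumed by
        the control truck). The fuel saving is the drop in the mean T/C
        ratio once the device goes on; the uncertainty is a rough 95% band
        (about two standard errors) built from run-to-run scatter."""
        b_mean = statistics.mean(baseline_tc)
        t_mean = statistics.mean(test_tc)
        improvement = (1.0 - t_mean / b_mean) * 100.0
        # Combine the two segments' standard errors in quadrature, then double
        # for a ~95% band; the real protocol uses Student's t tables instead.
        se = sqrt(statistics.stdev(baseline_tc) ** 2 / len(baseline_tc)
                  + statistics.stdev(test_tc) ** 2 / len(test_tc))
        return improvement, 2.0 * se / b_mean * 100.0

    # Invented runs with tight scatter: roughly a 5% saving, +/- well under 1%.
    base_runs = [1.000, 1.004, 0.996]
    test_runs = [0.950, 0.953, 0.947]
    saving, uncertainty = percent_improvement(base_runs, test_runs)
    print(f"fuel saving {saving:.1f}% +/- {uncertainty:.1f}%")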

If a buyer knows that the uncertainty value of a test result was +/- 2% as opposed to +/- 5%, he or she could put more trust in the product with the lower uncertainty value.

This variability in the reporting of test results reflects the relative imperfection of the test procedure itself. While the SAE J1321 test is an industry standard, there have been several revisions, each one tightening up on its predecessor. Within the test procedure are the formulae for calculating the uncertainty values, and these depend on the differences testers see from one test run to the next and the differences between the test and control vehicles.

EPA’s SmartWay program does not publish actual test results or the margin of error, the so-called uncertainty value.

Big bold changes, like these pontoon-like side-skirt extensions on Peterbilt’s SuperTruck, could save thousands of dollars in fuel costs, but the investment decision would require some proof of payback over what the devices cost. Photo: Jim Park

Is ‘similar’ good enough?

Criticisms have been leveled at SmartWay verification methodology in the past, but some of that stemmed from weakness in the older version of the J1321 test. For example, that test protocol allowed the use of “similar” trucks, not identical trucks.

Two “similar” trucks could include an International ProStar and a Freightliner Cascadia, for example. Both are aerodynamic long-haul trucks, in many cases equipped with sleepers, side fairings, cab extenders, etc. They could have a “substantially similar” drivetrain and powertrain configuration, even though one might have a Cummins engine and the other a Detroit. The point is, the older J1321 test used by EPA’s SmartWay program did not demand the trucks be exactly the same. How can the results of the test be attributed solely to the fuel-saving device being tested when the engine performance and aerodynamic drag could differ from one truck to the other?

Wood says a test engineer who understands that weakness could use a less-efficient truck with lower aerodynamic drag as the control vehicle, while using the more efficient truck with higher aerodynamic drag as the test vehicle.

The difference in engine performance and aerodynamic drag could result in the two vehicles having nearly identical performance in low wind conditions but significantly different performance in high winds. This effect, whether on purpose or by accident, can minimize or maximize the tested benefit from a fuel-saving technology.

“The point is, under the old protocol, you would not have to report the difference, and therefore the difference in the tractors’ efficiency could be reported as part of the results of the device you tested,” Wood explains.

EPA responds to that criticism by saying, “Historically, EPA has encouraged manufacturers to use ‘identical’ trucks. We did not explicitly require it because there will be times in which identical trucks are simply unavailable. In such cases, it is necessary to work with the device manufacturers using similar trucks to produce viable results for their technologies.

“As we update our track test verification pathway, we are evaluating the benefits and costs of requiring the use of identical tractors and trailers,” the agency notes. “Industry-developed procedures such as the SAE J1321 typically incorporate flexibility at the request of fleets and manufacturers.”

Wood, however, contends that “a true SAE J1321 test has no built-in flexibility to accommodate anyone’s needs – not the fleet, the manufacturer, nor the EPA.”

To illustrate the potential for inconsistent test results, Wood looked at previously published test data for a trailer skirt claiming a 6% improvement; he found errors of +/- 3%. Further, in samples Wood shared with HDT, he analyzed 24 sets of published test results obtained on four different test tracks (his pass/fail screen is sketched in code after the list):

  • 10 of the test results were determined to be statistically invalid because the percentage of uncertainty was greater than the reported result.
  • 14 tests were deemed statistically valid because the reported result was greater than the calculated uncertainty.
  • Only 6 of the 24 sets of results he reviewed met one of the premier requirements for the EPA version of SAE’s J1321 test – a margin of error (uncertainty value) of less than 1%.
  • The average uncertainty across the 24 tests was +/- 1.8%; the highest was +/- 3%.
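The screening rule behind the first two bullets is simple enough to express in a few lines of Python. The sketch below applies it to invented claim/uncertainty pairs; these are not Wood’s actual data points.

    def statistically_valid(reported_pct, uncertainty_pct):
        """Wood's screen: a result is only meaningful if the reported saving
        exceeds the test's own uncertainty. A 2% claim carrying a +/- 3%
        band cannot be distinguished from zero."""
        return reported_pct > uncertainty_pct

    # Hypothetical claim/uncertainty pairs, not drawn from Wood's data set:
    for claim, unc in [(6.0, 1.0), (5.0, 2.0), (2.0, 3.0)]:
        verdict = "valid" if statistically_valid(claim, unc) else "invalid"
        print(f"claimed {claim}% at +/- {unc}% -> {verdict}")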

“The point to all this is when a fleet is going to buy an aero device, they need to know exactly what the uncertainty is,” Wood says. “Anyone can claim their skirt [saves] 6%, but if one skirt is +/- 1% and the other is +/- 4%, the fleet might think twice about the skirt with the 4% variability – if they knew it was there.”

Wind tunnels can be used to simulate crosswind conditions. Here a 6-degree yaw angle represents a 6-degree crosswind. The pink shading illustrates the greatest crosswind impact.

Changes coming

At the February annual meeting of the American Trucking Associations’ Technology & Maintenance Council, Sam Waltzer, SmartWay technology team leader for heavy vehicle aerodynamics, announced upgrades to the SmartWay testing program. These include a new-version J1321 test, plus protocols for on-road, wind tunnel, coast-down and computational fluid dynamics testing.

Manufacturers have the choice of selecting one of the tests, but “they have the option, and we hope there’s the incentive, to test the same device using multiple test methods,” Waltzer explained. “Each test method has its strengths, but you have to understand what kind of information you are getting, and how that can be applied to the selection process.”

In the next installment of this series, we’ll look at the changes to SmartWay’s testing protocol – and potential shortcomings fleets need to understand in evaluating test results.

About the author
Jim Park

Equipment Editor

A truck driver and owner-operator for 20 years before becoming a trucking journalist, Jim Park maintains his commercial driver’s license and brings a real-world perspective to Test Drives, as well as to features about equipment spec’ing and trends, maintenance and drivers. His On the Spot videos bring a new dimension to his trucking reporting. And he's the primary host of the HDT Talks Trucking videocast/podcast.
