
EnLighting Views

My First Autonomous Vehicle Ride: “Easy Rider” meets “My Mother the Car”

A few weeks ago, in October, I had the opportunity in Frankfurt to see and ride fully-electric autonomous shuttles on a geo-fenced portion of the Mainkai along the Main River. As an advocate for ADAS and sensing suites, including the future roles of LIDAR and AI, I was thrilled when my wife and I happened upon this trial of an EasyMile system.

EasyMile is a French firm with €22M invested so far by corporate partners, and it has put together an interesting set of trial and special-purpose (e.g., airport) operating sites using various architectures of sensors and vehicles. EasyMile states it has 200 systems deployed and 150 employees in five offices (Toulouse, Berlin, Denver, Singapore and Melbourne). The company has established large-scale technical and commercial partnerships with prestigious industrial groups such as Alstom, Continental and TLD (a world leader in airport ground transportation). The Frankfurt site went live in September, kicked off by none other than Chancellor Angela Merkel taking a ride. The system uses core technology from Germany's Tier 1 auto supplier Continental AG, including LIDAR, ABS, short-range radar and a redundant brake system for safety.

The Frankfurt trials do include a safety rider/consultant from EasyMile's local German partner, but the routing is fixed, with planned stops programmed via GPS for picking up and discharging passengers, and only LIDAR is used at this time for safety braking. Other systems EasyMile has deployed elsewhere use a mix of sensors and related communications (see the sketch after this list), which can include:

·       LIDARs (Light Detection and Ranging)

·       Cameras

·       Radars

·       Differential GPS

·       Inertial Measurement Unit (IMU)

·       Odometry

·       Communication with its environment (V2X) and with the supervision center via a 4G data network
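
Taken together, these sensors form a configurable suite that varies by deployment. As a minimal sketch of how such a suite might be represented in software, with field names and defaults that are our own illustrative assumptions rather than anything from EasyMile's actual code:

```python
# Hypothetical sensor-suite configuration for an autonomous shuttle.
# Field names and values are illustrative assumptions, not EasyMile's.
from dataclasses import dataclass

@dataclass
class SensorSuite:
    lidars: int = 2                # used for safety braking and localization
    cameras: int = 0               # optional in some deployments
    radars: int = 0                # e.g., short-range Continental units
    differential_gps: bool = True  # fixed-route waypoint following
    imu: bool = True               # inertial measurement unit
    odometry: bool = True          # wheel-based dead reckoning
    v2x: bool = False              # communication with infrastructure
    uplink: str = "4G"             # link to the supervision center

# The Frankfurt configuration as we understood it: a fixed GPS-programmed
# route with LIDAR-only safety braking and supervision over 4G.
frankfurt = SensorSuite()
print(frankfurt)
```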

We had a chance to watch the big Easy in action one day, making its methodical slow turns and stops within its geo-fenced area, which was also open to pedestrians and bikes. Easy flashed banks of lights as a warning to oncomers within a given distance, providing what should be a clear message (though in India, horn honking would have been the norm from 200 meters out). One issue we saw was how the LIDAR and software behaved: even someone standing on the curb in the boarding area forced Easy to stop short unless they were at least 3 feet back from the curb edge. I pictured myself in NYC or near most any city intersection, stepping off the curb to check oncoming traffic, perhaps scouting a jaywalking spot, making an early crossing ahead of a light change, or hailing a taxi, and envisioned ADAS cars stopping dead in their tracks and getting rear-ended by live drivers.
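
That curb standoff suggests a simple proximity threshold in the obstacle logic. Here is a minimal sketch of such a rule; the threshold value, names and structure are our assumptions, not EasyMile's implementation:

```python
# Hypothetical proximity-based stop rule, NOT EasyMile's actual code.
# Assumes LIDAR returns have already been clustered into obstacles.

STANDOFF_M = 0.9  # roughly 3 feet, the curb standoff we observed

def should_stop(obstacle_distances_m: list[float]) -> bool:
    """Return True if any detected obstacle is inside the standoff zone.

    obstacle_distances_m: distances in meters from the vehicle envelope
    to each LIDAR-detected obstacle along the planned path.
    """
    return any(d < STANDOFF_M for d in obstacle_distances_m)

# A pedestrian at the curb edge (0.5 m away) halts the shuttle, even
# where a human driver would simply proceed slowly.
print(should_stop([4.2, 0.5]))  # True -> hard stop
```

The rub, of course, is that a fixed threshold cannot distinguish a waiting passenger from a jaywalker, which is exactly the behavior we watched play out.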

The next day we had enough time to get in line at a pick-up spot to hitch a ride, much like waiting for a bus or train. These particular vehicles in Frankfurt are designed for six seated passengers, so we took our seats with our backs facing the direction of travel, and within 20 seconds watched the three passengers on the rear seats go flying onto the floor as Easy Rider slammed to a hard stop. A biker had ridden straight at it head-on, taking evasive action at the last second and having a good laugh at gaming the system. And he was just playing, not a Grubhub delivery biker in NYC willing to challenge the sides, fronts and backs of buses, cars and pedestrians for a bonus dollar and an adrenaline rush.

In Frankfurt, in nice weather, we had seen both bikers and walkers play similar games the day before, but without such abrupt results. Our safety rider was more shaken up than the people on the floor, but the experience points out some issues to deal with: (1) geo-fencing may need real fencing; (2) more real-world use cases and more sensor and software development are needed; (3) people will do stupid things, whether in a rush, through inattention to good street-traffic protocols, or, not infrequently, just for kicks, messing with automation and other people... their Mother needs to yell at them when they even think of doing something like that. That may be the ultimate Unicorn opportunity in AI applications and distributed sensing communications.

 

3D Image Sensing in Autonomous Mobility Systems – Revolution or Evolution?

In late June we helped organize, and attended, a great conference hosted by OSA-OIDA in San Jose, featuring a range of Tier 1 auto suppliers, leading-edge LIDAR firms, supporting-systems and on-board data processing players, and major autotech investors. "Should we be a Cheetah, or a Hyena?" was a challenge posed to C-level mobility-sector teams by Alexei Andreev of Autotech Ventures.

The context: while a large amount of capital has funded many companies over the past few years to address various aspects of driver-assist and ADAS vehicles, with a vision of driverless systems, the market has come to realize that the timeframe to large-scale deployment is pushing out; technology readiness, integration challenges, and Big Auto and regulatory processes are all going to move more slowly than thought. We've gone from the euphoria of landing and feasting on big game easily to discovering that the easy prey is not so easy, and the trough of disillusionment is upon the sector. Yes, some LIDAR firms like Princeton LightWave and Innoluce were taken under the wings of large players like Ford-Argo and Infineon two years ago, but in mid-2019 we are seeing and hearing of LIDAR firms struggling to produce results that work at high volume and to raise yet more money; a few, like Drive.ai, are closing shop; and Big Auto and its incumbent supply chain are stepping into the "Standards" game to create methods to validate vendors' solutions, test sensor-suite architectures, and push down TCO. But many of these efforts, like the AVSC organized by SAE with Ford, GM and Toyota, are focused only on L4/L5. The clear result is adoption timeframes pushing out beyond mere tests and trials. While some investors and vendors tout Level 4 and 5 solutions, the preponderance of real-world dialog is now around L2, L2.5, or L2+/L3.

The talks by TE Connectivity and Aquantia highlighted that the in-vehicle infrastructure for moving and processing data from extensive sensor suites, in particular LIDAR, is a gating factor, one that auto manufacturers will address, but in their conservative manner. As of June 2019 there are only about 1,400 self-driving cars in on-road testing in the US, run by 80+ firms, with Waymo holding the major share. In contrast, Tesla, with over 500,000 cars collecting data over a broad area, arguably has far more data for mining best-of-breed solutions and corner cases in real-world situations. In late May, at a Stanford University optical sensing workshop, the head of Ford's Palo Alto R&D Center said they track over 80 LIDAR firms, not including several in stealth or formation, plus several dozen radar and camera firms, the vast majority suitable for Level 3 or lower. When asked about timing for Levels 4 and 5, the comment was: for L5, at least 25 years, maybe never; for L4, including getting the on-board bandwidth to handle it within the needed latency, maybe 5 to 10 years, as some stock analysts have opined recently. So while VC-backed firms burst from the gates like Cheetahs drooling for the big game to live off for years, the near-term strategy may call for picking some bite-size pieces to feast on in viable application spaces.
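
To see why on-board bandwidth gates the sensor suite, a rough back-of-envelope estimate helps; the sensor counts and per-unit data rates below are our own illustrative assumptions, not figures from the talks:

```python
# Back-of-envelope estimate of raw sensor-suite output. All figures
# are illustrative assumptions, not numbers from TE Connectivity or
# Aquantia.

# (sensor, units on the vehicle, approximate raw output per unit, Mbit/s)
suite = [
    ("camera, 1080p uncompressed", 8, 1500),
    ("LIDAR",                      4,  100),
    ("short/mid-range radar",      6,   15),
]

total_mbps = sum(count * rate for _, count, rate in suite)
print(f"aggregate raw sensor output: {total_mbps / 1000:.1f} Gbit/s")
# ~12.5 Gbit/s with these assumptions, versus the 1-10 Mbit/s of
# classic CAN/FlexRay buses: hence the push toward multi-gigabit
# automotive Ethernet (Aquantia's focus) and conservative OEM rollouts.
```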

We are optimistic that LIDAR will play a material role in mobility sensing solutions, in and beyond auto. Having been involved for 20+ years in the optical component and systems sector as an investor, advisor and founder, including roles with a successful LIDAR innovator in Princeton LightWave, we see this as clearly inning one of a long game. One colleague from the OSA-OIDA Optical Sensor planning committee did some calculations based on the major choices of components that go into a LIDAR unit, plus choices of market focus and manufacturing process, and estimated there are over 1,000 product design configurations (see the sketch below). Even if half of these won't match best cost or a market need, the estimated 100 LIDAR firms known today don't come close to covering all viable design-market alternatives, so Wave 1 of the technology and companies will be followed by another wave. Many of these Wave 1 firms are hunting in the same fields, or trying to be a Cheetah without adequate fuel to get up to speed. Several LIDAR firms have reached beyond the auto big game to find valuable pockets of opportunity in industrial, avionics and security sensing. As in all perceived hyper-growth market opportunities, when reality hits, some firms will go to the graveyard, some will narrow their hunting focus near term, and some will merge to broaden their hunting base, morphing into Hyenas to survive and maybe return as future Cheetahs.
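
The configuration count is straightforward combinatorics: multiply the number of choices along each design axis. A minimal sketch, where the axes and choice counts are our illustrative assumptions rather than the committee member's actual model:

```python
# Illustrative combinatorics behind the "over 1,000 configurations"
# estimate. Design axes and choice counts are assumptions for
# illustration only.
from math import prod

design_axes = {
    "wavelength (905 nm, 1550 nm, ...)":                    3,
    "ranging method (pulsed ToF, FMCW, ...)":               3,
    "beam steering (mechanical, MEMS, OPA, flash)":         4,
    "detector (APD, SPAD, ...)":                            3,
    "target market (auto, industrial, avionics, security)": 4,
    "manufacturing process":                                3,
}

configs = prod(design_axes.values())
print(configs)  # 3 * 3 * 4 * 3 * 4 * 3 = 1296, i.e. "over 1,000"
```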

On behalf of OSA and the organizers, Rob Murano of II-VI, Sabbir Rangwala of Patience Consulting and John Dexheimer of LightWave Advisors, we thank the June 27 speakers representing AEYE, Analog Devices, Aquantia, Autotech Ventures, Fabrinet, Finisar, Gener8, Guangzhou Automotive Corp, North American Lighting (Koito Group), Ouster, Robert Bosch, Rockley Photonics, Samtec, Spatial Integrated Systems, TE Connectivity, Tetravue, Tractica and Veoneer. https://www.osa.org/en-us/meetings/osa_meeting_archives/2019/oida_forum_on_optics_in_autonomy/


Next issues:

·       Advice to Entrepreneurs on Corporate Venture Capital

·       Playing the LIDAR Match Game

·       Standards Initiatives for Auto ADAS – Status and Implications

·       Driving Toward Chip-Scale LIDAR