
Friday, Sep 22, 2023
By Leslie D'Monte

Why driverless cars may still need a human hand; Generative AI may help

In a setback to the cause of driverless cars, the California Senate passed a bill on 11 September mandating that a trained human safety operator be present when a self-driving, heavy-duty vehicle operates on public roads in the state. Ironically, the move defeats the very purpose of having a driverless vehicle. And in this specific case, it effectively bans driverless autonomous vehicle (AV) trucks.

The bill, however, awaits the signature of California governor Gavin Newsom before it becomes law. If signed, the legislation would ban self-driving trucks weighing more than 10,000 pounds (4,536 kg), at least until 2029. These range from UPS delivery trucks to massive semi-trucks.


Self-driving trucks, also known as autonomous trucks, operate without human input, aided by sensors such as Lidar (light detection and ranging), radar, cameras, ultrasonic sensors and GPS (global positioning system), along with complex AI algorithms. While supporters of the bill believe its passage will help address concerns about safety and about the future loss of truck-driving jobs to automation, those opposing the bill argue that it will not increase safety and will hinder the development of technology in California. The bill also requires the California Department of Motor Vehicles (DMV) to provide evidence of safety to policymakers and submit a report evaluating the performance of AV technology by 1 January 2029, or five years after testing begins.
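Before a self-driving truck acts on any of those sensor readings, its software typically fuses them, discounting unreliable inputs and reasoning about the worst case. The sketch below is purely illustrative (the class and function names are hypothetical, not from any real AV stack): it shows, in miniature, how a planner might pick the nearest trustworthy obstacle reading from several sensors.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One obstacle report from a single sensor."""
    sensor: str        # e.g. "lidar", "radar", "camera"
    distance_m: float  # range to the obstacle, in metres
    confidence: float  # sensor's self-reported confidence, 0.0 to 1.0

def fuse_min_distance(detections, min_confidence=0.5):
    """Discard low-confidence reports and return the nearest remaining obstacle.
    A real AV stack would do far more (tracking, prediction, redundancy checks);
    this only illustrates the confidence-filter-then-worst-case pattern."""
    trusted = [d for d in detections if d.confidence >= min_confidence]
    if not trusted:
        return None  # no reliable reading: a real system would fall back or brake
    return min(trusted, key=lambda d: d.distance_m)

readings = [
    Detection("lidar", 12.4, 0.95),
    Detection("radar", 11.8, 0.90),
    Detection("camera", 30.0, 0.30),  # low confidence, e.g. due to glare
]
nearest = fuse_min_distance(readings)
```

Here the camera reading is dropped for low confidence, and the radar report at 11.8 m wins as the nearest trusted obstacle.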

Waymo’s autonomously driven Jaguar I-PACE electric SUV Picture Credit: Waymo

In India, minister for road transport and highways Nitin Gadkari has adopted a similar stance when speaking about driverless vehicles. He has often said that driverless cars will not be allowed in India because the government is not going to promote any technology that comes at the cost of jobs. To his credit, though, Gadkari is the very same minister who has actively encouraged digital transformation in transportation, be it electric vehicles, bullet trains, metro rail, or hydrogen vehicles. What may also work in his favour is that driverless cars may find most Indian roads too chaotic to function.

The fact, however, is that there is no stopping automation. Companies will continue to use AI software bots and AI-powered robots to maintain their global competitive edge. Job losses due to generative AI and foundational AI models will further queer the pitch. Will our policymakers only adopt technologies that create jobs and shun those whose impact we do not fully understand yet? If that were the case, one may well argue, we would have missed the benefits of the Industrial Revolution and would still be living in the smokestack era.

The case with driverless cars, though, is not all that simple.

The good and bad sides of AVs

Automated vehicles can save lives and reduce injuries since 94% of serious crashes are due to human error, according to the US National Highway Traffic Safety Administration (NHTSA). There are additional economic and societal benefits. Automated vehicles may also provide new mobility options to millions of people in the US who have some form of disability. Moreover, shared self-driving car fleets can directly compete with urban taxis and public transport services. While some drivers may lose their jobs, consumers would get access to reasonably priced transport options.

And for now, many so-called driverless cars around the world are “partially self-driving” rather than “fully autonomous”, i.e., equipped with Level 5 automation, which implies full automation in all conditions as defined by SAE International, the US-based association that develops global standards for the mobility industry.
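SAE's J3016 taxonomy actually defines six levels, from no automation at Level 0 up to full automation at Level 5. A minimal reference table (the helper function is just an illustration of where the "fully autonomous" line falls):

```python
# SAE J3016 driving-automation levels, as defined by SAE International.
SAE_LEVELS = {
    0: "No Driving Automation",
    1: "Driver Assistance",
    2: "Partial Driving Automation",
    3: "Conditional Driving Automation",
    4: "High Driving Automation",
    5: "Full Driving Automation",  # all conditions, no human driver needed
}

def is_fully_autonomous(level: int) -> bool:
    """Only Level 5 requires no human driver under any conditions;
    Level 4 is still restricted to specific operating domains."""
    return level == 5
```

Most "driverless" cars on public roads today sit at Level 2 or, in limited robotaxi deployments, Level 4.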

But we also have fully autonomous driverless cars already plying on some roads in the Netherlands, Finland, Norway, Sweden, China, the UK, US, and Germany. For instance, the suburban Beijing city district of Yizhuang officially lets local robotaxi operators, primarily Baidu and a startup, charge fares for fully autonomous taxis with no human staff inside.

According to an 18 September Allied Market Research report, the global self-driving truck market alone is expected to be valued at $13.11 billion in 2025 and is projected to touch $41.21 billion by 2035. The global self-driving truck market is driven by factors such as the development of intelligent transport systems, the growth of connected infrastructure and improved safety coupled with a reduction in traffic congestion. However, the report also acknowledges that a rise in security and privacy concerns and software failures associated with automotive sensors are hampering the growth of the self-driving truck market.

To be sure, driverless vehicles are not above reproach. Consider this. About 20 Cruise-operated Chevrolet Bolts were recently seen stalled up and down San Gabriel Street in Austin, Texas. A Cruise spokesperson said in a statement to The Drive, “Cruise continuously monitors its fleet, and we were alerted to a crowding event on Sunday morning. We were able to address it, and all vehicles departed the area autonomously. We apologize for any inconvenience.”

Picture courtesy of Safe Street Rebel

Further, protesters in San Francisco have been routinely disabling driverless cars, mostly those run by Cruise (owned by General Motors) and Waymo (owned by Google parent Alphabet), by placing traffic cones on their bonnets, a move that foxes the sensors and paralyzes the cars. Cruise and Waymo are yet to address this issue.

Meanwhile, one of the protest groups called Safe Street Rebel has catalogued hundreds of near misses and blunders with Cruise and Waymo vehicles over the past few months, even without traffic cones. They argue, among other things, that “Robotaxis are effectively above the law. Their fleets cannot be cited for traffic violations. It is essential that this serious loophole be fixed before they are allowed to expand operations. Furthermore, as they refuse to share incident data, the public, as well as city agencies, must rely on social media posts to determine the extent of the problems they cause. A robust and independent reporting system must be put in place.”

Safe Street Rebel also underscores that robotaxi companies have made big promises about accessibility, but their “cars are not wheelchair accessible and do not pull up to the curb”. They add that shifting drivers who have been made redundant by driverless cars “to even more invisible positions in call centres and support cars makes them even easier to exploit and quashes unionization efforts”. The protest group also alleges that AV companies partner with police and serve as mass surveillance tools. “They constantly capture audio and video without our consent. This unprecedented invasion of the public’s privacy will likely have far-reaching effects on the rights of the general public.”

The upshot is that the laws governing autonomous vehicles will have to be clear about who is responsible in the event of traffic jams or tragedies such as accidents or even deaths, since courts will have to rule in the absence of a human driver. Further, there needs to be clarity on how insurance companies will tackle cases of driverless cars being wrecked, stolen, or even tampered with. These are just a few of the many issues being raised.

Consider, too, that today’s planes can easily be operated in autopilot mode, even while taking off and landing. They also have black boxes that can reveal what went wrong in case of any tragedy or mishap. Yet, the laws mandate that planes must have human pilots.

In short, autonomous vehicles should be able to explain the “why” behind their specific decisions. It’s in this context that Microsoft-backed, London-based self-driving car maker Wayve has taken a step in the right direction. It has developed LINGO-1, which it describes as a first-of-its-kind vision-language-action model (VLAM) for self-driving. LINGO-1 can explain to drivers how its AI is “thinking” and making driving decisions, aiming to address the lack of trust and transparency in autonomous vehicles. It uses natural language to help people more easily understand the reasoning and decision-making capabilities of Wayve’s AI Driver technology. Trained using real-world data from Wayve’s drivers commentating as they drive, LINGO-1 can explain the reasoning behind driving actions.

LINGO-1 can also respond to questions about a diverse range of driving scenes, which allows Wayve to improve the model through feedback. As an example, you can ask LINGO-1, “Why did you slow down?” The answers given allow Wayve to evaluate the model’s scene comprehension and reasoning, which can enable Wayve to more efficiently pinpoint improvements, as well as help build confidence in the system. Wayve is currently testing its self-driving technology daily on UK roads and is undertaking Europe’s largest last-mile autonomous grocery delivery trial with supermarket Asda.
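Wayve has not published a public API for LINGO-1, so the snippet below is purely illustrative: a toy lookup that mimics only the question-and-answer shape of the interaction the article describes, with every name and canned answer hypothetical.

```python
# Hypothetical sketch: maps driving actions to natural-language explanations,
# loosely mimicking the "ask the car why" interaction LINGO-1 enables.
# This is NOT Wayve's API; a real VLAM generates answers from vision and
# language inputs rather than looking them up in a table.
DRIVING_LOG = {
    "slow down": "A pedestrian stepped toward the crossing ahead, so I reduced speed.",
    "stop": "The traffic light at the junction turned red.",
}

def explain(question: str) -> str:
    """Return a canned explanation for a driving action mentioned in the
    question, or a fallback when no logged action matches."""
    q = question.lower()
    for action, reason in DRIVING_LOG.items():
        if action in q:
            return reason
    return "No matching driving action found in the log."

print(explain("Why did you slow down?"))
```

The value of the real system lies in exactly this loop: a human asks about a decision, the model answers in plain language, and engineers use the answers to audit scene comprehension.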

As this example demonstrates, keeping humans in the loop will help address questions about trust and accountability, even though it may not please tech companies or the shareholders of these companies.


Nearly 10,000 organizations have experienced data breaches resulting in consumer data leakage in the last four years, according to a new report by Nord Security.

Source: NordPass



Musk’s Neuralink gets nod for first human trial

Neuralink, a brain-chip startup founded by billionaire entrepreneur Elon Musk, has obtained permission to kick off its first human trial. The focus of the clinical study will be on patients who have paralysis caused by cervical spinal cord injuries or amyotrophic lateral sclerosis (ALS). The announcement came on 19 September, although the specific number of participants remains undisclosed. The long-term vision for Neuralink, as articulated by Musk, goes beyond paralysis treatment. The founder has ambitious plans that extend to rapid surgical insertion of chip devices to manage an array of conditions, from obesity and autism to depression and schizophrenia.

Generative AI for CRM

Salesforce’s AI suite, Einstein, has received an update with a generative AI conversational assistant integrated into the company’s customer relationship management (CRM) apps. The assistant can automate sales, service, marketing, commerce, and development tasks and be customized by customer administrators to access and reference specific data. Einstein Copilot can also utilize third-party large language models (LLMs) like OpenAI’s GPT-3.5. The assistant’s operations are secured by Salesforce’s Einstein Trust Layer, which screens every AI response and records all AI interactions for compliance and auditing purposes.

Hope you folks have a great weekend, and your feedback will be much appreciated.

Download the Mint app and read premium stories
Google Play Store | App Store | Privacy Policy | Contact us
You received this email because you signed up for HT newsletters or because it is included in your subscription.
Copyright © HT Digital Streams. All Rights Reserved