The auto industry is already familiar with AI as it relates to automobile safety. Driver-assistance AI works to protect consumers from almost every type of distracted driving. Unfortunately, as crashes involving autonomous driving systems have shown, AI makes mistakes. People are overlooked, cars are missed, and accidents happen. Some are even deadly.

Another danger is the loss of knowledge and ability. With automotive safety features, drivers no longer feel the need to check over their shoulder when backing up, for example, because of rear cross-traffic alert (RCTA). Dependence on AI becomes a crutch, and it can be a dangerous one.

But that danger isn’t just in the cars themselves; it’s also in the companies that make them. As AI is adopted into automobiles, it’s also being adopted into automotive business and marketing. Adopting the new carries with it a fear of missing out, but, as a new McKinsey study points out, “Few leaders have had the opportunity to hone their intuition about the full scope of societal, organizational and individual risks” of adopting AI. Consumer data privacy protection, untested interaction models, and faulty algorithms can lead to major issues. OEMs, dealers, and other automotive-based organizations could find themselves struggling to mitigate the risks.

One clear observation from the study is the importance of establishing organizational responsibility for AI’s benefits and unintended consequences, because AI use will happen. Building robust structures and processes to identify and control every key risk demands more than most corporations can manage, and the task can be overwhelming at the dealership level.

And although we’ll have to leave the discussion of business objectives for another time, McKinsey suggests AI’s rapid adoption won’t come without its own set of problems.

As dealers begin to adopt systems, or hire out the processes related to customer service and acquisition, protecting customer data will be paramount. McKinsey says, “Making real progress demands a multidisciplinary approach involving leaders in the C-suite and across the company; experts in areas ranging from legal and risk to IT, security, and analytics; and managers who can ensure vigilance at the front lines.”

But what about cars? 

While AI implementation in the automotive C-suite is going slowly, the same can’t be said for the cars customers are buying. More AI is placed into vehicles each year, and soon, we may not even need to drive. Let’s look at a few applications of McKinsey’s report as they relate to the vehicles themselves:

Car manufacturing  

Visit any OEM manufacturing plant, and you’ll see robots working alongside humans to build the latest models. Audi, among many others, has been at the forefront of using robots to handle the more challenging tasks, and to do so with a precision that humans can’t match. The robots can also use machine learning to mimic their human co-workers in ways that can be unsettling. McKinsey’s report suggests that we’ll eventually see robots working without humans, which could create challenging HR issues that corporations aren’t prepared to handle.

Autonomous driving 

Another challenge to prepare for is the driverless car. With AI, fully autonomous driving is becoming a reality. The technology may simplify the driving process, but will it cause legal issues when there are accidents? And there will be. If your customer is riding in a car you made, running software you installed, who is responsible for an accident or damage? As we move into fully autonomous automobiles, corporate and dealer risk mitigation will need to be thought through. Industry leaders will need to understand the law of unintended consequences and have a clear plan for dealing with them.

Connected car services 

Every manufacturer has them, and they’re a perfect platform for AI’s ability to gather data and make predictive analyses for good. Connected car services allow drivers to pay for services while driving (or riding) at stores along their route, most likely tied into their favorite phone operating system. Access to real-time data gives manufacturers and their partners unprecedented insight into owners’ habits, but protecting that data from hackers and harmful manipulation will be the biggest challenge in the years ahead.

Final thoughts 

Building AI technology has never been the challenge. Acknowledging the risks associated with that technology, and setting up systems to reduce those risks, is the challenge.

As McKinsey says, “The survey findings suggest that a minority of companies recognize many of the risks of AI use, and fewer are working to reduce the risks—as was true in 2019. Cybersecurity remains the only risk that a majority of respondents say their organizations consider relevant. Overall, the share of respondents citing each risk as relevant has remained flat or has decreased, except national security.”

One would hope that a considered focus on learning tempers the desire for quick adoption. Most importantly, there’s quite a bit to learn about the potential risks to organizations, individuals, and society.

What is the balance between innovation and responsibility? Or creativity and ethics? And how do we manage something that we can’t imagine?

AI is powerful, and we’re entering a brave new world of possibilities. But AI also demands adherence to responsibility, ethics, and self-policing. According to McKinsey, “Organizations that nurture those capabilities will be better positioned to serve their customers and society effectively; to avoid ethical, business, reputational, and regulatory predicaments; and to avert a potential existential crisis that could bring the organization to its knees.”

Did you enjoy this article from Steve Mitchell? Read other articles from him here.
