Most people try to do the right thing most of the time. But "right" is relative, of course, and nowhere has that been clearer than in the recent AI boom: some hail the technology as world-saving, while others decry it as apocalyptic.
As the global tech industry races to build and deploy these new technologies, we at HBR wondered: What values guide tech leaders? What ideologies, cultural expectations, and mindsets shape their priorities? And what risks do those ethical frameworks pose for how AI will be developed?
We asked six experts on the tech industry's history and the ethics of AI to weigh in on these questions. Their responses illuminate the culture and mindset driving tech decision-making today, and what the ethos of today's leaders can tell us about the opportunities and threats we all face tomorrow. We've organized their edited responses into three sections: the industry's glamorization of speed, technologists' frequent obliviousness to the broader context of their products, and the fact that commonsense guardrails are too often an afterthought. This needs to change.
A Glamorization of Speed
Today, generative AI is the next big thing. Even as industry leaders warn of the technology's dangers and urge a six-month pause on training more advanced AI systems, the same age-old Silicon Valley mindset appears to be re-emerging: an ingrained desire for speed and growth that may hinder efforts to put adequate guardrails in place. "I've done this for 50 years. I've never seen something happen as fast as this," former Google CEO Eric Schmidt said on This Week with George Stephanopoulos in early April.
Technologists have feared intelligent computers for over 50 years, and robots taking over for even longer. Today, large language models suggest that computers are edging closer to human-like intelligence.
Yet the most significant danger isn't the technology itself; it's the ethos and business imperatives that have long defined its builders. AI tools are designed by humans, and humans make mistakes. The results are biased algorithms, hackable security, and deadly disinformation.
Speed is nothing new in the world of tech, but AI systems are now advancing so fast that they surprise even seasoned observers. Only time will tell how much we'll break by moving this fast.
An Obliviousness to the Broader Context
Marketers have long understood the importance of identifying target customers and meeting their needs. But in the race to build AI, many companies have yet to align developers' goals with those of customers, and ethical concerns get ignored along the way.
Consider the competition to build large language models, such as OpenAI's ChatGPT and Google's Bard. These tools do not reliably produce helpful and harmless content. They often reproduce the biases of the humans whose online writing they were trained on, and because they draw on vast swaths of internet text without being optimized for accuracy, they can confidently offer misleading or outright wrong responses.
So, what's the fix? AI developers must build their tools around the needs and priorities of the people who will use them, and around the contexts in which those tools will be used. Doing so forces ethical decisions at every step and pushes teams to build products that are not only profitable but also create real value for users.
A Lack of Guardrails
As computer scientist Inioluwa Deborah Raji, an expert in algorithmic auditing, warned in 2021, many tech leaders have declared that their products should be "fair" or "ethical" without doing the audits needed to ensure it. To build technology that does the right thing, we need to know what it actually does and how it does it. That means leaders must look for the ways their products are biased against different groups based on race, gender, ability, or other identities. For example, if your facial recognition tool performs better for people from particular racial groups, you have an algorithmic bias problem. Developers must test their AI systems regularly and be prepared to change the code, or even cancel a project, when problems are found.
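To make this kind of audit concrete, here is a minimal sketch of a disaggregated evaluation: comparing a model's accuracy across demographic groups and flagging large gaps. It is a hypothetical illustration, not a description of any specific company's process; the column names (group, label, prediction) and the disparity threshold are assumptions chosen for the example.

```python
import pandas as pd

def per_group_accuracy(df: pd.DataFrame,
                       group_col: str = "group",
                       label_col: str = "label",
                       pred_col: str = "prediction") -> pd.Series:
    """Compute accuracy separately for each demographic group."""
    correct = df[label_col] == df[pred_col]
    return correct.groupby(df[group_col]).mean()

def flag_disparity(accuracies: pd.Series, max_gap: float = 0.05) -> bool:
    """Flag the model if the best- and worst-served groups differ by
    more than max_gap (an arbitrary threshold for illustration)."""
    return (accuracies.max() - accuracies.min()) > max_gap

# Toy evaluation set with per-group labels and model predictions.
results = pd.DataFrame({
    "group":      ["A", "A", "A", "B", "B", "B"],
    "label":      [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 1, 0, 1, 1],
})

acc = per_group_accuracy(results)
print(acc)                  # group A: 1.00, group B: 0.33
print(flag_disparity(acc))  # True: the accuracy gap exceeds 5 points
```

In practice, a real audit would use far larger evaluation sets, multiple metrics (false positive and false negative rates, not just accuracy), and intersectional group definitions, but the core discipline is the same: measure performance separately for each group rather than trusting an aggregate score.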
Because these biases are often unconscious, proactivity is imperative. Beyond reviewing the end product, tech leaders must audit the code, data, and models that go into AI systems before those systems are released. The Biden administration has proposed legislation that would mandate this kind of auditing, and similar rules are likely to be adopted worldwide. This isn't a box-checking exercise: it's about working to ensure our technology does what we say it does and achieves the equity outcomes we want.
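A pre-release audit of the training data itself can start as simply as checking whether each group is adequately represented before a model is ever trained. The sketch below is one hypothetical version of such a check; the column name and the minimum-share threshold are illustrative assumptions, not a standard from any regulation cited above.

```python
import pandas as pd

def representation_report(train_df: pd.DataFrame,
                          group_col: str = "group",
                          min_share: float = 0.10) -> pd.DataFrame:
    """Report each group's share of the training data and flag groups
    falling below an illustrative minimum share."""
    shares = train_df[group_col].value_counts(normalize=True)
    report = shares.rename("share").to_frame()
    report["underrepresented"] = report["share"] < min_share
    return report

# Toy training set: group B and group C are barely present.
train = pd.DataFrame({"group": ["A"] * 90 + ["B"] * 8 + ["C"] * 2})
print(representation_report(train))
# A: 0.90, B: 0.08 (flagged), C: 0.02 (flagged)
```

A report like this doesn't fix bias on its own, but it surfaces the gaps early, while collecting more data or rebalancing the training set is still cheaper than recalling a shipped product.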