Opinion: AI needs a strong code of ethics to keep its dark side from overtaking us

With the advent of generative artificial intelligence, AI-industry leaders have been candidly expressing their concerns about the power of the machine learning systems they are unleashing.

Some AI creators, having launched their new AI-powered products, are calling for regulation and legislation to curb their use. Suggestions include a six-month moratorium on the training of AI systems more powerful than OpenAI’s GPT-4, a call that poses several alarming questions:

  • Should we let machines flood information channels with propaganda and untruth?

  • Should we automate away all the jobs, including the fulfilling ones? 

  • Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? 

  • Should we risk the loss of control of our civilization?

In response to these concerns, two paths have received the most attention: legislative regulation and moratoria on development. There is a third option: not creating potentially dangerous products in the first place.

But how? By adopting and implementing an ethical framework, companies gain a path for the development of AI, and legislators gain a guide for crafting responsible regulation. This approach can help AI leaders and developers wrestle with the myriad decisions that accompany any new technology.

Standing for values

For several years, we have been listening to senior representatives of Silicon Valley companies, and we are impressed by their desire to maintain high ethical standards for themselves and their industry. That desire is evident in the number of initiatives seeking to ensure that technology will be “responsible,” at “the service of humanity,” “human centered,” and “ethical by design.” It reflects personal commitments to doing good, as well as understandable aversions to reputational damage and long-term commercial harm.

So we find ourselves at a rare moment of consensus between public opinion and the ethical values corporate leaders have said should guide technological development: values such as safety, fairness, inclusion, transparency, privacy and reliability. Yet despite these good intentions, bad things still seem to happen in the tech industry.

What we lack is an accompanying consensus on exactly how to build these values into products and services, and thus achieve the goals that both the public and industry leaders desire.

For the past four years, the Institute for Technology, Ethics, and Culture in Silicon Valley (ITEC) — an initiative of the Markkula Center for Applied Ethics at Santa Clara University with support from the Vatican’s Center for Digital Culture at the Dicastery for Culture and Education — has been working to develop a system to connect good intentions to concrete and practical guidance in tech development.

The result of this project is a comprehensive roadmap that guides companies toward organizational accountability and the production of ethically responsible products and services. This strategy includes both a governance framework for responsible technology development and use, and a management system for deploying it.

The approach is laid out in five practical stages suitable for leaders, managers, and technologists:

  • establishing tech ethics leadership;

  • candidly assessing the organization’s culture;

  • developing a tech ethics governance framework for the organization;

  • embedding tech ethics into the product development life cycle and transforming the organization’s culture;

  • measuring success and pursuing continuous improvement.

People working in organizations developing new and powerful technologies of all kinds now have a resource that has been missing: one that lays out the difficult work of bringing well-considered and necessary principles to a level of granularity that can guide the engineer writing code or the technical writer drafting user manuals. It shows, for example, how to move from a principle calling for AI that is fair, inclusive and non-discriminatory to examining usage data for signs of inequitable access to a company’s products and developing remedies.

We believe that such specific guidance for moving from principles to practice will promote agency and action among tech leaders. Rather than doing little or nothing in the face of a nebulous impending tech doom, industry leaders can now examine their practices to see where they might improve. And they can ask their peer organizations whether they are doing the same.

We have done our best to build on the work already being done in industry and to add to it what we know about ethics. We believe we can build a more just and caring world. A more ethically responsible tech industry, with more responsible AI products and services, is possible. With the stakes so high, the effort is surely worth it.

Ann Skeet and Brian Green are authors of “Ethics in the Age of Disruptive Technologies: An Operational Roadmap” (The ITEC Handbook) and colleagues at the Markkula Center for Applied Ethics at Santa Clara University. Paul Tighe is secretary of the Vatican’s Dicastery for Culture and Education.
