From 2019, Updated for 2023: Three People-Centered Design Principles for AI
Images courtesy of Pexels.com licensed CC0, Diagram of the Mindful Monitoring Approach produced by D. Bray

Dear Colleagues,

Back in 2019, my friend and collaborator R "Ray" Wang and I co-authored an article for MIT Sloan Management Review entitled "Three People-Centered Design Principles for Deep Learning," based on what we were both seeing in our work with companies and governments using AI at the time. Though four years in the AI space could be considered many digital lifetimes, in revisiting the piece for 2023 I was struck by how much of what we proposed to make AI more people-centered applies now to Large Language Models and other forms of Generative AI.

Bad data and poorly designed AI systems can lead you to spurious conclusions and hurt customers, your products, and your brand.

So here is a version of that piece (the original is here at MIT Sloan Mgmt Review) updated for 2023. For anyone telling you we cannot have some semblance of safeguards for AI, I'd suggest sharing this piece with them, with its emphasis on: (1) creating data advocates, (2) establishing mindful monitoring of data pools, and (3) communicating bounded expectations.

Design Principles for Deep Learning

Over the past decade, organizations have begun to rely on an ever-growing number of algorithms to assist in making a wide range of business decisions, from delivery logistics, airline route planning, and risk detection to financial fraud detection and image recognition. We are seeing the end of the second wave of AI, which began several decades ago with the introduction of rule-based expert systems, and are moving into a new, third wave of perception and generative AI. It is in this next wave that a specific subset of AI, called deep learning, will play an even more critical role.

Like other forms of AI, deep learning tunes itself and learns by using data sets to produce outputs — which are then compared with empirical facts. As organizations begin adopting deep learning, leadership must ensure that artificial neural networks are accurate and precise because poorly tuned networks can affect business decisions and potentially hurt customers, products, and services.

The Importance of People-Centered Principles for AI

As we move into this next stage, the key question for organizations will be how to embrace deep learning to drive better business decisions while avoiding biases and potentially bad outcomes. In working with numerous clients across multiple industries, we have identified patterns that can help companies reduce error rates when implementing deep learning initiatives.

Our experiences working with organizations in these early stages of AI adoption have helped us create design principles for a people-centered approach to deep learning ethics, with a strong focus on the data employed to tune networks. A deliberately designed, people-centered approach helps address both short-term concerns, such as poorly trained AI networks that produce spurious solutions, and longer-term concerns that machines might displace humans in business decision-making.

When we talk about people-centered design, we mean principles that benefit all individuals and communities, rather than a few individuals benefiting at the expense of others. Our people-centered design principles support the goal of providing and informing people with data so they have more opportunities in their work. In our experience, there are three key design principles organizations need to hold up as pillars for any AI implementation:

  1. Transparency. Wherever possible, make the high-level implementation details of your AI project available to all those involved. People should understand what AI is, how it works, including how data sets are used to tune algorithms, and how AI may affect their work. When intellectual property or other sensitive information might be exposed, an organization may want to include a panel of external stakeholders, keeping in mind that certain data sets might need to be protected from disclosure if they contain sensitive information or raise privacy concerns.

  2. Explainability. Employees within an organization and external stakeholders, including potential customers, should be able to understand how any AI system arrives at its contextual decisions. The focus here is less on explaining how the machine reached its conclusions, since AI often cannot be explained at that level of detail, and more on what method was used to tune the algorithm(s) involved, what data sets were employed, and how human decision makers decided to act on the algorithm's conclusion. (An illustrative decision-record sketch appears after this list.)

  3. Reversibility. Organizations also must be able to reverse what a deep learning effort learns. Think of this as the ability to unlearn specific information or data, which helps protect against unwanted biases in data sets. Reversibility must be designed in from the conception of an AI effort and often will require cross-functional expertise and support.
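In practice, the simplest form of reversibility is keeping a provenance tag on every training record so that data from a source later judged biased or unreliable can be excluded and the model rebuilt without it. The minimal Python sketch below assumes that approach; the record fields, function names, and the caller-supplied train_fn are illustrative, and more targeted machine-unlearning techniques exist beyond full retraining.

```python
from dataclasses import dataclass
from typing import Callable, List, Set, Tuple

@dataclass
class TrainingRecord:
    features: list
    label: int
    source: str  # provenance tag, e.g., the vendor or pipeline that supplied the record

def retrain_without(records: List[TrainingRecord],
                    revoked_sources: Set[str],
                    train_fn: Callable[[List[TrainingRecord]], object]) -> Tuple[object, int]:
    """Drop every record from a revoked source, then retrain from scratch."""
    kept = [r for r in records if r.source not in revoked_sources]
    return train_fn(kept), len(records) - len(kept)

# Usage (hypothetical): model, n_removed = retrain_without(all_records, {"vendor_x"}, train_model)
```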
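For the explainability principle above, one lightweight practice is to publish a plain-language record of what method tuned the model, which data sets were used, and how people act on its conclusions. The sketch below is one possible shape for such a record, assuming a Python code base; the class, field names, and street-sign example values are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DecisionRecord:
    """Plain-language record of how an AI-assisted decision was produced."""
    model_name: str
    tuning_method: str              # e.g., "convolutional network fine-tuned on labeled images"
    training_data_sets: List[str]   # names of the data sets used to tune the model
    human_decision_process: str     # how people decided to act on the model's conclusion
    caveats: List[str] = field(default_factory=list)

# Example record a team might publish alongside a deployed model (values are made up).
record = DecisionRecord(
    model_name="street-sign-classifier-v2",
    tuning_method="convolutional network fine-tuned on labeled street-sign images",
    training_data_sets=["trusted_street_signs_2023"],
    human_decision_process="Safety engineers review low-confidence classifications before release.",
    caveats=["Night-time images are under-represented in the current trusted pool."],
)
print(record)
```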

Three Methods to Put These Design Principles into Action

In addition to these three key design principles, there are three methods companies can use to put the principles into action in their AI projects. These methods aim to reduce the risk of introducing poorly tuned AI systems and inaccurate or biased decision-making in pilots and implementations.

  • Create data advocates. To reduce the risk of poorly tuned artificial neural networks, organizations can create a data advocate or ombudsman function that brings together human stakeholders from different business units (potentially including outside stakeholders as well). Data advocates are responsible for ensuring the data sets are both appropriate for the questions being asked of any artificial neural network and sufficiently diverse for optimal tuning.

AI efforts trained on bad data can pose risks for human workers and favor human biases. By proactively setting up a data advocate function, organizations can leverage AI while benefiting from human oversight to ensure any errors or flaws in data sets and AI outputs are caught early.
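Data advocates will rely mostly on human judgment, but some of their checks can be automated so that problems surface early. The Python sketch below shows the kind of basic screening a data advocate might run before a data set is approved for tuning; the function name and thresholds are assumptions for illustration, not recommended values.

```python
from collections import Counter
from typing import List

def review_data_set(labels: List[str], max_class_share: float = 0.8, min_records: int = 1000) -> List[str]:
    """Return human-readable flags for the data advocate to follow up on."""
    flags = []
    if len(labels) < min_records:
        flags.append(f"Only {len(labels)} records; the set may be too small to tune on.")
    top_label, top_count = Counter(labels).most_common(1)[0]
    if top_count / len(labels) > max_class_share:
        flags.append(f"Label '{top_label}' makes up {top_count / len(labels):.0%} of records; "
                     "the set may not be diverse enough for optimal tuning.")
    return flags

# A heavily skewed data set is flagged before it reaches any training pipeline.
print(review_data_set(["approve"] * 950 + ["deny"] * 50))
```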

  • Establish mindful monitoring of data pools. Another way to reduce risk is for organizations to establish a mindful monitoring system to test data sets for biases. This technique requires identifying three pools of data sets: (1) trusted data — the “trusted pool”; (2) potentially worthwhile data — the “queued pool”; and (3) problematic or unreliable data — the “naysayer pool.” (See “The Mindful Monitoring System for AI.”) In this type of monitoring system, the data outputs from a deep learning system — which are tuned on a queued pool of data (yet to be fully vetted or accepted) — are compared with the outputs from the trusted pool of data.

Organizations can more robustly prepare for AI implementation by focusing on a multipronged, mindful monitoring approach that includes:

  1. Trusted Data Pool: Vetted data that is fit for training AI systems. Monitoring actions for a company include: regularly assess whether previously approved data might now be obsolete, problematic, or unreliable.

  2. Queued Data Pool: Data that may be useful for training AI but has not yet been vetted. Monitoring actions for a company include: regularly assess whether this data can improve the company's existing pool of trusted data.

  3. Naysayer Data Pool: Data that is unfit for training AI. This pool is used to check the other pools for outdated or inaccurate data, as well as potential data-poisoning attempts. Monitoring actions for a company include: regularly assess the robustness and diversity of the data used to train the deep learning system.

[Diagram: The Mindful Monitoring Approach for more robustly preparing an organization's data for AI implementation]

For example, a company’s trusted pool of data for deep learning training might include already classified images of street signs and the appropriate action to take at each sign. The queued pool may include additional images of street signs at different angles, in different lighting conditions, and different weather conditions — with tagging by an unvetted source external to the company. By combining human and automated review, the organization can then assess if the queued pool of data can be useful in expanding the company’s existing pool of trusted data. This allows the deep learning system to improve and get smarter while monitoring and protecting against inaccurate data.

At the same time, the organization would want to compare data outputs from the queued pool and the naysayer pool. For the same example, the naysayer pool might include images that look like street signs but aren’t.

The goal of the naysayer pool is to challenge the robustness and diversity of the data used to train the deep learning system and to check if previously approved data for the organization might now be obsolete, problematic, or unreliable.
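To make the comparison concrete, here is a minimal Python sketch of the promotion step described above, in which a candidate model tuned on trusted plus queued data is measured against a baseline tuned on trusted data alone and challenged with the naysayer pool. The train, accuracy, and reject_rate functions and the thresholds are caller-supplied assumptions; in practice this check would sit alongside, not replace, human review.

```python
from typing import Callable, List, Tuple

def promote_queued_data(trusted: List, queued: List, naysayer: List,
                        train: Callable[[List], object],
                        accuracy: Callable[[object, List], float],
                        reject_rate: Callable[[object, List], float],
                        min_gain: float = 0.0, min_reject: float = 0.95) -> Tuple[object, str]:
    """Compare a candidate model (trusted + queued data) against the current
    baseline (trusted data only), and challenge it with the naysayer pool
    before promoting the queued data."""
    baseline = train(trusted)
    candidate = train(trusted + queued)

    gain = accuracy(candidate, trusted) - accuracy(baseline, trusted)
    robustness = reject_rate(candidate, naysayer)  # share of look-alike items correctly rejected

    if gain >= min_gain and robustness >= min_reject:
        return candidate, "queued pool promoted into the trusted pool"
    return baseline, "queued pool held back for further human review"
```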

  • Communicate bounded expectations. Organizations also should clearly specify how data sets will be used to train AI networks, and explain to external stakeholders and internal employees what the accepted norm will be for how the company relies on the data gathered with deep learning. For example, the organization may use data sets on financial transactions for the last seven years to inform what credit cards to offer customers — but it will not use its deep learning system to make credit card offers on the basis of gender or race, which would be immoral and illegal. This method of setting bounded expectations requires a clear list of what the organization can do with the data it generates or acquires, along with what it cannot do. Companies should also make clear the steps that have been taken to verify these bounds — ideally through a third party, such as an outside compliance review.
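One way to make such bounds verifiable is to encode them as a machine-checkable policy next to the training pipeline. The sketch below assumes a Python pipeline and mirrors the credit-card example above; the policy structure, field names, and check function are illustrative assumptions rather than a standard schema.

```python
# Bounded expectations expressed as a policy the pipeline can enforce (illustrative only).
DATA_USE_POLICY = {
    "purpose": "rank credit-card offers for existing customers",
    "permitted_fields": {"transaction_history_7y", "account_tenure", "repayment_record"},
    "prohibited_fields": {"gender", "race"},
    "verification": "annual review of these bounds by an outside compliance party",
}

def check_features(feature_names, policy=DATA_USE_POLICY):
    """Fail fast if a training pipeline tries to use a prohibited field."""
    violations = set(feature_names) & policy["prohibited_fields"]
    if violations:
        raise ValueError(f"Prohibited fields in training data: {sorted(violations)}")
    return True

check_features(["transaction_history_7y", "account_tenure"])  # passes quietly
```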

These methods specifically focus on deep learning, given the dependence of artificial neural networks on the data- and people-centered choices organizations make to produce optimally trained algorithms. Taken together, they can help organizations prepare to implement successful AI programs that avoid major risks to both individuals and communities.

We can build more people-centered AI, and the time to start - if we haven't already - is now.

Joe Boutté

Change Agent, Servant-Leader, Strategic Advisor, Systems Engineer, Consortia Member @ QED-C | Quantum Ecosystem, Data Hog/Connoisseur, Aspiring Prompt Engineer


Great update. I'm especially interested in the data advocates (DAs) who ensure human oversight, diverse insights, and timely correction. DAs are essential in this emerging environment to mitigate risks associated with faulty data, errors, biases, and silos. The need for DAs exemplifies how AI is changing the workforce and the skills it requires, and how trust is built. Along with mindful monitoring of data pools to ensure quality, protection against data poisoning, and iterative reviews, the article provides actionable approaches to address many issues, including people-centered design. Thanks for the revisit and update. 😎

Diana Wu David

Futurist | Financial Times Faculty | Author | Keynote & TEDx Speaker | Board Director


Thoughtful approach, David and R "Ray". Do you feel that most companies have the capacity or capabilities to execute on these recommendations? Is this being spearheaded more by the IT function, or are you seeing more broad-based responsibility?
