AI-related products and technologies are built and deployed in a societal context: a dynamic and complex collection of social, cultural, historical, political, and economic circumstances. Because societal contexts are by nature dynamic, complex, nonlinear, contested, subjective, and highly qualitative, they are difficult to capture in the quantitative representations, methods, and practices that dominate standard machine learning (ML) and responsible AI product development approaches.
The first stage of AI product development is problem understanding, and this phase has enormous influence over how problems (e.g., increasing the availability and accuracy of cancer screening) are framed for ML systems to solve, as well as many downstream decisions, such as dataset and ML architecture choices. When the societal context in which a product will operate is not articulated well enough to yield a robust understanding of the problem, the resulting ML solutions can be fragile and even propagate unfair biases.
When AI product developers lack access to the knowledge and tools needed to effectively understand and account for societal context during development, they tend to abstract it away. This abstraction leaves them with a shallow, quantitative understanding of the problems they aim to solve, while product users and society stakeholders, who are proximate to those problems and embedded in the associated societal context, tend to have deep, qualitative understandings of those same problems. This qualitative-quantitative divergence in ways of understanding complex problems, which separates product users and society from developers, is what we call the problem understanding gap.
This gap has real-world consequences: for example, it was a root cause of the racial bias discovered in a widely used healthcare algorithm intended to solve the problem of selecting patients with the most complex healthcare needs for special programs. An incomplete understanding of the societal context in which the algorithm would operate led its designers to form incorrect and oversimplified causal theories about the underlying problem factors. Critical socio-structural factors, including lack of access to healthcare, lack of trust in the healthcare system, and underdiagnosis due to human bias, were left out, while healthcare spending was emphasized as the predictor of complex health needs.
To bridge the problem understanding gap responsibly, AI product developers need tools that put community-validated and structured knowledge about complex societal problems at their fingertips, starting with problem understanding and continuing throughout the product development cycle. To that end, Societal Context Understanding Tools and Solutions (SCOUTS), part of the Responsible AI and Human-Centered Technology (RAI-HCT) team at Google Research, is a dedicated research team focused on the mission to "empower people with the scalable, trustworthy societal context knowledge required to solve society's most complex problems." SCOUTS is motivated by the significant challenge of articulating societal context, and it conducts innovative foundational and applied research to produce structured societal context knowledge and to integrate it into all phases of the AI-related product development life cycle. Last year we announced that Jigsaw, Google's incubator for building technology that explores solutions to threats to open societies, applied our structured societal context knowledge approach during the data preparation and evaluation phases of model development to mitigate bias at scale in their widely used Perspective API toxicity classifier. Going forward, SCOUTS' research agenda focuses on the problem understanding phase of AI-related product development with the goal of bridging the problem understanding gap.
Bridging the problem understanding gap
Bridging the AI problem understanding gap requires two key ingredients: 1) a reference frame for organizing structured societal context knowledge, and 2) participatory, non-extractive methods to elicit community expertise about complex problems and represent it as structured knowledge. SCOUTS has published research on both fronts.
An illustration of the problem understanding gap.
A societal context reference frame
An essential ingredient for producing structured knowledge is a taxonomy for creating the structure to organize it. SCOUTS collaborated with other RAI-HCT teams (TasC, Impact Lab), Google DeepMind, and external system dynamics experts to develop a taxonomic reference frame for societal context. To account for the complex, dynamic, and adaptive nature of societal context, we leverage complex adaptive systems (CAS) theory to propose a high-level taxonomic model for organizing societal context knowledge. The model identifies three key elements of societal context and the dynamic feedback loops that bind them together: agents, precepts, and artifacts.
- Agents: These can be individuals or institutions.
- Precepts: The preconceptions, including beliefs, values, stereotypes, and biases, that constrain and drive the behavior of agents. An example of a basic precept is that "all basketball players are over 6 feet tall." That limiting assumption can lead to failures in identifying basketball players of shorter stature.
- Artifacts: Agent behavior produces many kinds of artifacts, including language, data, technologies, societal problems, and products.
The relationships among these entities are dynamic and complex. Our work posits that precepts are the most critical element of societal context, and we highlight the problems people perceive and the causal theories they hold about why those problems exist as especially influential precepts that are fundamental to understanding societal context. For example, in the case of the racially biased healthcare algorithm described earlier, the designers' causal theory precept was that complex health problems would cause healthcare expenditures to rise across all populations. That incorrect precept directly led to the choice of healthcare spending as the variable for the model to predict as a proxy for complex healthcare needs, which in turn biased the model against Black patients who, due to societal factors such as lack of access to healthcare and underdiagnosis caused by bias, do not always spend more on healthcare when they have complex needs. A key open question is: how can we ethically and equitably elicit causal theories from the people and communities closest to problems of inequity and transform them into useful structured knowledge?
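The effect of that misspecified causal theory can be seen in a toy example. This is not the actual algorithm or real data; the numbers and group labels are invented purely to illustrate how ranking on a proxy (spending) diverges from ranking on the true quantity (need) when one group spends less at equal need:

```python
# Toy illustration of proxy misspecification: ranking patients by spending
# under-selects a group that spends less at the same level of true need.
# All values below are invented for illustration.
patients = [
    # (group, true_need, annual_spending)
    ("A", 9, 9000), ("A", 8, 8500), ("A", 3, 3000),
    ("B", 9, 5000), ("B", 8, 4500), ("B", 3, 2500),
]

def select_top(metric_index, k=2):
    """Return the group labels of the top-k patients ranked by one column."""
    ranked = sorted(patients, key=lambda p: p[metric_index], reverse=True)
    return [p[0] for p in ranked[:k]]

by_spending = select_top(2)  # the proxy the designers chose
by_need = select_top(1)      # the ground truth they intended to capture
print(by_spending)  # only group A is selected
print(by_need)      # both groups are selected
```

Both groups contain patients with identical true need, yet the spending proxy selects only group A, which mirrors the mechanism of the bias described above.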
An illustrative version of the societal context reference frame.
A taxonomic version of the societal context reference frame.
Working with communities to advance responsible AI in healthcare
Since its inception, SCOUTS has worked to build capacity in historically marginalized communities to articulate the broader societal context of the complex problems that matter to them, using a practice called community-based system dynamics (CBSD). System dynamics (SD) is a methodology for articulating causal theories about complex problems, both qualitatively as causal loop and stock and flow diagrams (CLDs and SFDs, respectively) and quantitatively as simulation models. The inherent support of visual qualitative tools, quantitative methods, and collaborative model building makes it an ideal ingredient for bridging the problem understanding gap. CBSD is a community-based, participatory variant of SD focused specifically on building capacity within communities to collectively describe and model the problems they face as causal theories, directly and without intermediaries. With CBSD we have seen community groups learn the basics and begin drawing CLDs within 2 hours.
AI has enormous potential to improve medical diagnostics. But the safety, equity, and reliability of AI-powered health diagnostic algorithms depend on diverse and balanced training datasets. An open challenge in the health diagnostics space is the dearth of training data from historically marginalized groups. SCOUTS collaborated with the Data 4 Black Lives community and CBSD experts to produce qualitative and quantitative causal theories for this data gap problem. The theories include critical factors that shape the broader societal context surrounding health diagnostics, including cultural memory of death and trust in medical care.
The figure below depicts the causal theory that emerged from the collaboration described above as a CLD. It hypothesizes that trust in healthcare influences all parts of this complex system and is the key lever for increasing screening, which in turn generates the data needed to narrow the data diversity gap.
Causal loop diagram of the health diagnostics data gap.
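To give a sense of how a qualitative CLD like this can become a quantitative SD simulation model, the core reinforcing loop (trust drives screening, screening generates data, and positive screening experiences build trust) could be simulated with simple stocks and flows. The equations and parameters below are invented for illustration and are not taken from the published model:

```python
# Minimal stock-and-flow sketch of the reinforcing loop:
# higher trust -> more screening -> more data, and screening feeds back into trust.
# All rates and coefficients are illustrative assumptions only.
def simulate(steps=50, dt=1.0, trust=0.2):
    """Euler-integrate a toy trust/screening/data system; returns (trust, data) history."""
    data = 0.0  # stock: accumulated screening data
    history = []
    for _ in range(steps):
        screening_rate = trust * 100.0       # flow: screenings per time step
        data += screening_rate * dt          # data accumulates from screening
        # feedback: positive screening experiences gradually build trust (capped at 1.0)
        trust = min(1.0, trust + 0.01 * screening_rate / 100.0)
        history.append((trust, data))
    return history

low_trust = simulate(trust=0.1)
high_trust = simulate(trust=0.5)
# Starting from higher trust, the system accumulates more data over the same horizon.
assert high_trust[-1][1] > low_trust[-1][1]
```

Even a crude simulation like this makes the qualitative claim testable: trust acts as the lever that determines how quickly the data gap closes.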
These community-sourced causal theories are a first step toward bridging the problem understanding gap with trustworthy societal context knowledge.
Conclusion
As discussed in this blog, the problem understanding gap is a critical open challenge in responsible AI. SCOUTS conducts exploratory and applied research, in collaboration with other Google Research teams, the external community, and academic partners across multiple disciplines, to make meaningful progress toward solving it. Going forward, our work will focus on three key elements, guided by our AI Principles:
- Increase awareness and understanding of the problem understanding gap and its implications through talks, publications, and training.
- Conduct foundational and applied research to represent and integrate societal context knowledge into AI product development tools and workflows, from conception to monitoring, evaluation, and adaptation.
- Apply community-based causal modeling methods to the AI health equity domain to realize impact, and build both community and Google capacity to produce and use societal context knowledge at global scale to realize responsible AI.
SCOUTS flywheel for bridging the problem understanding gap.
Acknowledgments
Thanks to John Gilliard for developing the graphics, and to everyone in SCOUTS and all of our collaborators and sponsors.