“Watch what you wish for” isn’t just an all-purpose cautionary admonishment – it’s the current state of affairs as organizations of all kinds grapple with the ripple effects of Generative artificial intelligence (AI) applications like ChatGPT, Bing, Bard and Claude. In the race to grab market share, companies such as Microsoft and Google have made AI ethics and controls a distant second priority, exposing users and society at large to a new world of risk. A few examples:
- Confidential Samsung meeting notes and source code were inadvertently released into the wild after being leaked externally via ChatGPT.
- ChatGPT falsely accused a George Washington University law professor of sexual assault after naming him in a generated list of legal scholars who had assaulted someone.
- By the very nature of their design, ChatGPT and other large language model (LLM) AIs freely hallucinate, producing generated text that is semantically or syntactically plausible but factually incorrect or nonsensical.
Separately and together, these phenomena ignite bias and disinformation. Although more than a thousand tech industry leaders have called for a six-month moratorium on further Generative AI development, it’s too little, too late. The proverbial cows are out of the barn – and companies must act swiftly and forcefully to curb the harms that ChatGPT and LLMs are perpetuating at an astonishing rate.
LLMs are “black box” AI systems
But first, let’s understand how Generative AIs like ChatGPT essentially manufacture bias and disinformation. Most LLM AIs work on a highly opaque premise, grouping facts together probabilistically. These probabilities are based on what the AI has learned from the data it was trained on, and how it has learned to associate data elements with one another. However, none of the following details get surfaced when using ChatGPT:
- We have no explanation of how the AI has learned, and no model interpretability.
- We don’t have access to the specific data used, or to the derived probabilities that would let us determine whether or not to trust the generative AI.
- We are not given the means to reason about or challenge the outcome.
Because the hidden probabilities and associations in LLM generative AIs are never surfaced and shared, they are simply another form of “black box” AI under a veneer of clever and engaging banter. We cannot know whether to trust a Generative AI’s output when making a decision, and it is wrong to treat its answer as absolute truth.
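To make that opacity concrete, here is a deliberately tiny, hypothetical sketch in Python. The prompt, vocabulary and weights are invented for illustration and stand in for the billions of hidden parameters a real LLM learns; the point is that only the sampled words ever reach the user – the probabilities, the training data and any notion of confidence stay locked inside the box.

```python
import random

# A toy stand-in for an LLM's next-word distribution. The weights below are
# hypothetical, standing in for billions of opaque learned parameters.
NEXT_WORD_PROBS = {
    "the capital of australia is": [
        ("canberra", 0.70),   # correct
        ("sydney", 0.25),     # plausible but wrong
        ("melbourne", 0.05),  # plausible but wrong
    ],
}

def generate(context: str) -> str:
    """Sample a continuation; only the word is surfaced, never the odds."""
    words, weights = zip(*NEXT_WORD_PROBS[context])
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    prompt = "the capital of australia is"
    for _ in range(5):
        # The user sees fluent output with no confidence score, no source
        # data and no explanation -- the "black box" in miniature.
        print(prompt, generate(prompt))
```

Roughly three times in ten, this miniature model confidently names the wrong city, and nothing in its output signals that the answer was ever in doubt.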
Correlations are not causalities
When we interrogate a generative AI, we implicitly aspire to get causal explanations for outcomes. But machine learning models and generative AIs look for correlations or probabilities, not causality. That’s where we humans need to insist on model interpretability – the reason why the model gave the answer it did – and truly understand whether an answer is a plausible explanation, rather than taking the outcome at face value and risking action on an erroneous correlation.
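The classic confounding example makes the distinction tangible. The sketch below is hypothetical – it simply generates toy data in which summer heat drives both ice-cream sales and sunburns, then shows that a purely correlational signal between the two weakens sharply once the real cause is held fixed (it uses only the Python standard library; statistics.correlation requires Python 3.10 or later).

```python
import random
import statistics

# Hypothetical toy data: summer heat (a hidden confounder) drives both
# ice-cream sales and sunburn counts. Neither causes the other.
random.seed(0)
heat = [random.uniform(0, 35) for _ in range(1000)]            # daily temperature
ice_cream = [2.0 * h + random.gauss(0, 10) for h in heat]      # sales driven by heat
sunburns = [0.5 * h + random.gauss(0, 3) for h in heat]        # burns driven by heat

# A purely correlational learner would happily "predict" sunburns from
# ice-cream sales -- a strong association with no causal meaning.
r_all = statistics.correlation(ice_cream, sunburns)
print(f"correlation(ice cream, sunburns), all days: {r_all:.2f}")        # strong

# Interpretability means asking *why*. Hold the true cause (heat) roughly
# fixed and the apparent link fades: ice cream explains nothing.
cool = [(s, b) for h, s, b in zip(heat, ice_cream, sunburns) if h < 10]
r_cool = statistics.correlation([s for s, _ in cool], [b for _, b in cool])
print(f"correlation(ice cream, sunburns), cool days only: {r_cool:.2f}")  # much weaker
```

Nothing in the correlational answer itself distinguishes the two scenarios; only a question about mechanism does.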
Until generative AIs can answer this call and support scrutiny of their ethical, responsible and regulatory underpinnings, they should not be trusted to provide answers that may significantly affect people’s well-being or financial outcomes.
ChatGPT is not trained on trusted data
When data scientists build a machine learning model, we study and understand the data used to train it. We understand that the data is inherently full of bias and representation issues. The accuracy of LLM Generative AI systems depends on the corpus of data used and its provenance.
ChatGPT is mining the internet for training data – that’s a lot of data, and much of it is of unknown or questionable provenance. Furthermore, this data may not be properly governed, may not be managed for data bias, and may be used without consent. This reality essentially foments bias and makes it impossible to assess the accuracy of LLMs’ responses to questions to which we don’t know the answer.
Alarmingly, further inaccuracy can be amplified by the AIs themselves or through adversarial data attacks that recreate or inject data to strengthen misrepresentation. All these issues spell inaccuracy, trouble, ethical concerns and ultimately ‘answer drift’ – the semantic equivalent of 1+2=3 today, but 1+2=7 tomorrow. Collectively they constitute unknown potential liability and risk.
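A toy feedback loop shows how drift of this kind can take hold. The numbers and the bias term below are invented purely for illustration: a trivially simple “model” keeps answering a question, its own slightly biased outputs are recycled into its training corpus, and the answer walks away from the truth one generation at a time.

```python
import random

# Hypothetical illustration of "answer drift": a model whose own biased
# outputs are recycled as training data slowly walks away from the truth.
random.seed(1)
TRUE_VALUE = 3.0                                   # the real answer ("1 + 2 = 3")

# Start from clean, lightly noisy observations of the true value.
data = [TRUE_VALUE + random.gauss(0, 0.1) for _ in range(1000)]

for generation in range(1, 6):
    model_answer = sum(data) / len(data)           # the "model" is just a running mean
    print(f"generation {generation}: the model answers {model_answer:.2f}")
    # The model now generates new "facts" carrying a small systematic bias
    # (a stand-in for hallucination or an adversarial data injection)...
    synthetic = [model_answer + 0.4 + random.gauss(0, 0.1) for _ in range(1000)]
    # ...and those ungoverned outputs are swept back into the training corpus.
    data.extend(synthetic)
```

Within a handful of generations the answer creeps steadily away from 3 – today’s 1+2=3 quietly heading toward tomorrow’s 1+2=7 – without any single step looking obviously wrong.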
How to reduce bias in Generative AI systems
In my data science organization, we’ve used generative AI for more than 10 years to create types of data that we don’t see today, in order to simulate a particular kind of effect. In fraud detection, for example, we learn what normal customer behavior looks like using all related statistics, probabilities and collaborative profiling technologies applied to carefully curated data. We then apply a data generation specification for producing simulation data; data scientists will specify, “These are 5, 10 or 20 behaviors I want to see in my robustness study, such as 3% of transactions being out of order,” or “0.1% of the time you’ll have two or three very large transactions within one to two minutes.” It’s a rule-based generation specification, and every step is auditable and subject to scrutiny.
The generated data is always labeled as synthetic rather than real data. This allows us to understand where in our models and processes that data is, or is not, allowed to be used. We treat it as walled-off data for test and simulation purposes only; synthetic data produced by generative AI does not inform the model going forward. We contain this generative asset and do not allow it ‘out in the wild.’
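For illustration only, here is a minimal Python sketch of what a rule-based generation specification in this spirit might look like. The field names, rates and dollar amounts are hypothetical, not production tooling; the point is that every behavior is declared explicitly, the run is reproducible from a fixed seed, and every record is permanently labeled as synthetic.

```python
import random
from dataclasses import dataclass

# A hypothetical, minimal rule-based generation spec; names and thresholds
# are illustrative only.
@dataclass
class GenerationSpec:
    out_of_order_rate: float = 0.03        # "3% of transactions out of order"
    burst_rate: float = 0.001              # "0.1% of the time, two or three
    burst_size: tuple[int, int] = (2, 3)   #  very large transactions..."
    large_amount: float = 9_000.0          # "...within one to two minutes"

def generate_transactions(spec: GenerationSpec, n: int, seed: int = 42) -> list[dict]:
    """Produce auditable simulation data, every record labeled as synthetic."""
    rng = random.Random(seed)              # fixed seed keeps every run reproducible
    txns, timestamp = [], 0.0
    while len(txns) < n:
        timestamp += rng.expovariate(1 / 60.0)               # roughly one txn per minute
        ts = timestamp - 120 if rng.random() < spec.out_of_order_rate else timestamp
        txns.append({"id": len(txns), "ts": round(ts, 1),
                     "amount": round(rng.uniform(5, 200), 2),
                     "is_synthetic": True})                   # walled off from production models
        if rng.random() < spec.burst_rate:
            for _ in range(rng.randint(*spec.burst_size)):    # two or three large txns
                timestamp += rng.uniform(30, 60)              # within one to two minutes
                txns.append({"id": len(txns), "ts": round(timestamp, 1),
                             "amount": spec.large_amount,
                             "is_synthetic": True})
    return txns

sample = generate_transactions(GenerationSpec(), n=1000)
print(len(sample), "synthetic transactions generated under an explicit, reviewable spec")
```

Because the specification itself is the artifact under review – not an opaque model – anyone auditing the study can read exactly which behaviors were injected and at what rates, and the is_synthetic flag keeps that data from ever informing a production model.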
Responsible AI practices are imperative
The AI industry has worked extremely hard to build trust in AI through Responsible AI initiatives, including AI governance and model development standards. Responsible AI includes Robust AI, Explainable AI, Ethical AI and Auditable AI as tenets, underscoring the fact that AI models are just tools, not gospel. As statistician George Box said, “All models are wrong, but some are useful.” We’ve long wished for Generative AI that seems almost magically intelligent, but assuming the answers ChatGPT provides are anything more than “potentially useful” is imprudent and downright dangerous.
About the Author
Scott Zoldi is chief analytics officer at FICO responsible for the analytic development of FICO’s product and technology solutions. While at FICO, Scott has been responsible for authoring more than 110 analytic patents, with 71 granted and 46 pending. Scott is actively involved in the development of new analytic products and Big Data analytics applications, many of which leverage new streaming analytic innovations such as adaptive analytics, collaborative profiling and self-calibrating analytics. Scott is most recently focused on the applications of streaming self-learning analytics for real-time detection of cyber security attacks. Scott serves on two boards of directors, Software San Diego and Cyber Center of Excellence. Scott received his Ph.D. in theoretical and computational physics from Duke University.