We Don’t Really Know How AI Makes Its Decisions. How Big of a Problem Is That?
Human + AI > AI
By Troy Lowry
I’ve written many times about how we don’t really understand how AI makes its decisions. How much of a problem is that really?
Imagine yourself in court. You are on the witness stand in a trial where some group has claimed that your school’s admission practices are not consistent with a recent Supreme Court decision. The prosecutor asks you how your school decides who to admit. You answer: “We let the AI do it.” The prosecutor pushes you for answers on which factors the AI uses to decide. You can’t answer definitively.[1]
Even if it’s true that AI decisions are more consistent and less biased than the average human’s,[2] I suspect this would be a difficult situation for both you and your school, and to me the mere thought is a compelling reason not to use AI in admission decisions unless it is closely overseen and audited by a human.
Then again, science has shown us that humans don’t always know how they make decisions, as evidenced by unconscious bias. Still, humans, at least for now, prefer human judgment to machine judgment.
Imagine an alternate scenario, one where the AI makes the decisions, but you double-check and approve all of them before they are final. You, the human, make the final decision and ensure fairness. This scenario seems much more comfortable to me.
How AI “Thinks”
In my post with a high-level review of how AI “thinks,” I talked a lot about statistics. For a more in-depth look, I’d encourage everyone to read Timothy Lee’s excellent article on the topic. One key concept is the “word vector.” AI runs on computers, and computers understand only numbers, so to work with words the model must first convert them to numbers. However, it doesn’t convert each word to just a single number; rather, it converts it to a large array[3] of numbers.
These numbers can then be used to do word math. For instance, the word vector for “dog” plus the word vector for “sound” equals the word vector for “bark.”[5] The model then uses this word math to predict the next word. If you ask it to write a letter for you, it will literally predict every word, one after another, until it determines it has met your request.
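As a toy illustration of this word math, here is a sketch in Python. The four-dimensional vectors and the tiny vocabulary are entirely made up for illustration; real models use thousands of dimensions and tens of thousands of words. "Closest" is measured with cosine similarity, a standard choice for comparing word vectors:

```python
import numpy as np

# Made-up 4-dimensional word vectors (real models use thousands
# of dimensions; these numbers are invented for illustration).
vectors = {
    "dog":   np.array([0.9, 0.1, 0.0, 0.3]),
    "sound": np.array([0.0, 0.8, 0.1, 0.1]),
    "bark":  np.array([0.8, 0.9, 0.1, 0.4]),
    "meow":  np.array([0.1, 0.9, 0.8, 0.1]),
    "run":   np.array([0.5, 0.0, 0.1, 0.9]),
}

def cosine(a, b):
    """Similarity of two vectors' directions: 1.0 means identical."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "dog" + "sound": find the vocabulary word whose vector lies closest.
query = vectors["dog"] + vectors["sound"]
best = max((w for w in vectors if w not in ("dog", "sound")),
           key=lambda w: cosine(query, vectors[w]))
print(best)  # "bark" is nearest in this toy vocabulary
```

Note that, as the footnotes point out, the sum does not land exactly on any word’s vector; the model simply takes the nearest one in its vocabulary, which is what the `max` over cosine similarity does here.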
You might have read that some AI uses neural networks, which are based on the way human brains operate. Neural networks may sound mysterious and magical, but at their root, they too are statistical engines.
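To make the “statistical engine” point concrete: a single artificial neuron is just a weighted sum passed through a squashing function, which is exactly the form of logistic regression, a textbook statistical model. A minimal sketch, with made-up weights:

```python
import math

# A single artificial "neuron": a weighted sum of inputs passed
# through a squashing function. This is exactly logistic regression,
# a classic statistical model. Weights here are invented values;
# in a real network they are learned from data.
weights = [0.8, -0.4, 0.3]
bias = -0.1

def neuron(inputs):
    # Weighted sum of the inputs, plus a bias term...
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    # ...squashed to a value between 0 and 1 (a probability-like score).
    return 1.0 / (1.0 + math.exp(-z))

score = neuron([1.0, 0.5, 0.2])
```

A full neural network is many of these stacked in layers, but each unit is still doing this same weighted-sum-and-squash statistical operation.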
AI is nothing but statistics. It is statistics all the way down.
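One way to see “statistics all the way down” is to build the tiniest possible language model: count which word follows which in a corpus, then generate text by repeatedly predicting the most frequent next word. Real models use vastly richer statistics and far more context, but the shape of the process is the same. A sketch with a made-up one-sentence corpus:

```python
# A toy "language model": count which word follows which in a corpus,
# then generate text one predicted word at a time. This is bigram
# statistics; the corpus is invented for illustration.
corpus = "the dog hears a sound and the dog starts to bark".split()

# Build a table of next-word counts (the "statistics").
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, {}).setdefault(nxt, 0)
    follows[prev][nxt] += 1

def next_word(word):
    # Predict the statistically most frequent follower of `word`.
    counts = follows.get(word, {})
    return max(counts, key=counts.get) if counts else None

# Generate a few words, one prediction at a time.
word, output = "the", ["the"]
for _ in range(3):
    word = next_word(word)
    if word is None:
        break
    output.append(word)
```

Every generated word here comes from a frequency table and nothing else; scaled up enormously, with context windows and learned vectors instead of raw counts, that is still the statistical character of next-word prediction.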
How Humans Think
As humans, we like to believe that (a) an event occurs, (b) we process it in our rational brains, and then (c) we react accordingly. Science says otherwise. Studies show that actually (a) an event occurs, (b) we react, and then (c) we rationalize post hoc about why. In short, we make up stories about why we reacted instead of deciding how to react.[6]
In a series of incredibly interesting studies back in the 1960s, Roger Sperry and his colleagues worked with patients whose brain hemispheres had been surgically separated to help alleviate seizures. In one study, the word “walk” was flashed to one visual field, so that only one hemisphere saw it. Often subjects would get out of their chair and start walking. When asked why, they would answer something like “I’m going to get a Coke,” completely unaware that they had just been prompted to walk and had made up a rationalization to cover it.
In short, we really don’t understand how humans think either. In fact, it might be statistics all the way down as well.
Where Does That Leave Us?
In conclusion, AI decision-making and human thought have a fascinating parallel: both are shrouded in mystery and driven by complex mechanisms that aren't fully understood. While AI operates on a foundation of intricate statistical models and neural networks, human decisions are often the result of subconscious influences and post-hoc rationalizations.
The key takeaway here is the need for a balanced approach. Relying solely on AI for decisions, when it is currently impossible to truly understand its inner workings, has both real and perceived risks associated with it, as illustrated by the courtroom scenario. Conversely, human decisions, while more interpretable, are not free from bias and error.
The ideal path forward seems to be a synergy of human oversight and AI efficiency, ensuring decisions are not only fair and consistent, but also transparent and accountable. Using human decision-making as a double check on AI decision-making, rather than allowing the AI to be unfettered and unaccountable, is a way to reduce the unknowns and biases of both AI and humans.
As we continue to delve deeper into the realms of AI and human cognition, let's strive for a future where technology augments human judgment, rather than replacing it, in our relentless pursuit of fairness and progress.
1. If you can answer definitively, then you really aren’t using AI. While marketers will label almost any product “AI” these days, a clear list of factors used to make a decision is the realm of traditional computer programming, not AI.
2. A claim I’ve heard made, but one I would need hard evidence, specific to the particular AI model and use case, to believe.
3. ChatGPT converts each word[4] into 12,288 numbers.
4. Even that is an oversimplification. Sometimes it breaks words into pieces called “tokens,” and each token has 12,288 numbers.
5. It doesn’t exactly equal “bark.” Rather, of the approximately 50,000 word vectors in ChatGPT, “bark” is the closest to the result.
6. Of course, this is only one segment of human thought, and an oversimplification even there, but it does illustrate how we don’t truly understand how humans think.