I read NOS.nl almost every day. But every now and then their IT articles make me scratch my head. Like now, with this article.
The thing is, AI is not a pure "black box", even though it often feels that way to laypeople. Nor is AI an "uncontrollable force". At its core, AI is a system that recognizes and applies patterns with probabilities, among other things to understand and generate human language.
"AI" is a term that by now triggers an almost physical gag reflex. We have heard it plenty. Yet it turns out that many people still do not understand it. So it is good to know that it is not nearly as magical as it seems. Far from it, even.
Brief, but in depth
I am going to go very deep very briefly, so hold on; then at least we have that part out of the way. It works like this: AI is not a black box in the sense that there is no insight into, or control over, the system as a whole. On the contrary, at the system level there is plenty of control. AI is an umbrella term for many different techniques. More importantly, AI is steered by the choices of its makers: through data, model architecture and the limitations they build in. That starts with the data you train it on and continues through the code and configuration that determine how it behaves.
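To make that concrete: below is a deliberately tiny sketch in Python. It is a toy, not a real AI system, but it shows where those human choices sit. The example data, the configuration value and the blocked-topics list are all made up for illustration.

```python
# Toy illustration, not a real AI system. The point: every part of the
# behavior below traces back to a decision a human made beforehand.
import string

# 1. Maker's choice: the "knowledge" the system is allowed to draw on.
TRAINING_EXAMPLES = {
    "opening hours": "We are open Monday to Friday, 9:00-17:00.",
    "return policy": "You can return items within 30 days.",
}

# 2. Maker's choice: configuration that shapes how it responds.
CONFIG = {"min_confidence": 0.5}

# 3. Maker's choice: a hard limit built into the code itself.
BLOCKED_TOPICS = {"medical", "diagnosis"}

def answer(question: str) -> str:
    cleaned = question.lower().translate(str.maketrans("", "", string.punctuation))
    words = set(cleaned.split())

    # The built-in limit fires before any "model" is consulted.
    if words & BLOCKED_TOPICS:
        return "I am not allowed to answer that. Please ask a professional."

    # A crude stand-in for pattern matching with a confidence score.
    best_topic, best_score = None, 0.0
    for topic in TRAINING_EXAMPLES:
        topic_words = set(topic.split())
        overlap = len(words & topic_words) / len(topic_words)
        if overlap > best_score:
            best_topic, best_score = topic, overlap

    if best_topic is None or best_score < CONFIG["min_confidence"]:
        return "I do not know; the makers decided I should say so instead of guessing."
    return TRAINING_EXAMPLES[best_topic]

print(answer("What are your opening hours?"))
print(answer("Can you give me a medical diagnosis?"))
```

Nothing in that little program is mysterious, and the same holds, at a much larger scale, for the systems people call AI: the data, the thresholds and the hard limits were all put there by someone.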
That "scary steam train" is mostly framing
AI is now presented as the scary steam train of the 1800s, thundering past and looking almost devilish. People have always struggled with abrupt change. But it is not even that big a leap. In 2018 software developers were already building NLP for input and NLG for output. For those unfamiliar with the terms: NLP (natural language processing) tries to understand human language, and NLG (natural language generation) tries to produce human language as an answer.
The reason is simple: people communicate with computers in language. NLP and NLG are ways to let an old rusty machine like a computer, with all its zeros and ones, understand something and talk back in a way that feels natural to humans.
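As a rough, hypothetical sketch of that split (toy Python rules, not any specific library or model): one function plays the NLP role of extracting what the user wants, the other plays the NLG role of wrapping the raw result in a human-sounding sentence.

```python
# Toy sketch of the NLP/NLG split. Real systems use trained models;
# here simple rules stand in for them to show where each part sits.

def nlp_understand(text: str) -> dict:
    """NLP role: turn free-form human language into structured data."""
    text = text.lower()
    if "weather" in text:
        return {"intent": "get_weather", "city": "Amsterdam" if "amsterdam" in text else None}
    if "time" in text:
        return {"intent": "get_time"}
    return {"intent": "unknown"}

def nlg_generate(result: dict) -> str:
    """NLG role: turn structured data back into a human-sounding sentence."""
    if result["intent"] == "get_weather":
        city = result["city"] or "your area"
        return f"Here is the forecast for {city}: mostly cloudy, 12 degrees."
    if result["intent"] == "get_time":
        return "It is currently 14:32."
    return "Sorry, I did not understand that."

# The machine in the middle works with structured data (dicts, numbers);
# only the edges speak "human".
structured = nlp_understand("What is the weather in Amsterdam?")
print(nlg_generate(structured))
```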
The outside versus the inside
But that is only the interface layer; in plain terms, the outside. Inside, it still largely works the same way, in the sense that it revolves around data, models and code. NLP and NLG are the outer shell that lets a computer talk with humans more naturally. Internally, AI is steered by a combination of learned patterns and constraints and frameworks set by humans.
In practice there are all kinds of routing and steering mechanisms that determine how input is processed and how an answer is formed. Depending on the system, input can pass through several components, or be turned into an answer directly by a single model. After that, a layer of "powdered sugar" is sprinkled on top to turn it into a sentence that reads nicely for humans.
But that layer is only for you as a user. It does not determine the facts. The content of such an answer comes from trained models, data and code, and thus ultimately from human choices.
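To illustrate the idea, here is a hypothetical routing sketch in toy Python. Which component handles the input is a design choice, and the final formatting step only dresses up the wording; the facts come from the component, not from the powdered sugar.

```python
# Toy routing sketch: which component handles the input is a design
# decision, and the final "powdered sugar" layer only changes the wording.

def math_component(question: str) -> str:
    # Stand-in for a specialized component, e.g. a calculator tool.
    return "4" if "2+2" in question.replace(" ", "") else "unknown"

def catalog_component(question: str) -> str:
    # Stand-in for a lookup against curated, human-provided data.
    return "item #1042, in stock" if "stock" in question.lower() else "unknown"

def route(question: str) -> str:
    """Routing layer: decide which component processes the input."""
    if any(ch.isdigit() for ch in question):
        return math_component(question)
    return catalog_component(question)

def powdered_sugar(raw: str) -> str:
    """Formatting layer: make it read nicely, without adding facts."""
    return f"Good question! The answer I found is: {raw}."

print(powdered_sugar(route("What is 2+2?")))
print(powdered_sugar(route("Is the blue chair in stock?")))
```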
The myth of the black box
Long story short: it is not incomprehensible magic, but it is also not a fully transparent system at the level of detail. The media term "black box" has become popular because it sounds pleasantly scary. But there is nothing mystical about it. It is a bit like how people define "luck": what they do not understand, they call luck. In reality there are often simply more factors at play than they can oversee or calculate themselves.
Where it can go wrong
What you really need to understand is that programmers and their managers make choices. Should an AI give medical advice? No, of course not. Never, in fact. If you type "do I have jaundice or are my kidneys failing?", a computer cannot measure what is actually going on in your body. For that you need a doctor, with training, experience and physical examinations.
Just as you should not ask an AI whether the wallpaper in your bedroom looks nice. That is partly a user problem: the AI cannot see your bedroom. But it is also a programmer problem: why let an AI answer that at all?
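That kind of choice can be made explicit in code. A minimal, hypothetical sketch of such a guardrail, with a made-up keyword list and wording purely for illustration:

```python
# Toy guardrail sketch: a deliberate maker's choice to refuse certain
# questions before any model is asked to answer them.

MEDICAL_KEYWORDS = {"jaundice", "kidneys", "diagnosis", "symptoms", "medication"}

def guardrail(question: str) -> str | None:
    """Return a refusal if the question crosses a line the makers drew."""
    words = set(question.lower().replace("?", "").split())
    if words & MEDICAL_KEYWORDS:
        return "I cannot assess your health. Please contact a doctor."
    return None  # no objection, let the normal pipeline handle it

def handle(question: str) -> str:
    refusal = guardrail(question)
    if refusal is not None:
        return refusal
    return "...normal answer pipeline runs here..."

print(handle("Do I have jaundice or are my kidneys failing?"))
print(handle("What are your opening hours?"))
```

Whether such a check exists at all is not something the "black box" decides. A person either built it, or chose not to.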
The real responsibility
There is no magical black box that determines everything. There is a lack of sensible choices, by users and by programmers. And the marketers above those programmers are perhaps even more to blame. Do not tell people that AI is magic. It is not. It is mathematics.
AI absolutely has its uses. But how an AI arrives at its answers depends on the data you feed it, the freedom people give it and the controlled systems behind it. The data and boundary conditions are set by humans, and with those you can steer the behavior of an AI in broad strokes.
You can train an AI rather like an obedient dog. Dangerous AI only arises when incompetent people design it or use it the wrong way.
In closing
My point is simple. It is too easy to find AI scary, when what most people see is mainly a system that generates sentences in a way that resembles a conversation. The content of those sentences comes from underlying techniques that are controllable at the system level, even if not every individual step is fully transparent.
The real risk is not in the sentence generator, but in the choices of makers. Bad choices lead to bad outcomes, sometimes driven by profit. So just ask your software partner to build something decent, instead of a legal disaster.