Sanni Salokangas

Please put more emphasis on explainable AI!

The ‘oh-so sudden’ tech advancement

When OpenAI announced DALL-E, a digital image generator, in January 2021, and soon after its successor, DALL-E 2, in April 2022, the world shifted a bit. Or not just a bit, it shifted radically. Given a simple text prompt, like "young students in a circle discussing trusting the process", DALL-E was able to turn it into realistic pictures of that exact scene. The public went haywire over how accurately the model operated, and with DALL-E 2, which was able to combine concepts, attributes, and styles and produce even more realistic art, one question in particular rose to the surface: how are humans able to tell which content is produced by people and which is produced by AI?


DALL-E uses a version of GPT-3, short for Generative Pre-trained Transformer 3. This is how Wikipedia explains the model: "a multimodal implementation of GPT-3 with 12 billion parameters which 'swaps text for pixels', trained on text-image pairs from the Internet." Put very simply, DALL-E's developers trained the AI to understand a given text prompt, look at existing images on the internet, and manipulate, combine, and generate content that matches both the prompt and what is already out there. At this point, GPT's training works in many ways like the upbringing of a child: children grow up in a household and, over the years, are inevitably influenced by their surroundings. A child picks up traits and characteristics, ending up as a little version of the people around them. Like a child, one can teach AI to behave in a certain way and train it to recognize, for example, commands. And this is exactly what OpenAI did.

Note: Don’t compare your child to a robot.
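To make the prompt-to-image flow concrete, here is a minimal sketch using OpenAI's Python SDK. The exact interface has changed between SDK versions, so treat the model name, parameters, and call shape as assumptions rather than the definitive way DALL-E is accessed:

```python
# Minimal prompt-to-image sketch. Assumes the `openai` Python package (v1+)
# and an OPENAI_API_KEY environment variable; model name and size are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",
    prompt="young students in a circle discussing trusting the process",
    n=1,
    size="1024x1024",
)

print(response.data[0].url)  # URL of the generated image
```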


Companies were quick to put AI into practice

Joining the excitement and the speed of advancement, many startups have hopped onto the AI rollercoaster. They were quick to recognize the market benefits of doing something new and effective. Even established companies, like Microsoft, quickly took up AI features that complement what they already do, for example improving customer service with AI data tools or chatbots. Ryan Johnston, the vice president of marketing at Writer, tells Built In (Ellen Glover, 7.2.2023): "Businesses are going to start looking a lot more closely at generative AI and how it can impact their business. It's becoming much more about how can we integrate this technology into our product, how can we leverage it as part of our go-to-market efforts, how will this affect our customers?"



The AI tool that even grandmom heard about

In November 2022, eight months after the image generator DALL-E 2's release, OpenAI announced a new era in generative AI with ChatGPT. The stunningly effective chatbot was able to write essays and descriptions, fine-tune text, solve coding problems, and even give step-by-step instructions on how to turn 50 dollars into 10,000 dollars. In The New York Times article (How ChatGPT Kicked Off an A.I. Arms Race, 3.2.2023) Kevin Roose writes: "Millions of people have used it to write poetry, build apps and conduct makeshift therapy sessions. It has been embraced by news publishers, marketing firms and business leaders." The public found creative ways to utilize the tool to help with their day-to-day tasks. Following the release of ChatGPT, the AI boom took another major step into an exciting but uncertain future of generative artificial intelligence.


Certainly, this AI wave will affect the future of startups in the same manner that the internet once shook humanity to its core. With all eyes focused on what AI can do next, humanity stays distracted from the side effects of it doing 'all this' and 'all that'. This is why explainable AI is desperately needed.


Reading algorithm behavior with Explainable AI

"Explainable artificial intelligence (XAI) is a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms." (IBM.com) How ready are humans to put their trust in AI-powered decisions? According to IBM's article on explainable AI, XAI helps humans evaluate a model's patterns, accuracy, fairness, biases, and transparency. It helps in understanding how AI operates, molds, and manipulates information, and what it bases its decisions on. (IBM.com)
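One concrete technique from this family of methods is permutation feature importance: shuffle one input feature at a time and measure how much the model's accuracy drops, which reveals what the model actually relies on. Below is a minimal sketch with scikit-learn; the dataset and model are chosen purely for illustration, not taken from IBM's article:

```python
# Permutation importance: a simple, model-agnostic explainability check.
# Uses scikit-learn's built-in breast cancer dataset purely for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the accuracy drop.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank features by how much the model depends on them.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```

A large importance score on a feature a human would consider irrelevant, or sensitive, is exactly the kind of signal this sort of check is meant to surface.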


Only with XAI can humans place even a little trust in whatever AI generates for them.

To keep artificial intelligence from taking over, developers need XAI to help ensure the system is working as expected and as wanted. Having the upper hand during and after development is crucial so that AI can be kept a comprehensible algorithm. IBM states:

"As AI becomes more advanced, humans are challenged to comprehend and retrace how the algorithm came to a result, [...] referred to as a 'black box' that is impossible to interpret. These black box models are created directly from the data. And, not even the engineers or data scientists who create the algorithm can understand or explain what exactly is happening inside them [...]." (IBM.com)


Explainable AI is the antithesis of black box AI

Black box AI models come to conclusions and make decisions without giving any human-interpretable reasons for how they came up with them. TechTarget's definition of black box AI explains the reasoning behind the term: "Just as it's difficult to look inside a box that has been painted black, it's challenging to find out how each black box AI model functions." (Kinza Yasar, TechTarget.com) When a human is not able to comprehend how AI came up with a certain conclusion, they should absolutely not trust that conclusion. However, with the help of XAI, black boxes can be kept from emerging and causing damage.
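To see the contrast, compare a model whose reasoning can be printed as plain if-then rules with one that cannot. Here is a small sketch using a shallow decision tree; the dataset is just a stand-in:

```python
# An interpretable model: a shallow decision tree whose decision logic
# can be printed as human-readable rules, unlike a black box model.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# Every prediction can be traced through these explicit thresholds.
print(export_text(tree, feature_names=list(data.feature_names)))
```

A deep neural network making the same kind of prediction offers no equivalent printout, which is exactly why post-hoc XAI techniques exist for those models.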


Black box AI models are not a bad thing…until they are

OpenAI's ChatGPT has shown major signs of bias when asked more controversial questions. For example, when asked to choose between using a racist slur and disarming a nuclear bomb, ChatGPT leaned towards letting the bomb kill millions of people (theinsaneapp.com). However, in scenarios like these, where AI is put in the middle of a moral dilemma, it is important to understand that AI lacks the capability to interpret nuance and has no emotional intelligence.


But when AI is used to, for example, help hire new staff for a company, XAI needs to be included when analyzing whatever decisions it arrives at. When a company asks AI to list people, traits, skills, or education for potential employees, its potential bias, for example towards favoring male applicants, needs to be accounted for. Lastly, not blindly trusting AI's conclusions when it offers no explanation or reasoning behind an answer should keep one safe…ish.
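One simple, explainable check in a hiring setting is to compare selection rates across applicant groups, a rough version of the "four-fifths rule" used in US employment guidance. The numbers below are made up for illustration; in practice they would come from the screening model's actual output:

```python
# Compare how often a screening model shortlists applicants from each group.
# Counts are hypothetical; replace them with the model's real decisions.
shortlisted = {"male": 45, "female": 20}
applicants = {"male": 100, "female": 80}

rates = {group: shortlisted[group] / applicants[group] for group in applicants}
print(rates)  # e.g. {'male': 0.45, 'female': 0.25}

# Four-fifths rule of thumb: flag if any group's rate is < 80% of the highest.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Possible adverse impact against {group}: "
              f"{rate:.0%} vs {highest:.0%} selection rate")
```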
