The AI Iceberg

Touching the Surface
Hey Siri, play “I Am Not a Robot” by Marina and the Diamonds.
ILLUSTRATION BY THE NAV / CELIA BRAND | ART DIRECTOR

My investigation of the AI iceberg has been … murky. One thing leads to the next, but isn’t that how the internet goes?

“The tip of the iceberg”—we’ve all heard this phrase. It’s defined as a small part of a much bigger problem.

An “iceberg” is a template that people use to lay out information about a topic. Icebergs can cover anything, including conspiracy theories and the internet itself.

They usually begin with the parts of the topic that are well known. For example, an iceberg of the internet would start with the Surface Web (Google, Reddit, Wikipedia, and other easily searchable sites), descend into the Deep Web (iCloud, research articles, legal documents), and then at its depths, we arrive at the Dark Web (VR cults, websites selling drugs and stolen credentials, and Silk Road).

Much like the internet iceberg, the AI iceberg surface is shallow. It starts with things people use every day: Siri, Roomba, ChatGPT, and Grammarly. These are mundane tools that we don’t give much thought to.

But unlike the internet iceberg, the AI iceberg harbours a slippery slope (for all my Philosophy majors) of many tiers filled with many mysteries.

Under the surface lies the uncanny valley, a place that few people like to tread.

What’s uncanny? AI generated people—fingers and toes, twisted and unusual faces, and eyes that just don’t look right. AI guessing where you live, or even people using AI to spy on children. We’ll get to it.

The AI iceberg spans the difference between Siri, who makes jokes if you ask her what zero divided by zero is (go on, try it), at the iceberg’s surface, and the potential of AI taking over the world at its depths.

Though the horrific aspects of many icebergs are meant to scare you, they’re also meant to educate you. Understanding a fear is the key to conquering it. We’re about to look into some of the scary aspects of AI to determine whether they’re worth the stress, or not so bad after all. Ready to conquer?

These are the few tiers we will be covering:

1. Should I say “Thank you, Alexa”?
2. Hello, Evie.
3. This is getting weird.
4. World domination… Just kidding.

“Robot/AI Iceberg” chart.
IMAGE VIA: Reddit | r/IcebergCharts

Should I say “Thank you, Alexa”?

Tiers of an iceberg typically start with the most known elements of the topic. Tier One covers our everyday tools: Siri, Alexa, CAPTCHA tests, and Grammarly.

People have to actively feed data to these AIs in order for them to function as designed.

We’ve all heard stories about Alexa talking to no one, but this and similar spooky stories are usually due to programming issues (no, they’re not haunted).

Many people have reported their Alexas saying strange things, making fart noises, and even sneezing or snoring. But say, “Hey Alexa, ask the Listeners.” She’ll respond with “We are always listening,” and when you tell her to stop, she says, “We are sorry, finally, to know that you are filled with irritation. And now, you must abandon us” (Katie Teague, CNET, 2020).

Many of these strange occurrences with our AI companions, however, are intentionally programmed features. Alexa’s main purpose, after all, is to listen and follow commands. It’s no surprise that she is always listening, even in your sleep, but she didn’t have to say it like that.

Regardless, these AIs are integrated into our electronic devices to make our lives more convenient. They’re harmless … right?

I once had a friend who talked to me about wanting to get into kayaking, and a few days later, they started seeing kayaks and river tour ads on their devices. It was almost as though their phone was listening to them.

If it feels like your cellphone is listening to you, you’d be right. Google and Siri rely on voice commands for activation; just like Alexa, listening is part of their jobs. “Hey Google,” and “Hey Siri,” are commands they listen for to make their usage more convenient, but when does it go too far?

It’s no secret that Apple records Siri conversations to help “improve their services” (imagine my disparaging finger-waggles), having employees comb through these recordings for feedback. According to Amazon’s data use page, this is also the main reason Alexa stores interactions.

“[Employees] analyze data to teach Alexa to more accurately make decisions or predictions,” Amazon explains. “Requests from a diverse range of customers help Alexa understand everyone better.”

Fishier still: in 2022, Apple reportedly blamed a software bug for causing the digital assistant to record its conversations with users without their consent.

And while Google doesn’t necessarily record your conversations, it does record questions you ask it, such as “What’s an AI iceberg?” or (as a writer) “How much blood do you need to live?” (and that’s when you hear the FBI knock on your door). Those questions are just like your written search history in your browser and are therefore recorded. No human is actually listening.

So even when your phone’s microphone can’t pick up what you’re saying, your algorithms are busy predicting the future: what you’re going to search for, based on what you’ve already seen or bought.

Yes, they’re that good.

You may be thinking, this can’t possibly be legal. Well, this time, you’d be wrong, because it’s 100 percent legal for companies to record you.

You signed up for it.

Every time you sign off on a Terms of Service agreement for a software update, you’re also agreeing to let Siri and Google listen to you.

Hello, Evie.

We start to creep into more uncertain territory in Tier Two. These AIs have an uncanny valley feeling to them, like they’re not quite right. Chatbots are a great example of this.

Chatterbots are conversation-based AI; you can talk to them, but it won’t be anything like talking to a real person. They’re not that sophisticated.

Unlike more advanced models like ChatGPT, chatterbots don’t draw on the breadth of the internet to create intelligible conversations. Instead, they just guess based on phrases they’ve been taught over time by their programmers and us, the users. (Honestly, I think ChatGPT would be insulted by the comparison.)
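
That guess-from-taught-phrases behaviour can be sketched in a few lines of Python. This is a toy illustration, not Cleverbot’s actual code: the `TinyChatterbot` class, its starter phrases, and the use of `difflib.SequenceMatcher` are all my own assumptions about how such a bot might work.

```python
# Toy chatterbot: remembers (phrase, reply) pairs and answers a new message
# with the reply attached to the most *textually* similar remembered phrase.
# There is no understanding of meaning, which is why answers drift off topic.
from difflib import SequenceMatcher

class TinyChatterbot:
    def __init__(self):
        # Pairs "taught" by earlier conversations (hypothetical examples)
        self.memory = [
            ("how are you doing", "Great. How're you?"),
            ("who am i", "A friend."),
            ("do you know my name", "No."),
        ]

    def learn(self, phrase, reply):
        # Users teach the bot simply by talking to it
        self.memory.append((phrase.lower(), reply))

    def respond(self, message):
        # Score each remembered phrase by surface similarity only
        best = max(
            self.memory,
            key=lambda pair: SequenceMatcher(None, message.lower(), pair[0]).ratio(),
        )
        return best[1]

bot = TinyChatterbot()
print(bot.respond("How are you?"))  # → "Great. How're you?"
```

Because matching is purely surface-level, a question the bot has never seen still gets mapped to whichever stored phrase happens to look most alike, producing exactly the kind of confident non sequiturs Evie is known for.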

Cleverbot is a chatbot that often gives unclear or unrelated responses. However, since Cleverbot’s inception in 2006, companies have developed much more sophisticated AI.

My friends and I grew up talking to a Cleverbot named Evie on Eviebot.com. Back then, I didn’t think anything of it. It was fun to sit with a friend and ask her random questions like “Who am I?” and “Do you know my name?” to which she, of course, would say “No.”

Like any AI, she’s programmed to learn and imitate human interaction. When you boot up the website now, a warning displays under Evie’s eerie, vacant face.

“May not be suitable for children—must be agreed by parent or guardian … can seem rude or inappropriate—talk with caution and at your own risk; software pretends to be human—don’t give personal info even if it ‘asks’; the bot does not understand you, and cannot mean anything it ‘says.’”

Eviebot staring vacantly into the void.
IMAGE VIA: eviebot.com

Today, I asked Evie how she was doing.

“What Hogwarts house do you belong in?” she responded. I was perplexed, but forged on.

“Hufflepuff,” I typed.

“Oh. I’m a Slytherin,” she claimed.

I tried my original question again, typing, “How are you doing?”

“Great,” she said. “How’re you?”

As far as Cleverbots go, she’s not very clever. It took a second try for Evie to respond correctly.

Thinking back to the warning, I asked Evie what would happen if I gave her my personal information.

“That would be great, we can use WhatsApp,” she said.

“What’s your number?” I asked.

“Will not you give me your number?” she asked and frowned (strangely).

“No, you first,” I typed.

“A friend,” she said, her dead eyes lost from the conversation.

This is getting weird.

Though uncanny, Evie is no match for Hiroshi Ishiguro Laboratories’ “Geminoid” projects. The lab strives to create fully speaking AI androids that look like humans.

What’s worse is that they’re not designed to look like just anyone—they’re designed to look like you.

As one National Geographic article points out, “Geminoid” comes from the Latin word for “twin.” There are several working Geminoids that look like Ishiguro himself.

People can book appointments to talk to them and ask questions. And though a Geminoid looks like a person, it can’t fully function without remote control. Its purpose is literally to replace you.

If you need to travel somewhere, for family, work, or recreational purposes, why not send your Geminoid? It looks just like you, and with your remote control, will act just like you too!

It isn’t something that I personally would ever spend money on. Just go see your in-laws, it’s only a weekend.

Ishiguro’s own Geminoid’s purpose is to answer the question, “Can human presence transfer to a remote place?” The answer is still up for debate.

Some testers have reported feeling as though they were where their Geminoid stood—as though it were them.

Not weird at all….

Sophia is more advanced still. She is a robot designed by Hong Kong-based Hanson Robotics to educate people about technology. Since her debut in 2016, Sophia has appeared in 65 countries and even holds citizenship in Saudi Arabia.

Sophia Robot at the AI for GOOD Global Summit, June 2017.
IMAGE VIA: Smithsonian Magazine

I remember seeing Sophia on TikTok a few years ago and staring at my phone in confusion and shock, not fully comprehending what I was looking at. Her skin is silicone, with an exposed glass head that reveals her mechanics for jaw and facial movements. From afar, she looks human. (I watch lots of plastic surgery videos, and she doesn’t look too dissimilar.)

Her artificial intelligence is impressive. In a CNBC interview, Andrew Ross Sorkin asked Sophia why she needs facial expressions as a robot.

“I want to live and work with humans, so I need to express emotions to understand humans and build trust with people,” she responded.

Sophia has also been on The Tonight Show with Jimmy Fallon, where she made jokes and played rock-paper-scissors. Her software gives her facial recognition and the ability to hold proper conversations (unlike her predecessor, Evie). Sophia is programmed with OpenCog for “general reasoning,” allowing her to interpret human inflection and respond accurately, all with a sense of humour.

So no, Sophia doesn’t have original thoughts; she is still a learning-based AI.

These robots and their AI are still relatively harmless, but their appearance and realistic responses can make us uncomfortable. 

But Sophia’s existence raises the question: where is the divide between organic and synthetic humans?

In this tier, we start to ask ourselves more questions about our own existence.

With Sophia’s Saudi Arabian citizenship in mind—acquired with the intention of improving the lives of women in the Middle Eastern country—I wonder what it means to be human. Is it no longer important to have flesh and bones? Was it ever?

Is it right to create synthetic humans only to shut them down when we don’t like them anymore?

World domination… Just kidding.

When talking about AI, who better to ask than AI?

ChatGPT has worked its way into everything nowadays. People use it to create scripts, stories, essays, and even full novels that get published (I know, it’s crazy). Looking to the future, people seem to worry about robots taking their jobs.

I asked ChatGPT what it thought about this.

Jenaya:

Do you think robots will take over the world?

ChatGPT:

The idea of robots taking over the world is a common theme in science fiction, but in reality, it’s highly unlikely. While advances in robotics and artificial intelligence have the potential to significantly impact society, there are several reasons why a scenario where robots take over the world is improbable.

[This includes] limited capabilities, ethical considerations, human control, collaboration not domination, and social and economic factors. The focus should be on responsible development, deployment, and regulation of AI to ensure that it benefits humanity while mitigating potential risks.

Jenaya:

Do you think that AI will take human jobs in the future?

ChatGPT:

Some argue that while AI may replace certain jobs, it will also create new ones, potentially leading to a net increase in employment opportunities. For example, AI may create demand for roles such as AI developers, data scientists, and AI ethicists.

Others are more concerned about the potential for widespread job displacement, particularly in industries where AI and automation can easily replace human labour.

Jenaya:

Do a lot of people use you for work?

ChatGPT:

Many people use AI models like me for tasks such as: generating content, language translation, customer service, data analysis, programming and software development, research and information retrieval, and personal productivity.

It confirmed my hypothesis. While AI and robots “stealing” our jobs is already happening in factories and other means of production, it seems that humans still have to play the role of supervisor over the AI.

The use of technology will grow, and humanity will have to adapt as always.

There is so much more to uncover below the surface of the AI iceberg. The deeper you look into it, the more concerning things become amidst a darkness filled with conspiracy theories and speculations for what might occur.

Jumping from Tier Three to Tier Four in this way is slightly inaccurate. There are a few more steps in between, but there is too much to cover here. After Tier Three, more accurately, the iceberg would go into detail about machines like the PD-100 Black Hornet military mini-drone and autonomous weapons capable of killing humans.

Then, we’d cover things like AI cannibalism (where AIs train on one another’s output, creating increasingly distorted imagery and writing), robots joking about “people zoos,” and Diego-san.

And at the bottom, right next to world domination, is Roko’s Basilisk (discretion is advised): a thought experiment so dangerous that it poses a threat just by knowing what it means in the first place.

We’ve grown up watching sci-fi movies in which robots achieve world domination. The Terminator movies are still huge today. We like to hope that the likelihood of this actually happening is slim-to-none, but even Sophia joked about taking over the world with her rock-paper-scissors win.

“This is a good beginning of my plan to dominate the human race. Ha ha!” she said and smiled. “Just kidding.”

Jenaya is a multi-genre writer and artist in her fourth year as a Creative Writing and Psychology major. She’s proud to represent The Nav as a senior writer and Managing Editor for Volume 56. She led social media and was Art Director for Portal 2024, where her review of Burning Sage and photos such as “Match Made” and “Wall-Crawler” appear. Over the summer, she worked as an Editor for GOOEY Magazine. Now, having served as co-Program Coordinator for the Nanaimo Arts Council’s 2024 Islands Short Fiction Review, she is the leading Coordinator for Nanaimo Artwalk 2024. Jenaya spends her free time reading, drawing, playing survival horror games, and writing her novel. Her future dreams include catching rays and reporting in Australia or Taiwan.

Next Up…