
Current Artificial Intelligence is Not What You Think It Is


There are a lot of people with big feelings about artificial intelligence (AI) due to the recent launches of products such as OpenAI’s ChatGPT and DALL-E and Google’s Bard, and for a very good reason (which I’ll get into another time).

The Myth

TL;DR: artificial intelligence can take a short prompt and produce new work.

Some people believe that ChatGPT and Bard can accept a short prompt and then generate paragraphs of new content, that self-driving vehicles are fully autonomous and human drivers can take their hands off the steering wheel, or that AI-generated images are brand-new works of art.

The (Hyped) Concern

TL;DR: this will replace humans! People will lose their jobs to robots!

We’ve already seen fewer people employed at fast food restaurants due to online ordering and digital kiosks installed at locations. We’ve also seen fewer people working the cash registers at grocery stores as they’re replaced with self-service checkouts. It’s only a matter of time before we’re replacing writers and programmers too!

The Reality

TL;DR: Artificial intelligence, as companies are using the term today, is false advertising. It’s a buzzword intended to confuse you, to intentionally make you picture the sci-fi version of AI (like Skynet from the 1984 movie The Terminator, or VIKI, Sonny, and the other robots from the 2004 movie I, Robot) rather than what it really is: a pattern imitation tool.

What AI Can Do

These tools are great if they’re used correctly: processing input by asking them to identify a specific pattern and return it in a specific way. This makes them well suited to summarizing large amounts of text, combining content, and optimizing code. AI, as we currently know it, is advanced machine learning, and it can be a great tool for accessibility: automatically condensing articles into key points for people who find walls of text challenging to read, converting audio files into written transcripts (which still need editing, because they’re not always accurate), or generating image descriptions for people who rely on screen readers or simply can’t view the image.
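The “identify a pattern, return it in a specific way” idea can be illustrated with a toy extractive summarizer. This is nothing like a real language model (no neural network, no training), just a made-up sketch of the same spirit: it scores each sentence by how frequent its words are across the whole text and returns the top-scoring sentences.

```python
from collections import Counter
import re

def summarize(text, num_sentences=2):
    """Naive extractive summarizer: score each sentence by the total
    frequency of its words in the whole text, then return the
    top-scoring sentences in their original order."""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    freq = Counter(re.findall(r'[a-z]+', text.lower()))

    def score(sentence):
        return sum(freq[w] for w in re.findall(r'[a-z]+', sentence.lower()))

    # Indices of the highest-scoring sentences, then restore text order.
    ranked = sorted(range(len(sentences)), key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:num_sentences])
    return ' '.join(sentences[i] for i in keep)
```

The point of the sketch: the program never understands the text. It just detects a statistical pattern (word frequency) and reshapes the input, which is essentially what these tools do, at vastly larger scale.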

What AI Can’t Do

At best, it produces funny and silly stuff for entertainment. At worst, it gets people killed. People have died because drivers trusted Tesla’s autopilot beyond its capabilities. AI is also getting people (either knowingly or accidentally) to violate copyright laws and circulate misinformation. It’s also giving spammers a better ROI.

Point being, this tool cannot create anything new. It also cannot tell the difference between fact and opinion. It doesn’t have lived experience like a real human, so that context is completely lost.

The (Real) Concern

Artists, writers, and others are suing because companies used their work to train AI models without consent and without payment. Because these systems are imitation tools, that work is effectively being modified without permission and marketed under someone else’s name.

While it can be helpful in digital marketing, say, writing an SEO description based on an article’s content, it can produce inaccurate and even made-up information if asked to write a whole article based on that description. It’s the same as if you just used your phone’s text predictor when texting: a nonsensical word salad.
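The phone text-predictor analogy can be made concrete with a toy bigram model, a deliberately tiny, made-up sketch (real language models are far more sophisticated, but the principle is the same): every word it “writes” is only ever a word that followed the previous word somewhere in its training text. It recombines; it never invents.

```python
import random
from collections import defaultdict

def build_model(corpus):
    """Map each word to the list of words that follow it in the corpus."""
    words = corpus.lower().split()
    model = defaultdict(list)
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

def generate(model, start, length=8, seed=0):
    """Walk the bigram table from a starting word. Every step echoes a
    pattern from the training text; nothing here is new, only recombined."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break  # the training text never continued past this word
        out.append(rng.choice(followers))
    return ' '.join(out)
```

Run it on a few sentences and you get plausible-sounding fragments with no regard for truth or meaning, which is exactly the word-salad failure mode described above.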

The Conclusion

Companies make money from hype, so they’re either selling you on the idea of fear (the hyped concern) or a sci-fi fantasy. Technology is not there yet, and apparently, neither are ethics.

Sources

Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, and Shmargaret Shmitchell, “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?” (Mar. 1, 2021). In Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’21). Association for Computing Machinery.

Jon Stokes, “ChatGPT Explained: A Normie’s Guide To How It Works” (Mar. 1, 2023). JonStokes.com.

Stephen Wolfram, “What Is ChatGPT Doing… and Why Does It Work?” (Feb. 14, 2023). Stephen Wolfram, LLC.

Dmitri Brereton, “Bing AI Can’t Be Trusted” (Feb. 13, 2023). DKB Blog.

“Don’t believe ChatGPT – we do NOT offer a ‘phone lookup’ service” (Feb. 23, 2023). OpenCage.

Emily Olson, “Google shares drop $100 billion after its new AI chatbot makes a mistake” (Feb. 9, 2023). NPR.

Julia Love and Davey Alba, “Google’s Plan to Catch ChatGPT Is to Stuff AI Into Everything” (Mar. 8, 2023). Bloomberg L.P.

Alex Weprin, “Want to Impress Wall Street? Just Add Some AI” (Mar. 8, 2023). The Hollywood Reporter.

Amanda Hari, “Surveillance video shows self-driving Tesla crash on Bay Bridge” (Jan. 12, 2023). KRON 4.

“Volvo Auto-brake Mishap – Part II” (Sept. 13, 2016). GringoVideos.

“CNN tests a ‘full self-driving’ Tesla” (Nov. 18, 2021). CNN.

“Tesla crashes into child dummy as auto-brake test fails” (Aug. 11, 2022). The Independent.

Max Chafkin, “Even After $100 Billion, Self-Driving Cars Are Going Nowhere” (Oct. 6, 2022). Bloomberg L.P.


Hyunjoo Jin, “Tesla video promoting self-driving was staged, engineer testifies” (Jan. 18, 2023). Reuters.

Orlando Mayorquin, “Elon Musk’s Tesla accused of fraud, false advertising of ‘autopilot’ technology in lawsuit” (Sept. 15, 2022). USA Today.

“11 more people killed in crashes involving automated-tech vehicles” (Oct. 19, 2022). The Associated Press.

Richard Lawler, “Tesla recalls 362,758 vehicles equipped with Full Self-Driving beta for ‘crash risk’” (Feb. 16, 2023). The Verge.

Kevin Roose, “A Conversation With Bing’s Chatbot Left Me Deeply Unsettled” (Feb. 17, 2023). The New York Times.

Cade Metz and Daisuke Wakabayashi, “Google Researcher Says She Was Fired Over Paper Highlighting Bias in A.I.” (Dec. 3, 2020). The New York Times.

Billy Perrigo, “OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic” (Jan. 18, 2023). TIME USA, LLC.

Irina Ivanova, “Artists sue AI company for billions, alleging “parasite” app used their work for free” (Jan. 20, 2023). CBS News.

Shubham Agarwal, “Audiobook Narrators Fear Apple Used Their Voices to Train AI” (Feb. 14, 2023). Wired.

Richard Waters, “Man beats machine at Go in human victory over AI” (Feb. 19, 2023). Ars Technica.

Katherine Tangalakis-Lippert, “Marines fooled a DARPA robot by hiding in a cardboard box while giggling and pretending to be trees” (Jan. 29, 2023). Insider.

