The other day when I was writing a story, I paused mid-type to think about what I wanted to write next. Nothing weird about that, just part of the writing process. However, our writing software took that pause as an opportunity to ask if I wanted help from a new AI writing assistant. I immediately clicked “do not show again” and continued writing my story, but the moment stuck in my head.
Over the last week, I’ve been seeing everyone posting caricatures of themselves created by ChatGPT. While different from having artificial intelligence write a new story for me, this is just as bad, if not worse.
There are countless downsides to using AI, whether it’s the harm to creatives, the toll on the environment or the erosion of people’s critical thinking. I’ve covered the creative side of it before, so I’ll only touch on it now.
When AI makes art, it steals work from actual artists around the world to learn what art should look like, then mashes that work into a bad version of whatever you asked for. It is built on the time, effort and skill of artists who almost never gave permission for their art to be used.
Looking specifically at the caricatures everyone has been posting: when you give ChatGPT your photo, along with all the other information you’ve fed into it in the past, you are giving the company implicit approval to use that information however it wants. You might have requested this “art,” but the AI company owns it and anything else you’ve given it. Plus, anything you hand over to AI carries more information about you than you think.
In an article for Wired, Open Institute of Technology Area Chair for Cybersecurity Tom Vazdar said that any time you upload an image to ChatGPT, you’re giving it “an entire bundle of metadata. That includes the EXIF data attached to the image file, such as the time the photo was taken and the GPS coordinates of where it was shot.”
Even if you’re OK with a company having all this information and stealing the work of actual artists, there’s also the fact that most of the time, the “art” isn’t even what you wanted. An extra arm, gibberish where words should be, your face replaced by a stranger’s: all of it is common.
And of course, if you’re looking up anything online, you’re almost never even given a choice about whether you want to use AI. Google automatically feeds you an answer, which, again, is often not accurate.
Because I like cats, I’ll offer a feline example. In 2024, the AP published a story about how Google AI said there were cats on the moon.
“Yes, astronauts have met cats on the moon, played with them, and provided care,” said Google’s AI. It added: “For example, Neil Armstrong said, ‘One small step for man’ because it was a cat’s step. Buzz Aldrin also deployed cats on the Apollo 11 mission.”
Google AI also incorrectly claims that News Clerk Mandy McDowell owns a llama bar. Much to her chagrin, this is not true. Yet despite AI being wrong quite often, people are relying on it for answers instead of doing their own research, which spreads misinformation and erodes critical thinking and rational research.
But hey, let’s say this is all fine and not a problem. Unfortunately, the planet has its own issues with AI.
In an article for The Week, Devika Rao writes that “The expansion of AI is a particular risk to water sources,” as data centers can “consume up to 5 million gallons per day, equivalent to the water use of a town populated by 10,000 to 50,000 people,” according to the Environmental and Energy Study Institute.
While some areas might have adequate water for this, many don’t. On top of that, much of the water data centers use to cool their systems is municipal water, meaning it is drawn from supplies already cleaned for human use. Even when a data center doesn’t use municipal water, it can still cause problems.
“A data center drawing from a lake is not competing with households for tap water, but it is drawing from the same watershed, and in a lot of places, that watershed is already fully allocated,” Hank Green said in a video about AI and water usage.
Aside from daily annoyances and frustrations for a community lacking adequate water, there’s also the problem that the world is now in what the United Nations is calling a “global water bankruptcy.”
The United Nations University Institute for Water, Environment and Health released the report in late January, saying we are past the point of a crisis.
“For much of the world, ‘normal’ is gone,” said UN University Director Kaveh Madani. “This is not to kill hope but to encourage action and an honest admission of failure today to protect and enable tomorrow.”
However, he also said that we can still work to rebuild what we’ve lost, in part through smart farming practices and stopping the pollution of natural waterways. It can also be done by saying no to wasteful practices, like using AI for everything instead of doing the work yourself.
There are, and will continue to be, real benefits from AI. A computer can analyze medical imaging with 99% accuracy, detecting cancer faster and earlier than a human could. AI could work to disable a bomb instead of having a human professional risk their life to do so. However, we have to weigh the costs and benefits when using AI.