Artificially Unknown

Artificial Intelligence, better known as AI, has seemed to lie on the edge of fantasy and reality for quite some time. It is defined as the simulation of human intelligence by machines, particularly computer systems. With the invention of the computer in the first half of the 20th century, artificial intelligence became a real possibility. Computer pioneer Alan Turing did some of the most influential early work on machine intelligence, devising the now-famous Turing test to probe whether a machine could convincingly imitate human conversation. John McCarthy, the ‘father of artificial intelligence,’ coined the term in 1955 and helped organize the 1956 Dartmouth workshop that established AI as a field of research. The concept alone has been incorporated into many fictional works, from early examples like Samuel Butler’s Victorian novel Erewhon to sci-fi thrillers like 2001: A Space Odyssey. AI is often depicted as rivaling, or even surpassing, human intelligence. However, physics teacher Matthew Gartman contests this portrayal and explains his point of view.

“AI as it stands is not actually that smart. The way it works is by running millions of trials until something works. For example, if you have a chess AI, when it first starts it has no idea how to play chess. It just does things randomly, until eventually good moves are created. A common misconception is that it is a fully computational person-like intelligence. However, they are currently very limited where they stand today. I can at least say that I would not want AI in charge of weapons, not because of a Terminator scenario, but because it can have errors just like humans. Ultimately, I would like to see AI as more of an assistive tool,” said Gartman.

AI has been actively used in workplaces since the early 21st century. Since then, new technological advancements within the field have occurred frequently enough to raise concerns that they will reduce job opportunities. According to Zippia, at least 25% of US jobs are in danger due to automation, with occupations like clerks and technicians at particularly high risk. Although Senior Zachary Rhodes is open to the rise of AI, being a frequent user himself, he believes that a boundary must be drawn when it comes to automated jobs.

Image courtesy of Starryai.com, from Allie

“I use quite a bit of AI software. I believe it is beneficial. People think it will do everything for us, but that is the point, to make things nice and simple so you can use more brain power for other tasks. In the future they will most likely perform jobs more efficiently than humans can. However, I do not believe that AI should cross the line of judging morality and law like in a courtroom. It does not have the emotional weight to determine ‘guilty’ or ‘not guilty,’” stated Rhodes.

The journey to recreate humanity in technology is a task developers have struggled with for years. While the first ‘emotional’ robot, Kismet, had its flaws, it could successfully hold social and sentimental interactions with humans in 1998. A more recent creation, ‘Sophia,’ is able to imitate human facial expressions, language, and speech, and she is designed to get smarter over time. Senior Delta Smotherman finds the undertaking of recreating emotions to be impossible for artificial intelligence.

“I have very mixed feelings about AI. While it can be super cool to use, it can be very harmful to people. It is taking away so many opportunities from people without hesitation. There are fewer jobs and an increasing population. It is not a good combination. While they can take valuable positions away from us, it is important to realize that AI cannot fully replicate a human because they do not have actual emotions. They can maybe be programmed to simulate it, but it would not be real. It would not have a soul,” said Smotherman.

Junior RaShawn Steward holds the opinion that AI will grow into a harmful force in the coming years.

“AI is still coming to fruition. It is helpful for what it does as of right now. It is reasonable to think that there will be many robots in the future, which will affect people’s jobs. Because if we have robots, ‘then what would we need people for?’ I think this will be very damaging to many around the globe. I mean, it is already occurring with artists and writers,” stated Steward.

Recently, AI art and writing generators have been popularized on the internet, enabling users to create original images or texts from simple descriptions. Websites like JasperArt, NightCafe, and even Google’s own Deep Dream Generator pull from existing sources to make digitized art. Junior Audrey Probst, who has been creating artwork for most of her life, criticizes the software for its negative impact on artists.

“AI art takes away the actual creativity that stems from someone’s head. It is almost like taking away other people’s art. Art is so much more than computer generated drawings, and I can imagine how this is affecting individual artists with already limited commissions. It is sucking up the economy. The way AI is going is a little creepy. I have seen so many sci-fi movies on this topic, and we are doing it,” said Probst.

Plagiarism has been a vital topic in the conversation around AI art and writing, so much so that artists and teachers alike have staged mass protests against AI-generated work. This is because the AIs are trained on human-made work, and with the addition of AI-made essays, plagiarism becomes more accessible throughout education. Gartman adds his own input on the future of AI art and how it should function.

“The way AI is going could be completely fine. The issue is if it is too good and also how it works. If it does that by viewing other people’s artwork and turning it into its own, then it is essentially stealing their hard work and efforts. However, if they can refine the program enough to the point where it ultimately generates its own art instead of taking others’ artistic efforts, then it is more acceptable. It still would put a lot of artists out of business, so it is definitely an issue to be addressed. At the very most, AI art should be illegal for companies to use,” stated Gartman.

Voice-recognition technology goes almost hand-in-hand with AI, as it is built to encompass both comprehension and responsive communication. Based on one’s words, the AI can generate an appropriate response to match. According to Marc Rotenberg, founder of the nonprofit Center for AI and Digital Policy, voice-activated devices like Alexa and Siri are a ‘ticking time bomb with a collection of voice recordings.’ This idea has stoked some fear about AI, as leaked data could lead to near-disaster. Junior Jonothan Renert understands people’s fears but speaks with acceptance of life alongside AI.

“I do not think there is much to be scared about regarding AI. It could go wrong, but I do not think it will go as bad as people believe it will. It is not going to take over the world. That is pure fiction. There will be safety measures within the more advanced AIs that will come in the future. There are also clear-cut security measures protecting our data. We will be using it a lot, so we have to get comfortable with its existence. We already use Siri and Alexa on a daily basis, which are some of the clearest examples of AI found today,” said Renert.

AI is still considered newly developed. The limits of today’s technology have perhaps not yet been reached. New possibilities for the use of artificial intelligence are still being discovered, especially with automation. The effects of this technology may bring both pros and cons. AP Euro teacher Rena Long speaks of the unknown future of AI with both concern and hope.

“When the internet was a brand new concept, nobody knew what would come of it. Nobody could ask ‘when should we stop’ because we did not know what it would turn into. The same could be said for AI because this is only the beginning. Since its rise in popularity, I myself have found various uses for it, like researching for my new class, AP Comparative Government, or going about small tasks that build up. I even use it for finding the right wording when I have trouble writing an email. I doubt it will become all-supreme; at the end of the day, humans will be the ones controlling it. Yet I do fear that people will stop being authentic because of it, that it will take away people’s voice and increase laziness. I am worried that people are going to become even less connected. We will just have to wait and see,” stated Long.