Advocates, skeptics of rapidly advancing technology eye regulations, revisions 

This image, “Renaissance Painting of Politics,” was created with the assistance of DALL-E

By Dr. Dakota Thomas and Caroline Wills 

Computing changed the world. Twenty years ago, many jobs did not require the use of computers. Now, most require them for key functions, or even to exist. Computing itself is still changing rapidly: the advent of machine learning and artificial intelligence may change humanity’s relationship with technology forever and is expected to have important consequences for the economy and society of the near future.

Until recently, computing relied on providing computers with instructions. In traditional programming, a human writes a program and the computer executes it exactly as written. Machine learning, a form of computer programming and a specific approach to achieving AI, moves beyond human direction: instead of following hand-written rules, computers are taught to derive their own instructions from data. Currently, little government oversight exists for this emerging technology.

In machine learning, a computer learns what to do without explicit instruction from a human. Artificial intelligence, or AI, is the broader concept: the simulation of human intelligence in machines, teaching a computer to think, learn and perform tasks that would normally require human intelligence. Machine learning is one route to AI, using data and statistical techniques to train computers to perform such tasks. With a learning algorithm, a computer builds a model from training data in order to make predictions or decisions it was never explicitly programmed to make. Because it automates complex, time-consuming tasks, a learning algorithm can achieve more in less time than human programming alone.

An algorithm is a sequence of instructions or operations used to solve a problem or perform a calculation, while training data is any form of data that can be used to teach a learning algorithm what to do. 

As a result, machine learning drastically increases productivity and requires less human thought and labor. Consider, for example, spam filters for emails. Although a human programmer could feasibly update programming to block spam, spammers can quickly adapt and bypass filters. By contrast, a computer program is continually trained with data generated by users flagging spam messages and can easily create a comprehensive filter and automatically update its approach as threats become more complex.
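The retraining loop described above can be sketched in a few lines of Python. This is a toy illustration, not a production filter: the flagged messages, the word counts and the `looks_like_spam` scoring rule are all invented for the example, and real spam filters use far more sophisticated statistical models.

```python
from collections import Counter

def train(messages):
    """Count word frequencies in user-flagged spam and non-spam messages.

    `messages` is a list of (text, is_spam) pairs, a stand-in for the
    flag data real mail providers collect from their users.
    """
    spam_words, ham_words = Counter(), Counter()
    for text, is_spam in messages:
        (spam_words if is_spam else ham_words).update(text.lower().split())
    return spam_words, ham_words

def looks_like_spam(text, spam_words, ham_words):
    """Label a message spam if its words appear more often in flagged spam."""
    spam_score = sum(spam_words[w] for w in text.lower().split())
    ham_score = sum(ham_words[w] for w in text.lower().split())
    return spam_score > ham_score

# hypothetical user flags: (message text, flagged as spam?)
flags = [("win free money now", True),
         ("free prize click now", True),
         ("lunch meeting moved to noon", False),
         ("see you at the meeting", False)]

spam_words, ham_words = train(flags)
print(looks_like_spam("claim your free money", spam_words, ham_words))  # True
```

Each time users flag new messages, the counts are simply rebuilt, which is how the filter updates its approach without a programmer rewriting any rules.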

Machine learning and AI are advancing so rapidly that language and art models can now mimic human conversation and human-made art. ChatGPT, a language-model chatbot, was designed by OpenAI to perform writing tasks as a human would and to respond to questions and commands. When it debuted last year, people were unsettled by how much it could do and how well it mimicked human writing. In fact, the chatbot has passed graduate-level exams in multiple fields, including the U.S. medical licensing exam and exams in several law classes.

Writing a program like the one behind ChatGPT’s responses would be extremely complicated and time-intensive for a human programmer. But because ChatGPT has been taught how to respond to requests, it answers in seconds and can handle many other kinds of requests as well.

Q&A with ChatGPT

“Can you provide an example of AI for a website?”

ChatGPT’s Response:
There are many ways that artificial intelligence can be used on a website to improve the user experience and make it more interactive and engaging. Here are a few examples of how AI can be used on a website:

  • Chatbots – Chatbots are computer programs that use natural language processing to simulate a conversation with a human. They can be used on a website to provide customer support, answer frequently asked questions, or guide users through the purchasing process.
  • Personalization – AI can be used to personalize the content and recommendations shown to a user on a website based on their past interactions and behaviors. This can help to create a more personalized and relevant experience for the user.
  • Image Recognition – AI can be used to analyze and classify images on a website, for example to tag photos with relevant keywords or to provide alternative text for visually impaired users.

These are just a few examples of how AI can be used on a website. There are many other potential applications of AI in this context, and the use of AI in web development is an active area of research and development.

As with any new technology, the advancement of AI and machine learning tools has its shortcomings. Among them are AIgiarism, plagiarism enabled by AI programs, along with algorithms biased by flawed training data, potential job displacement as a result of automation, and cultural shifts in how creativity and art are viewed.

The concept of machine “learning” can be misleading. ChatGPT and other AI systems do not actually know anything in the ordinary sense of the word, at least not yet. Instead, these tools have had vast amounts of information fed to them by humans. For ChatGPT, that data comes from online text that is repackaged and redistributed.

Since the authors of that online text likely did not agree to having it used to train ChatGPT, and the chatbot does not cite its sources, it is difficult to know where the information ChatGPT generates comes from. That raises questions about the information’s accuracy and about whether the chatbot is plagiarizing others. According to Slate, there are published examples of chatbots copying other journalists’ work for use in another publication’s article.

Setting aside the ethics of training data, AIgiarism has already created concerns in the educational system, where teachers and professors recognize that students can use AI like ChatGPT to do their homework. A plagiarism checker is in development specifically designed to determine whether a paper was written by ChatGPT rather than by a student. Educational institutions nationwide are creating academic task forces and regulations to address the emerging issues of AIgiarism. To their credit, AI developers are addressing the issue with a new tool.

Scott Aaronson, a researcher at OpenAI, revealed on his blog that his primary project had been “statistically watermarking the outputs of a text model like GPT,” making it harder to pass off ChatGPT’s output as if it came from a human.

“Whenever GPT generates some long text, we want there to be an otherwise unnoticeable secret signal in its choices of words, which you can use to prove later that, yes, this came from GPT,” Aaronson said. “This could be helpful for preventing academic plagiarism, obviously, but also, for example, mass generation of propaganda.”

The problem of plagiarism is not unique to language AI; similar issues with AI-generated art have begun causing controversy. Last year, the winner of the Colorado State Fair’s annual art competition used AI to create the winning image, sparking a heated discussion about AI’s impact on human creativity, copyright and artistic expression. As with language AI, human-made art is used to train art AI, and the creators of that art are typically not asked for permission or compensated for their work. AI can already convincingly mimic human creativity and expression and is drastically changing creative processes, but it remains unclear whether it can replace human imagination and innovation, or artists altogether.

Other professions may be even more vulnerable to displacement by AI automation, which could contribute to higher unemployment and other social and economic consequences. According to the World Economic Forum’s 2020 Future of Jobs Report, AI could displace up to 85 million jobs globally, many of which are expected to be blue-collar positions. The report also projected that as many as 97 million jobs could be created; however, those jobs will likely be white-collar roles such as programmers. By 2025, the report speculates, employers will divide work roughly equally between humans and machines; roles requiring human skills will rise in demand; and information and data processing, administrative tasks and routine manual jobs will be handled primarily by machines.

Since AI relies on human-generated data, it can exhibit the same biases as humans. This phenomenon, commonly called algorithmic bias, is increasingly creeping into areas of everyday life. Beginning in 2014, Amazon tried using AI to build a resume-screening tool to make hiring more efficient. The algorithm was trained on resumes the company had collected, many of which came from men; because of that biased input, the system learned to discriminate against female applicants.

The field of AI is advancing so quickly that some researchers estimate an AI with superhuman intelligence will be created this century. Despite the rapid pace of technological advancement and innovation surrounding machine learning and AI, few policymakers are well versed on the subject. One notable exception is U.S. Rep. Don Beyer, who, at age 72, enrolled in an AI undergraduate program at George Mason University.

In January, California Rep. Ted Lieu introduced a congressional resolution written by AI, directing the U.S. House of Representatives to delve into the challenges of artificial intelligence and the need for increased regulation. With it, Lieu became the first member of Congress to introduce AI-authored material. In an op-ed published by The New York Times ahead of the resolution’s release, Lieu wrote: “We can harness and regulate A.I. to create a more utopian society or risk having an unchecked, unregulated A.I. push us toward a more dystopian future.”

At a state level, Massachusetts Sen. Barry Finegold used ChatGPT to draft data privacy legislation. Finegold and his office gave ChatGPT broad prompts, leading it to create a nearly complete document. Finegold filed the bill to show the benefits of AI, while also highlighting the need for guardrails to protect public safety.

While billions are spent every year advancing AI capability, only $50 million was spent in 2020 on reducing its potential risks. Added oversight and regulation could ensure that AI assists humans moving forward in creating a sustainable and inclusive future that avoids the potential risks of AI proliferation. By investing in applied AI research, promoting transparency and accountability in AI, developing comprehensive ethical guidelines, and proposing regulations and restrictions on AI, policymakers can work to smooth the transition to a world with more AI in it.

Machine Learning Techniques

Beyond the general styles of machine learning, there are also specific techniques used. These techniques include neural networks, decision trees and regression. 

Neural networks seek to replicate the neural connections of the human brain. A model creates artificial neurons that communicate with one another via signals. Under certain conditions those signals become stronger or weaker, and the overall network learns to handle new information. For example, you can feed an artificial neural network images labeled “cat” and “not cat.” The network will try to learn how to classify images as cats, and can then attempt to identify cats in new images it has not seen before.
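The strengthening and weakening of connections can be illustrated with a single artificial neuron. The sketch below is a classic perceptron learning the logical AND function, a far simpler stand-in for the cat/not-cat image task; the learning rate and number of passes are arbitrary choices for the example.

```python
# A single artificial neuron (perceptron) learning the logical AND function.
# Weights play the role of connection strengths: they are nudged up or down
# whenever the neuron's output disagrees with the labeled example.

def step(x):
    return 1 if x > 0 else 0

# training data: inputs and the desired output
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                      # repeated passes over the data
    for (a, b), target in examples:
        out = step(weights[0] * a + weights[1] * b + bias)
        error = target - out             # zero when the neuron is right
        weights[0] += rate * error * a   # strengthen or weaken connections
        weights[1] += rate * error * b
        bias += rate * error

print([step(weights[0]*a + weights[1]*b + bias) for (a, b), _ in examples])
# [0, 0, 0, 1]
```

Real networks stack thousands or millions of such neurons in layers, but each one adjusts its connection strengths in essentially this way.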

Decision trees are another technique for supervised learning. Using several input variables, a decision tree asks a sequence of questions about the most relevant features in order to predict an outcome. This approach is one of the simpler types of machine learning, and its decisions are comparatively easy to understand.
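The result of training a decision tree is essentially a cascade of questions about the input variables. The hypothetical weather example below shows what such a tree looks like once written out; in practice, a learning algorithm chooses the questions and thresholds from labeled data rather than a programmer.

```python
# A hand-written decision tree for a hypothetical weather data set.
# A real tree-learning algorithm would pick these splits automatically,
# but the finished model is just a cascade of questions like this.

def will_play_outside(outlook, humidity, windy):
    """Predict a yes/no outcome from three input variables."""
    if outlook == "sunny":
        return humidity < 75          # sunny days: humidity decides
    elif outlook == "overcast":
        return True                   # overcast days: always play
    else:                             # rainy days: wind decides
        return not windy

print(will_play_outside("sunny", 60, False))   # True
print(will_play_outside("rain", 80, True))     # False
```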

Another form of supervised learning is regression: the use of statistical modeling to estimate relationships between one or more output variables and a set of input variables. Regression can be used for predictions and forecasts, as well as for inferring relationships between variables. For instance, if the economy improves by X%, inflation will likely change by Y%.
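The economy-and-inflation example can be made concrete with ordinary least squares, the simplest regression technique. The growth and inflation figures below are invented toy data, so the fitted line is purely illustrative.

```python
# Ordinary least-squares fit of a line y = a + b*x on made-up toy data:
# x is economic growth (%) and y is inflation (%). Real forecasting would
# use far more data and careful model checking.

xs = [1.0, 2.0, 3.0, 4.0, 5.0]        # growth (%)
ys = [1.1, 1.9, 3.2, 3.8, 5.1]        # inflation (%)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# slope = covariance(x, y) / variance(x); intercept pins the line at the means
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
a = mean_y - b * mean_x

def predict(x):
    return a + b * x

print(round(predict(6.0), 2))          # forecast for 6% growth
```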

Genetic algorithms use a simulated evolutionary cycle to solve problems. A genetic algorithm first creates a population of potential solutions, then checks how well each solution works and assigns it a score. The best solutions are then reproduced, combined and/or mutated to create new, hopefully better solutions. This process repeats a set number of times, or until a sufficiently good solution is reached. Genetic algorithms can be considered supervised or unsupervised learning, depending on the specific application.
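That evolutionary cycle can be sketched on a toy problem known as OneMax: evolve a 20-bit string toward all ones, where a solution’s score is simply its count of ones. Population size, mutation rate and the number of generations here are arbitrary choices for the example.

```python
import random

random.seed(0)

def fitness(bits):
    return sum(bits)                        # score: how many ones

def crossover(a, b):
    cut = random.randrange(1, len(a))       # combine two parent solutions
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [bit ^ 1 if random.random() < rate else bit for bit in bits]

# initial population of random candidate solutions
population = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]

for generation in range(40):
    population.sort(key=fitness, reverse=True)   # score every solution
    parents = population[:10]                    # keep the best performers
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(20)
    ]

best = max(population, key=fitness)
print(fitness(best))   # close to the maximum of 20
```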

Styles of Machine Learning

The field of machine learning is a new one, and scholars debate how best to think about it. Generally speaking, there are three styles of machine learning: supervised, unsupervised and reinforcement. 

During supervised learning, a computer is given example inputs paired with desired outputs. The computer’s goal is to learn rules that map specific inputs to specific outputs. For example, an individual might use a data set of labeled handwritten characters to train a computer to read handwriting correctly.
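A minimal supervised learner is the nearest-neighbor classifier: it memorizes labeled examples and labels a new input by whichever training example is closest. The 2-D feature vectors below are invented stand-ins for something like pixel statistics of handwritten characters.

```python
# Supervised learning in miniature: 1-nearest-neighbor classification.
# Each training example pairs an input (a hypothetical 2-D feature vector)
# with a desired output label.

training = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
            ((4.0, 4.2), "b"), ((4.1, 3.9), "b")]

def classify(point):
    """Return the label of the closest training example."""
    def dist(example):
        (x, y), _ = example
        return (x - point[0]) ** 2 + (y - point[1]) ** 2
    return min(training, key=dist)[1]

print(classify((1.1, 0.9)))   # "a"
print(classify((3.8, 4.0)))   # "b"
```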

Unsupervised learning occurs when no labels are given to the inputs, so the computer must find structure in the inputs on its own. This is sometimes referred to as data mining: looking for previously unknown patterns in data. It takes place, for example, when an online retailer predicts future purchases and makes relevant recommendations based on patterns in customers’ past behavior.
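A standard unsupervised technique is k-means clustering. The sketch below groups six unlabeled 2-D points into two clusters; the points could stand in for customers described by, say, visit count and spending, and the initial center guesses are arbitrary.

```python
# Unsupervised learning sketch: k-means clustering with k = 2. No labels
# are given; the algorithm discovers the two groups on its own.

points = [(1.0, 1.0), (1.5, 2.0), (1.2, 0.8),
          (8.0, 8.0), (8.5, 7.5), (7.8, 8.2)]

centers = [points[0], points[3]]              # crude initial guesses

for _ in range(10):
    clusters = [[], []]
    for p in points:                          # assign each point to its
        d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
        clusters[d.index(min(d))].append(p)   # nearest center
    centers = [                               # move each center to the
        (sum(x for x, _ in c) / len(c),       # mean of its cluster
         sum(y for _, y in c) / len(c))
        for c in clusters
    ]

print(centers)   # one center near each natural group of points
```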

The third machine learning style, reinforcement learning, involves a computer interacting with a dynamic environment while attempting to accomplish certain tasks. Throughout this process, the computer receives feedback in the form of rewards and keeps trying to maximize the reward it receives. This style is used in self-driving cars programmed to avoid obstacles: the car is rewarded for successfully navigating without errors, improving its approach to navigation over time.
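That reward loop can be sketched with tabular Q-learning, one of the simplest reinforcement learning algorithms. The one-dimensional “road” below is a drastic simplification of a driving task: the agent is rewarded only for reaching the rightmost position and gradually learns to prefer moving right. All the constants are arbitrary choices for the example.

```python
import random

random.seed(1)

# Tabular Q-learning on a 5-position road. The agent starts at position 0
# and is rewarded only when it reaches position 4.

n_states, actions = 5, (-1, +1)            # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2      # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        if random.random() < epsilon:      # explore occasionally
            a = random.choice(actions)
        else:                              # otherwise exploit best known action
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s2 == n_states - 1 else 0.0
        # nudge the estimate toward reward + discounted future value
        Q[(s, a)] += alpha * (reward + gamma * max(Q[(s2, b)] for b in actions)
                              - Q[(s, a)])
        s = s2

# after training, the greedy policy moves right (+1) from every state
print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
```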
