The latest sensation to make headlines this past month is a new Artificial Intelligence (AI) program, ChatGPT, which can generate text in seconds. For those who don’t know, Entrepreneurs Herald describes the program as “specifically designed for generating human-like text in a conversational environment. It can be used to generate responses to user input in real time, making it an ideal tool for chatbots and virtual assistants.”
When asked the prompt: “How would you describe yourself?” the ChatGPT program quickly replied:
“I am a machine learning model, specifically a type of deep learning model called a transformer, trained on a large dataset of text. I am able to understand and respond to natural language input in a human-like manner. I can be used for a variety of natural language processing tasks such as text generation, language translation and question answering. I graduated from 2021, and my training is based on a neural network architecture.”
York University Philosophy Professor Regina Rini explains that “GPT is basically an interactive repository of patterns in human-authored text. Here’s how it works: it pulls in a huge database of things that real people have written online, and then uses that, plus smart math, to generate new text. It’s basically trying to predict what people would write, starting from whatever prompt you give it.”
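Rini’s description of pattern-matching and prediction can be illustrated with a toy sketch. Real models like ChatGPT use large neural networks trained on billions of examples, but the core idea of predicting a likely next word from patterns in existing text can be shown with simple word-pair counts (a deliberate simplification for illustration, not ChatGPT’s actual method):

```python
from collections import Counter, defaultdict

# Toy illustration: predict the next word by counting which word
# most often follows the current one in a small sample of text.
corpus = "the cat sat on the mat and the cat slept".split()

# For each word, count the words that have followed it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the sample text."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # prints "cat" — it follows "the" most often here
```

A real language model replaces these raw counts with learned statistical weights over enormous amounts of text, which is why its output reads like fluent human writing rather than a lookup table.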
Essentially, users can type in a question or a task. Depending on the task’s length or difficulty, the program usually responds within seconds. You can give the program a variety of prompts, including requests for dinner recipes, answers to historical questions, and cover letters.
As CNET notes, “the AI is trained on large volumes of data from the Internet written by humans, including conversations. But ChatGPT is not connected to the Internet, so it sometimes gives wrong answers and has limited knowledge.”
The CBC reports that the program comes from OpenAI, a San Francisco research and development firm. It was co-founded by Elon Musk, and boasts investors such as Peter Thiel and Microsoft.
Despite the innovation, there is some concern that the program could cause a shift in the workforce — that it could replace jobs, even entire lines of work — especially those in the writing industry, as it becomes more sophisticated.
As the technology advances, companies can use ChatGPT or similar programs for content writing tasks such as articles, blogs, and social media posts.
Entrepreneurs Herald notes that it can be used for “a large amount of written material on a regular basis, such as news organizations, marketing agencies and e-commerce sites.”
“LLMs (Large Language Models) will shrink some industries, but also create new ones, like most technological changes. Think about how the web and Google changed traditional advertising: some old jobs disappeared, but there were also new jobs such as Search Engine Optimization (SEO) or social media influence,” says Professor Rini.
Additionally, ChatGPT’s impact could be in the virtual assistant and search engine industries. According to CNET, a program like ChatGPT can change customer service chatbots, virtual assistants like Siri, and search engines like Google.
Despite fears that this program could eliminate entire lines of work, Professor Rini describes how it could create new jobs, such as LLM speed writing or text database curation. “The big unknown is how many old jobs will go and how many new ones will appear,” says Rini.
At first glance, the ChatGPT program sounds flawless, but it still has some issues.
For now, OpenAI’s website describes ChatGPT as a “free research preview,” and before entering the program, a notice also appears, saying that “the system may sometimes generate incorrect or misleading information and may produce offensive or biased content. It is not meant to give advice.”
In response to the question, “Describe some of your abilities, simply,” the program also says, “Please note that I am not always 100% accurate and my abilities may be limited by my training data and its quality.”
One of the main issues with ChatGPT is that it can respond with incorrect information or factual inaccuracies. According to Professor Rini, “it gets facts wrong all the time because it doesn’t really have any knowledge of the world, just patterns in its source texts (including fiction and misinformation). We need to build the human skill of fact-checking.”
ChatGPT and other LLMs have other limitations. A recent report by Current Digest argues that these programs are not advanced enough to replace human critical thinking and creativity.
Only time will tell what shifts ChatGPT will bring us culturally and within the working world.