American workers whose careers have been upended by automation in recent decades have largely been less educated, especially men working in manufacturing.
But a new kind of automation — artificial intelligence systems called large language models, like ChatGPT and Google’s Bard — is changing that. These tools can rapidly process and synthesize information and generate new content. The jobs most exposed to automation now are office jobs — those that require more cognitive skills, creativity and high levels of education. The affected workers are also likely to be highly paid, and slightly more likely to be women, a variety of research has found.
“This came as a surprise to most people, including me,” said Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered Artificial Intelligence, who had predicted that creativity and technical skills would insulate people from the effects of automation. “To be very honest, we had a hierarchy of things that technology could do, and we felt comfortable saying things like creative work, professional work, emotional intelligence would be hard for machines to ever do. Now that’s all been upended.”
A slew of new studies analyze the tasks of American workers, using the Labor Department’s O*NET database, and estimate which of them large language models could do. The studies have found that the models could significantly help with tasks in a fifth to a quarter of occupations. In a majority of jobs, the models could do some of the tasks, according to the analyses, including from Pew Research Center and Goldman Sachs.
For now, the models still sometimes produce incorrect information, and are more likely to assist workers than replace them, said Pamela Mishkin and Tyna Eloundou, researchers at OpenAI, the company and research lab behind ChatGPT. In a similar study, they analyzed 19,265 tasks done in 923 occupations, and found that large language models could do some of the tasks that 80 percent of American workers do.
Yet they also found reason for some workers to fear that large language models could displace them, in line with what Sam Altman, OpenAI’s chief executive, told The Atlantic last month: “Jobs are definitely going to go away, full stop.”
The researchers asked an advanced model of ChatGPT to analyze the O*NET data and determine which tasks large language models could do. It found that 86 jobs were fully exposed (meaning every task could be assisted by the tool). The human researchers said 15 jobs were. The job that both the humans and the model agreed was most exposed was mathematician.
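The exposure-scoring idea described above can be sketched in a few lines. Everything here is a hypothetical stand-in: the task lists are invented, and a crude keyword heuristic takes the place of the model’s actual per-task judgments, purely to illustrate how a per-occupation exposure score and the “fully exposed” cutoff would be computed.

```python
# Toy sketch of exposure scoring: for each occupation, compute the share
# of its tasks flagged as ones an LLM could assist with. The task lists
# and the keyword heuristic are hypothetical stand-ins for the real
# O*NET data and the model's judgments.

# Words loosely associated with language/analysis work an LLM could help with.
LLM_FRIENDLY = {"write", "summarize", "analyze", "draft", "classify", "answer"}

def task_exposed(task: str) -> bool:
    """Crude stand-in for the model's per-task judgment."""
    return any(word in task.lower() for word in LLM_FRIENDLY)

def exposure(tasks: list[str]) -> float:
    """Fraction of an occupation's tasks flagged as assistable."""
    return sum(task_exposed(t) for t in tasks) / len(tasks)

# Invented example occupations, not real O*NET entries.
occupations = {
    "mathematician": ["Analyze data", "Write proofs", "Draft papers"],
    "dishwasher": ["Wash dishes", "Stack plates", "Clean the kitchen"],
}

scores = {job: exposure(tasks) for job, tasks in occupations.items()}
# An occupation counts as "fully exposed" when every task is flagged.
fully_exposed = [job for job, score in scores.items() if score == 1.0]
```

Under this toy heuristic, the mathematician’s tasks are all flagged (score 1.0, hence “fully exposed”) while the dishwasher’s are not, mirroring the kind of contrast the studies report.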
The analysis found that only 4 percent of jobs had no tasks the technology could help with. Among them were athletes, dishwashers and those who assist carpenters, roofers or painters. Yet even tradespeople could use A.I. for parts of their jobs, like scheduling, customer service and route optimization, said Mike Bidwell, chief executive of Neighborly, a home services company.
While OpenAI has a commercial interest in portraying its technology as a boon to workers, other researchers said there were still uniquely human capabilities that were not (yet) able to be automated — like social skills, teamwork, care work and the skills of tradespeople. “We won’t run out of things for humans to do any time soon,” Mr. Brynjolfsson said. “But the things are different: learning how to ask the right questions, really interacting with people, physical work requiring dexterity.”
For now, large language models are likely to help many workers be more productive in their existing jobs, researchers say, akin to giving office workers, even entry-level ones, a chief of staff or a research assistant (though that may signal trouble for human assistants).
Take writing code: A study of GitHub’s Copilot, an A.I. program that assists programmers by suggesting code and functions, found that those using it were 56 percent faster than those doing the same task without it.
“There’s a misconception that exposure is necessarily a bad thing,” Ms. Mishkin said. After reading descriptions of every occupation in the study, she and her colleagues learned “an important lesson,” she said: “No model could ever do all of this.”
Large language models can help write legislation, for instance, but they can’t pass laws. They can act as therapists — people can share their thoughts, and the models can respond with ideas grounded in proven approaches — but they don’t have human empathy or the ability to read nuanced situations.
The public version of ChatGPT carries risks for workers — it often gets things wrong, can reflect human biases, and isn’t secure enough for businesses to trust it with confidential information. Companies using it get around these hurdles with tools that build on its technology in what’s called a closed domain — meaning they train the model only on certain content and keep any inputs private.
Morgan Stanley uses a version of OpenAI’s model built for its business that has been fed about 100,000 internal documents — more than a million pages. Financial advisers use it to help them quickly find information to answer clients’ questions, like whether to invest in a particular company. (Previously, this meant finding and reading multiple reports.)
Jeff McMillan, who leads data analytics and wealth management at the firm, said the tool frees advisers to spend more time talking with clients. It does not know individual clients, or whether a human touch might be needed — say, if a client is going through a divorce or an illness.
The staffing agency Aquent Talent is using a corporate version of Bard. Usually, humans read the résumés and portfolios of workers to find the right job for them; the tool can do it much more efficiently. Its output still needs human review, though, especially in hiring, because human biases are embedded in it, said Rohshan Bela, Aquent Talent’s president.
Harvey, a start-up funded by OpenAI, sells a tool like this to law firms. Senior partners use it for strategy, like asking for 10 questions to pose in a filing or a summary of how the firm has negotiated similar agreements.
“It’s not, ‘This is the advice I would give a client,’” said Winston Weinberg, a founder of Harvey. “It’s, ‘How do I quickly filter this information so I can reach the advice level?’ You still need the decision maker.”
He said it’s especially useful for paralegals or junior associates. They use it to learn — asking questions like, What is the purpose of this kind of contract, and why is it written this way? — or to produce first drafts, like summarizing a financial statement.
“All of a sudden, they have an assistant,” he said. “People will be able to do higher-level work earlier in their careers.”
Others studying how workplaces are using large language models have found a similar pattern: They help junior employees the most. A study of customer support agents by Professor Brynjolfsson and colleagues found that using A.I. increased productivity 14 percent overall, and 35 percent for the lowest-skilled workers, who moved up the learning curve faster with its help.
Robert Seamans of New York University’s Stern School of Business co-wrote a paper finding that the occupations most exposed to large language models included telemarketers and certain teachers.
The last round of automation, which hit manufacturing jobs, increased income inequality, research has shown, by stripping workers without college educations of high-paying jobs.
Some scholars say large language models could do the opposite — shrinking inequality between the highest-paid workers and everyone else.
“My hope is that this enables people with less formal education to do more things, by lowering the barriers to entry for more elite, higher-paying work,” said David Autor, a labor economist at M.I.T.