Originally published in Carroll Capital, the print publication of the Carroll School of Management at Boston College.
If you've read the headlines about artificial intelligence, you might believe it will turn us all into horses. Automobiles, of course, changed horses from essential laborers to luxury purchases in just a few years. AI, doomsayers predict, will do something similar to us humans. It'll take our jobs and leave us to fill niche roles.
Professors at Boston College's Carroll School of Management who study AI call predictions like that overblown. Yes, AI will revolutionize the workplace, and, yes, some kinds of jobs will disappear. The McKinsey Global Institute, for example, has estimated that activities accounting for 30 percent of hours currently worked in the United States could be automated by 2030. But Carroll School scholars argue that people who learn to use AI to increase their productivity could end up better off. As they see it, AI-adept folks will be able to work faster and smarter.
"I don't think our real concern right now is about overall job loss," says Sam Ransbotham, a professor of business analytics. "What's going to happen is you're going to lose your job to someone who's better at using AI than you are, not to AI itself."
How do you become an AI ace? It's doable for many people, says Ransbotham, who's also the host of the podcast Me, Myself, and AI. You don't have to become an expert, just the most knowledgeable person in your office.
With curiosity and diligence, most anyone can learn enough to figure out how to apply AI on the job, he says. The way to start is with play. Go online and play around with ChatGPT, OpenAI's chatbot. Try, say, having it write first-draft emails or memos for you. (But fact-check anything you use: ChatGPT and other large language models can sometimes offer up "hallucinations," information that sounds plausible but is false.)
"AI tools are accessible to the masses," Ransbotham says. "That's an interesting change. Most people don't play with Python code." He uses AI to generate the background images for slides in his presentations. "For me, images on slides fall into the good-enough category. I want my computer code to be awesome, but the images I use on slides can just be good enough."
In speaking of "good-enough slides," Ransbotham was alluding to the peril of leaning too heavily on AI: what he calls the "race to mediocrity." "You can use an AI tool to get to mediocre quickly," he explains. ChatGPT, for example, can give a draft of an email or memo in seconds. But its prose will be generic, lacking color and context, because ChatGPT "averages" the prose it finds on the web. Stop there, and you'll end up with average prose.
Another way to tool up on AI is to read and listen. Plenty of established publications, like Wired and Ars Technica, as well as newer ones, like Substack newsletters by Charlie Guo and Tim Lee, cover AI. Ditto for podcasts like Ransbotham's. As you explore, understand that, despite the hype, the technology does still have real limitations, says Sebastian Steffen, an assistant professor of business analytics. "I tell my students that ChatGPT is great for answering dumb questions," he says. "For factual questions, it's quicker than Wikipedia."
But AI can't make judgments, which is often what work entails. Your boss may ask you to help formulate strategy, allocate staff time and resources, or determine whether a worrisome financial indicator is a blip or the beginning of something bad. Facts can inform those decisions, but facts alone won't make them.
Steffen cautions that it may take several decades before we really understand how to use AI and the best ways to incorporate it into our workplace routines. That's typical of big technological rollouts. Even AI's inventors may not see the future as clearly as they claim. "Alfred Nobel invented dynamite to use in mining, but other people wanted to use it for bombs," he says. That troubled Nobel, a Swedish chemist, and was one of the reasons he funded the Nobel Prizes.
Even in an AI world, humans will still likely have plenty to do, says Mei Xue, associate professor of business analytics. "Think about doctors; we still need someone to touch the patient's belly" to get subtle information that sensors miss, she says. Robots can move pallets in warehouses, but they haven't learned bedside manner. Xue says humans will likely continue to fill roles that require "talking to clients, meeting with customers, reading their expressions, and making those personal connections; we can gather subtle impressions that AI can't."
AI can't tell whether the crinkles at the corner of someone's eyes are from a smile or a grimace. So soft skills will still be rewarded. Brushing up on those may pay off.
Even in humdrum workplace communications, like those endless emails and memos, there will likely be a continuing role for us humans, Xue says. "What's unique with us humans is personality, originality, compassion: the emotional elements." ChatGPT can generate jokes, but it can't know your coworkers or clients and what will resonate with them.
Similarly, you can let AI write your cover letters for jobs or pitches to clients. But you might fail to stand out, Xue says. ChatGPT "is searching for what's available on the internet and putting together what's best based on probability," she explains. "For now, it can't provide originality."
Xue adds that one can find the need for a human touch, or voice, in unexpected places. "This weekend I was listening to some books on an app in Chinese. I found they offered two types of audiobooks: one read by a real person and one by an AI voice. I didn't like the AI readings. They sounded fine but had a perfect voice. When you have a real person read, you feel the emotion and uniqueness."
Teachable Moment
The Carroll School gives professors three options for using AI as a tool.
By Lizzie McGinn
With the launch of ChatGPT in fall 2022, many educators feared that AI would completely upend academic integrity, a concern that many Carroll School faculty initially shared. "At first [the reaction was] 'we have to stop this menace,'" says Jerry Potts, a lecturer in the Management and Organization Department. Still, a handful of professors started making a compelling case: AI wasn't going anywhere; instead, the Carroll School would have to rethink how to use it academically. By the following fall, three new policy options had been presented: professors could completely prohibit AI, allow free use with attribution, or adopt a hybrid of the two.
Some faculty members, like Potts, have fully embraced AI as an educational tool. In his graduate-level corporate strategy class, one project tasks students with pitching a business plan for a food truck with only 30 minutes to prepare. Potts has found that while AI often helped with organizing the presentations, it was humans who came up with the most creative ideas overall. Bess Rouse, associate professor of management and organization and a Hillenbrand Family Faculty Fellow, opted for a hybrid AI approach and allows it only for specific class assignments. In one case, she instructed students to use ChatGPT in preparing for peer reviews, which minimized the awkwardness of critiquing other students' work.
"There is less concern that this will be the ruination of teaching," says Ethan Sullivan, senior associate dean of the undergraduate program. "We've instead pivoted to how AI complements learning." For his part, Potts is optimistic. He says that if professors stay on top of this technology and adapt their courses accordingly, "We should be able to take critical thinking to another level."