In the Faculty Viewpoints series, Tufts Gordon Institute faculty share thoughts on the latest news and trends in leadership, business, technology, and entrepreneurship.
Artificial intelligence is poised to have a significant impact on our society in the coming years. Everything in our daily lives could be affected, especially our professional lives.
‘AI Radically Disrupts Work: Myth or Reality?’ was the subject of a panel discussion at Tufts Gordon Institute’s Second Annual Career & Networking Night in November. It featured Kaiser Fung, Lecturer in Applied Data Science at Tufts Gordon Institute and Founder of Principal Analytics Prep; Partha Ghosh, Professor of Practice at Tufts Gordon Institute; and Matthias Scheutz, Professor of Cognitive Science and Computer Science, Tufts Human-Robot Interaction Laboratory. The panel was moderated by Allison Perkel, Senior Director of Engineering at Carbon Black.
Below are highlights captured during the conversation.
Allison: We’ve all heard that AI will change the future of work. But where are we today? What are the capabilities, and what are the limitations?
Matthias: Let me start by saying that throughout AI’s history from the 1950s on, people have made predictions about it and what it can solve. And they’ve almost always been wrong.
AI is much broader than most people think. These algorithms are very powerful, but they don’t always work the way humans do. For example, we can take a machine learning algorithm and train it to recognize traffic signs at different angles – but if we add a smiley-face sticker to a sign, the algorithm may no longer recognize it.
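The fragility Matthias describes is known in machine learning as an adversarial example: a small, targeted change to an input can flip a model’s prediction even though the input looks essentially unchanged to a human. A toy sketch of the idea, using a made-up linear classifier with hand-picked weights (real traffic-sign recognizers are deep networks, but the same principle applies):

```python
# A pretend already-trained linear "sign recognizer": score = w . x + b,
# outputting 1 ("stop sign") when the score is positive. The weights and
# inputs here are invented purely for illustration.
w = [2.0, -1.5, 1.0]
b = 0.1

def predict(x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return int(score > 0)

x = [0.5, 0.2, 0.1]
print(predict(x))                     # -> 1: correctly recognized

# Fast-gradient-style perturbation: nudge each "pixel" slightly against
# the sign of its weight -- the digital analogue of a smiley-face sticker.
eps = 0.25
sign = lambda v: 1.0 if v > 0 else -1.0
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
print(predict(x_adv))                 # -> 0: a near-identical input, wrong label
```

The perturbation moves each input value by at most 0.25, yet it is aligned exactly with the directions the classifier is most sensitive to, so the prediction flips. A human looking at the perturbed sign would notice nothing important had changed.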
That is something we have to understand. This lack of common sense is the biggest shortcoming of current AI technology. Machines are not aware of what they’re doing – they don’t think. They cannot use knowledge the way humans can.
Allison: Should we buy the hype? What are the myths and realities around how AI is changing our lives?
Kaiser: There are different ways to think about how AI has transformed our lives. An example of something that has transformed my own life is the GPS route optimizer Waze, which uses aspects of AI. That’s a fully realized success story.
There are some things that have had limited success – for example, Google Translate. It is really great at solving a limited problem, if you are someone with very little knowledge of the target language.
Then there are the things that are hyped – a lot of these ideas are aspirational, such as self-driving cars. Like Matthias said, we don’t have machines yet that can learn like humans.
Allison: When we think about the prevalence of AI, what socioeconomic factors should we consider?
Partha: Let’s take a historical view. Think about AI as a way of adjusting the structure of knowledge, which has been happening since the civilizations of the Nile Valley, the Indus Valley, and the Yangtze River. The human desire to restructure and archive knowledge has always been a driving force for innovation.
We should ask ourselves: How will AI help natural intelligence? Consider the brain power that gets opened up by AI – could it help to improve the speed of innovation? How could this innovation help when it comes to healthcare, the divide between rich and poor, the worldwide lack of natural resources, or even climate change?
Another factor to consider, as businesses and industries change: how do we make sure that people who consume AI products don’t get consumed by AI themselves? How can our education system change so that we reduce the divide between people who live in an AI-embedded world and people in an AI-starved world?
Someday we will even have awakened intelligence, where we can create a new kind of social conscience for how to better connect with the planet and with the universe.
Allison: How do you see the future of work being transformed for not just folks in technical fields, but for folks all over the world?
Partha: I feel that AI is significantly more powerful than the steam engine, or the electricity unlocked by Thomas Edison. I like to believe that it will give birth to a series of industries that we cannot even imagine today.
Matthias: That’s true. It’s hard to imagine what a technology we don’t yet have will look like. Cell phones are a good example. When they appeared, nobody predicted that within a few years mobile e-commerce would take off and fundamentally shift how people shop.
I would argue that no inventor has ever known how their technology would be used in the future.
Allison: How should we think about the dangers of AI? How can we deal with the dark side?
Matthias: There is a dystopian picture – that AI will eradicate us. The reality wouldn’t be like the movies. It could simply be that we lose control over an AI system. For example, a city’s power grid could be shut down with no way of turning it back on. There is actually a whole research sub-community looking at how we can turn things off – how do we incentivize AI to listen to us?
We are going to have systems so complicated that we don’t understand them. Machine learning has the greatest potential and the greatest danger for that reason. We want these systems to be aligned with our human values. We want them to do what we want them to do, not what they think they should be doing.
Kaiser: One solution that doesn’t get talked about enough in the technology space is the idea that people developing these technologies should be required to take accountability for their products. A doctor can be sued for harming a patient; if an engineered product harms someone, I think the same conversation has to take place. We could think about legislation, and also economic incentives.
Allison: What last pieces of advice would you give to anyone interested in AI?
Matthias: AI has enormous potential to do good in the world. Notice that AI no longer presents a technological hurdle. Anybody can get their hands on the code. It’s different from biochemistry, and it’s different from nuclear technology, where there is a real hurdle and a person on the street cannot use it.
We want to make sure that regardless of where it is developed, it will be safe and beneficial. We will also have to think about areas where we don’t want AI to operate. Maybe you don’t want it in elder care. Maybe you do. What about lethally autonomous robots – even if militaries around the world would love them?
The key advice I have is to raise awareness. Talk to colleagues in your companies, and talk to the decision-makers. My hope is that by raising awareness, we will have a discourse broad enough that we can figure out where to go in the future.