AI luminary Yoshua Bengio: “The AI power will be concentrated around 2 or 3 countries”
It was exactly one week ago that an open letter by the Future of Life Institute calling for an immediate pause on Artificial Intelligence development put the world in an uproar. Professor Yoshua Bengio was the first to sign, followed by Elon Musk, Steve “The Woz” Wozniak, and thousands of other leading AI experts. We met the AI luminary at the Mila – Quebec Artificial Intelligence Institute in Montreal, which Bengio founded in 1993 and which has since grown into one of the leading AI institutions, home to more than 1,000 researchers specializing in machine learning.
Trending Topics: You were the first to sign the open letter calling for an immediate halt to any kind of training or development of advanced AI systems for a period of six months. Why are you advocating for this moratorium?
Yoshua Bengio: First, I’d like to emphasize that there is not a lot of scientific novelty in the current crop of large language models and systems that can generate and understand images. However, these technologies have reached a huge scale that yields surprisingly good results, to the point where we now have AI systems that essentially pass the Turing test: you can interact with the system and not be sure whether it’s a human or a machine, which is a milestone in AI. These systems are still not as intelligent as us in many ways. They cannot reason the way humans can. They hallucinate all kinds of things. There are lots of scientific problems, but they have reached a level that is already very powerful and can fool people.
And that’s where it gets dangerous for society. These kinds of systems could be designed to influence people in interactions where they think they’re interacting with other humans, or even influence them to vote differently, which endangers democracy. There are already a lot of scary things going on with social media that threaten democracy. But now imagine you can scale this influence in a way that is also personalized. One of my fears is how AI systems could be used for disinformation. The main reason I signed this letter is to raise the alarm that society needs to adapt quickly, to set up the right legislation, and to rethink how our society is organized on a global level, not just inside each country. Many ask me, “What if governments like China and Russia don’t go along with the rest of the world on those regulations?” When we take nuclear power and nuclear weapons as an analogy, we can see that it is a delicate topic. We have to engage in international discussions leading to international treaties so that we can put the right guardrails in place to protect global security and global well-being. In a nutshell, AI can be immensely useful, but at the same time immensely destructive, so we need to slow down in order for society to adapt properly.
Do you think that six months will be enough time to prepare for the changes that AI will bring to our world?
Of course not, but it will get things rolling. I don’t even think six months is enough to come up with new legislation, especially considering that the EU has been working on AI legislation for four years. Hopefully it will be faster in other jurisdictions, but right now there are only five or six companies that have the power, the capital, and the talent to handle such advanced technologies. Initially, it’s enough if these companies agree on a common pause in order to make space for governments to step in, so that every player in the industry abides by the same rules. This way we can avoid companies exploiting AI to gain a competitive advantage in the market, possibly even harming society. So what’s the solution? The solution is to agree on a set of rules. That’s why we have societies. That’s why we have governments: to create collective norms that are better than the jungle.
Do you think that governments and political stakeholders are ready to play along and agree with this open letter?
No. That’s exactly why it is important to have these discussions and to start working on these questions, legislation, and regulation now. Adaptation is not going to happen in six months. California set a good first example by passing a law stating that any kind of AI-generated content needs to be clearly marked as such. Of course there will be cheaters, but it is a simple first regulation that can be easily implemented. However, we know that governments are very slow when it comes to new regulation, which is why we want to use this open letter to raise awareness of the need for action by both government and society.
Would a moratorium of six months also affect your personal research at the Mila – Quebec Artificial Intelligence Institute?
As an academic center, a pause will not impact us negatively, because we don’t have that kind of compute power. In fact, almost nobody does. Just a handful of companies currently operate at such highly advanced levels and hence are the ones affected by the moratorium. We should not leave that kind of power solely in the hands of the private sector. We need governments to invest in the positive applications of AI, such as in health care or the environment. A system that predicts climate consequences or discovers better therapies is not easily used for disinformation. These are very different kinds of applications that are much less dangerous for society and that, in fact, can save us from future pandemics and help us deal with climate change. This is the kind of AI society should invest in, rather than tools that are going to enhance search engines, advertising, and social media. We do need better health. We do need better education. We do need to fight climate change. But our society doesn’t prioritize these areas at the moment, because that’s not where the profit is.
Let’s talk about the good aspects of deep learning. Can you give examples of how AI influences the health tech sector?
There are already commercial AI products that help detect cancer or that are used to improve the efficiency of health care logistics. Another example I’m involved in is drug discovery. The pharmaceutical industry is now investing billions in AI because it realizes that AI can completely change the game. Right now, it costs about $1 billion and takes ten years to discover a new drug, a process that can be significantly accelerated with AI algorithms. In addition, AI can help address pathogen mutations that cause antimicrobial resistance, which could otherwise lead to the next pandemic.
Don’t you think that an AI revolution of this kind could widen the gap between rich and low-income countries? What long-term risks do you see?
One of the risks of any powerful tool is that, just by itself, it is likely to increase inequality and the concentration of power, leading to monopolies. The AI power will be concentrated around 2 or 3 countries governed by capitalism, which harms democracy. The solution lies in governments spreading the wealth on an international scale. We need to make sure that developing countries and people without technical education also benefit from the AI revolution. In the next decade, some job sectors will become more efficient, requiring a smaller human workforce. This negative effect on the job market can create very strong social disruptions, calling for safety nets.
Speaking of tools, what is your personal opinion on GPT-4? Do you recommend that startups treat these AI models as platforms to build on?
First of all, the science behind a tool like ChatGPT is well understood. What is really needed to replicate what OpenAI has done is a lot of capital and a small team of ten very good people who have a clear understanding of the technology. OpenAI doesn’t necessarily have to become a monopoly, but they certainly have a competitive advantage. It’s similar to what happened with search engines and social media – the incumbents eventually become entrenched. That being said, I think there are a lot of opportunities for startups to use pre-trained models that may be coming from these organizations. Nevertheless, that’s another reason why we need regulation quickly, because if suddenly 10,000 startups instead of only four or five institutions engage in AI research, there will be people willing to bet our well-being on their company’s growth. There are already hundreds of startups built on GPT, and this number is going to continue to grow.
Do you think that all AI models should be open-sourced?
I used to think so, but now it’s not so clear. The trained models should not be open source. The code used to train those models is one thing, but the training itself is very expensive, costing up to $100 million, and as there are no international treaties, you don’t want to make those trained systems available to too many people. I don’t have the answer. These are difficult questions, which is precisely why we need to slow down, so that many scholars – political scientists, economists, philosophers, social scientists, and AI experts – can work together on the question of how to navigate such an AI revolution. Right now it’s happening a bit too fast, and we risk losing control.
Don’t you think it’s too late to slow down?
Just as with climate change, it’s never too late to improve things and to reduce the damage. Even if things seem bad or hopeless, we have a duty to find ways to improve things. And even if it’s not guaranteed to work, we have to try and try again.
As a scientist and AI luminary, do you feel responsible for the current developments? Looking back, are there things you would have done differently?
What I would have done differently is start the social discussion about how we need to prepare for these AI tools even earlier. The first time I got closely involved with the subject was in 2015, 2016, and 2017. In Montreal, we worked on the Montreal Declaration for the Responsible Development of AI, one of the first declarations in this area. What matters most here is collective awareness and setting the social norms to steer the boat in a better direction.
If you look at the future, do you believe in one big AI for the whole world or will every country have its own AI?
One lesson we can draw from political science and history is that concentration of power is not good. You don’t want to have one country, one company, one party, controlling everything. We need coalitions that are robust to that risk, which is why I think that it’d be good if countries got together. There is no need for a single system if power can be decentralized.
What does artificial intelligence teach us about human intelligence?
I’ve been working with neuroscientists and cognitive scientists for decades, and a lot of our research in deep learning was inspired by the human brain. We’ve reached a stage where it’s not just brain science influencing AI; AI is starting to influence our own understanding of how brains work. For example, one of the areas I’m working on is the neuroscience of consciousness, which at first might seem very mysterious. Once we start proposing theories that we can test in computers, reproducing some of the same features as human subjective experience and high-level cognition, it becomes less mysterious and might help us understand what it means to be human. I think there are a few principles, like the laws of physics, that can explain all forms of intelligence, be it human, animal, or artificial. And so these different fields can really help each other figure out what those principles are.
One sure thing is that AI as a tool will change our human intelligence just as the internet or the first iPhone did. This is why we have to be careful when we unleash these innovations into the world.