Guest Post

Why Should the International Agency for AI Be Established in Vienna, Marianna Bonechi?

Marianna Bonechi. © M. Bonechi

Marianna Bonechi has lived in Silicon Valley for seven years and runs Avy Ventures in Menlo Park, a company focused on business consulting and pre-seed and seed investment. She worked for i5growth in the Bay Area and is an advisor to Open Austria in San Francisco. Marianna studied in France and in Eisenstadt, Austria, and today advocates the idea of an International Agency for AI. In this guest post, she explains why Vienna should be the place where this "IAAI" is established.

Big Tech + Pentagon = Love

Over the last few years, Silicon Valley has been obsessed with AI. The discussion around AI regulation has evolved from outright rejection to top tech executives becoming strong proponents of regulation. Internet companies bidding for military contracts have substantially increased the urgency of the matter. As Silicon Valley firms' revenues from their usual sources plateau, the market requires them to seek new sources of income. Selling cloud and AI solutions to the Pentagon allows them to tap into additional revenue. Microsoft and Amazon went head-to-head on a USD 10bn Pentagon cloud infrastructure deal that the former won. Google, under pressure from its employees, backed off from Project Maven and did not participate in the recent cloud infrastructure bidding.

The Mountain View, CA company, however, is eager to do more business with the US Department of Defense's Joint AI Center, provided such cooperation is in line with the company's AI principles. As the gap between big tech and the military narrows, some argue that AI technology is no different from any other technology. I disagree. Unlike other war technology, AI is not developed in the brain of a physicist in New Mexico; it is built with and through data collected from hundreds of millions of people around the world.

Would a family sharing pictures with friends be comfortable knowing those same pictures were used to decide whom a drone should assassinate? Will the user be notified and asked for consent? "Hey, we would like to use your Thanksgiving family photo to train our systems to kill people." How can it be ensured that data was properly collected, managed and stored? Who checks whether data sets were obtained from minors?

While this powerful technology promises to speed up drug discovery, cure diseases and revolutionize every industrial sector, it also jeopardizes basic democratic values around privacy, and there is so far no regulatory framework for its unintended negative consequences. Who will bear the responsibility for an AI solution that was trained on users' data and used for the drone targeting of a terrorist, but also killed civilians? Beyond a set of principles and guidance published by big tech, as of today there is no global regulatory framework that addresses these issues.

AI and new modes of warfare

Canada and numerous European countries are debating and publishing national AI strategies, while China has become a world leader in AI, Russia has deployed unmanned ground vehicles in Syria, and the US has dramatically increased military spending on AI. In 2020 alone, the Defense Department budgeted USD 4bn for AI and ML research activities. Over USD 200 million will be allocated to the Joint AI Center, while Silicon Valley veteran Eric Schmidt chairs the Pentagon's Defense Innovation Board and the National Security Commission on AI.

The race for military aerial supremacy based on unmanned aerial vehicles (UAVs) is on. The United States has the largest fleet with 10,000 UAVs; Russia has more than 2,100. China already leads the drone market and has ambitious plans for the purchase and development of UAVs.

The US military spends USD 12bn per year on the development of autonomous technologies, suggesting a future in which robots will be given the authority to pull the trigger.

Numerous countries have signed a UN charter on banning autonomous weapons; the US, China, and Russia have not. In peace-loving democratic countries we can expect boundaries on the use of AI in combat, but what about authoritarian regimes?

Lack of Global Cooperation

The challenge posed by AI is global, yet the approach to regulating it is limited and quite fragmented. Unlike nuclear power, which is run by governments, AI is in the hands of multinational companies. Even the industry's best attempts to self-regulate have shortcomings. Google has recently published Cloud Model Cards to provide documentation on the machine learning models it makes available to the public. While this is a great first step, much more has to be done towards shaping global standards for trustworthy AI. The model cards document an ML model and explain its limitations, but they do not prevent its misuse. Compare this to driving a car. The car manufacturer sells you a car but does not have to certify whether you are able to drive; a third-party governmental agency has the sole authority to certify that. One can imagine a similar process for ML models, where a third party is given the power to certify whether a model is being used appropriately.
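To make this certification idea concrete, here is a minimal, hypothetical sketch in Python. The ModelCard fields and the certify_use check are illustrative assumptions, not Google's actual Model Card schema or any existing certification API; they merely show how documented limitations and intended uses could feed an external approval step.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ModelCard:
    """Illustrative model documentation (hypothetical, not Google's actual schema)."""
    name: str
    intended_uses: List[str]
    limitations: List[str]
    prohibited_uses: List[str] = field(default_factory=list)

def certify_use(card: ModelCard, proposed_use: str) -> bool:
    """Hypothetical third-party check: is the proposed use within the documented scope?"""
    if proposed_use in card.prohibited_uses:
        return False
    return proposed_use in card.intended_uses

# Example: a face-detection model documented for photo organisation only.
card = ModelCard(
    name="face-detector-v1",
    intended_uses=["photo organisation", "camera autofocus"],
    limitations=["reduced accuracy in low light", "not evaluated on minors"],
    prohibited_uses=["targeting of individuals", "mass surveillance"],
)

print(certify_use(card, "photo organisation"))        # True
print(certify_use(card, "targeting of individuals"))  # False
```

The sketch also shows the limit of documentation on its own: a model card can only describe intended and prohibited uses, while enforcing them requires a certifier with real authority and access, which is exactly the gap a third-party body would fill.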

Elon Musk, Sam Altman and Yuval Harari were among the first to call for a global regulatory framework. Most recently, Alphabet and Google CEO Sundar Pichai called for AI to be regulated. The vacuum is being filled by numerous non-profit and research organizations trying to come up with rules and regulations; however, most of them tend to involve only players with shared Western values. As China is expected to be the world leader in AI, it has to be an integral part of this global dialogue. Russia, with its focus on military AI, is also left out of the global discussion.

The idea of implementing the UN Human Rights framework would be a good starting point for bringing every nation to the table; it is, however, not enough, given the veto powers of permanent Security Council members China, Russia, and the US.

Where is Europe in the AI revolution? This was a question I asked Kai-Fu Lee after his talk at Stanford in 2019. The answer was short: "nowhere". While Europe has great scientists and breakthroughs, they are rare and scattered across niche uses. Europeans are not investing in and pushing the AI space the way the other major powers do. AI needs vast amounts of computing power, data and software developers. Something radical is needed for top PhDs to stop working for Silicon Valley. Europe is missing out on the new wave of wealth creation.

What would be an Austrian edge in global AI leadership?

Vienna + Diplomacy 2.0

My strong belief is that Vienna, as a historic center of diplomacy for the last two hundred years, and Austria, as an EU country that is nevertheless politically neutral, are uniquely positioned to take a leadership role in shaping the global AI regulatory framework. It is time for Vienna diplomacy 2.0. Austria should proactively seek a leadership role as a mediator of the global AI dialogue by establishing the International Agency for AI in Vienna and inviting global tech corporations, researchers, and governments to join.

It is time to move from discussing moral principles to establishing an actionable regulatory framework that is continuously improved and adjusted over time. The agency should encourage the development of trustworthy AI, set global AI standards, promote the implementation of those standards, and provide safeguards against the misuse of AI technology.

The first step would be to invite world leaders to a scientific conference in Vienna, centered around a specific AI-related ethical issue chosen from topics ranging from autonomous weapons to privacy, third-party data sharing, and surveillance.
