Is Microsoft moving too quickly to capitalize on artificial intelligence at the expense of long-term shareholder value? That was a key question at the company’s shareholder meeting Thursday morning.
Krist Novoselic, the musician and author who co-founded the iconic band Nirvana, served as the spokesperson for a shareholder initiative asking the company to carefully study and report on the impact of its AI initiatives.
In a video played during the virtual meeting, Novoselic asserted that Microsoft has rushed its generative AI products to market, saying that the company should do more to consider and address AI’s risks.
He cited not only the potential impact on society and the world but also the long-term legal, regulatory, and reputational risks for the company.
“When Microsoft released its generative-AI-powered Bing last February, numerous AI experts and investors expressed concern. Many urged Microsoft to pause and consider all the risks associated with this new technology so that the company could establish risk mitigation practices,” Novoselic said. “Yet our company raced forward, releasing this nascent technology without the appropriate guardrails.”
Describing himself as a long-term Microsoft shareholder, he added, “Generative AI is a game-changer, there’s no question, but the rush to market seemingly prioritizes short-term profits over long-term success.”
Addressing the issue in a Q&A session at the meeting, Microsoft President Brad Smith discussed the company’s focus on AI safety and responsible development. He cited Microsoft’s six principles for AI development, which are implemented through employee training, software development practices, and pre-release reviews.
“We basically have developed an entire AI safety architecture that goes with each of our applications,” Smith said. “At the same time, we recognize that in most markets, the public wants to know that it’s relying not only on companies that are responsible, but a legal framework that, frankly, ensures that every business adheres to some common standards.”
The proposal called on Microsoft to study the risks posed by misinformation and disinformation created and disseminated through artificial intelligence, and to issue a report on its plans to mitigate the risks.
Microsoft shareholders voted against the proposal and the eight other outside shareholder resolutions submitted for consideration at the meeting. The company said it will release detailed results within four days.
The proposal was led by Arjuna Capital, which previously backed a successful shareholder proposal that led Microsoft to commission and release a report on its sexual harassment and gender discrimination policies and practices.
Here is the full text of the proposal, followed by Microsoft’s response.
Proposal 13: Report on AI Misinformation and Disinformation (Shareholder Proposal)
Arjuna Capital and a co-filer have advised us that they intend to submit the following proposal for consideration at the Annual Meeting.
Report on Misinformation and Disinformation
Whereas, There is widespread concern that generative Artificial Intelligence (AI) – as exemplified by Microsoft’s ChatGPT-powered Bing – may dramatically increase misinformation and disinformation globally, posing serious threats to democracy and democratic principles.
“I’m particularly worried that these models could be used for large-scale disinformation,” said Sam Altman, CEO of OpenAI, the Microsoft-backed company that developed ChatGPT.[1]
Microsoft has reportedly invested more than $13 billion in OpenAI and has integrated ChatGPT into its Bing search engine and other products. ChatGPT is reportedly the fastest-growing consumer application in history.[2]
The Washington Post said ChatGPT users “have posted numerous examples of the tools fumbling basic factual questions or even fabricating falsehoods, complete with realistic details and fake citations.”[3] The Guardian reported that “ChatGPT is making up fake Guardian articles.”[4] Microsoft itself states: “Bing will sometimes misrepresent the information it finds, and you may see responses that sound convincing but are incomplete, inaccurate, or inappropriate.”[5] Tests by NewsGuard found ChatGPT technology could be the most powerful tool in widely spreading misinformation.[6]
Generative AI’s disinformation may pose serious risks to democracy by manipulating public opinion, undermining institutional trust, and swaying elections. In January, Eurasia Group ranked generative AI as the third-highest political risk confronting the world, warning new technologies “will be a gift to autocrats bent on undermining democracy abroad and stifling dissent at home.”[7] 2024 will be a critical year for elections, with a United States presidential election and significant Senate and House races.[8] Presidential elections will also be held in Russia and Ukraine.[9]
Shareholders are concerned that ChatGPT presents Microsoft with significant legal, financial, and reputational risk. Many legal experts believe the liability shield provided to technology companies under Section 230 of the Communications Decency Act may not apply to content generated by ChatGPT. Senator Wyden, who wrote the law, says Section 230 “has nothing to do with protecting companies from the consequences of their own actions and products.”[10] Experts are also debating how the principles of defamation law apply to AI-generated falsehoods, a question that opens the company up to substantial litigation risk.[11] ChatGPT is already running afoul of regulators, with ongoing investigations by European and Canadian data protection authorities.[12]
In March, Microsoft eliminated its entire AI ethics and society team. Employees expressed concern that this leaves Microsoft without a dedicated team to ensure its AI principles are closely tied to product design.[13]
Resolved, Shareholders request the Board issue a report, at reasonable cost, omitting proprietary or legally privileged information, to be published within one year of the Annual Meeting and updated annually thereafter, assessing the risks to the Company’s operations and finances, as well as risks to public welfare, presented by the Company’s role in facilitating misinformation and disinformation disseminated or generated via artificial intelligence; what steps, if any, the Company plans to take to remediate those harms; and the effectiveness of such efforts.
Board Recommendation
The Board of Directors recommends a vote AGAINST the proposal for the following reasons:
COMPANY STATEMENT IN OPPOSITION
We believe Microsoft’s multi-faceted program to address the risks of misinformation and disinformation is longstanding and effective. We already engage in several types of public reporting on our efforts, including those required under the European Union’s Code of Practice on Disinformation and the Australian Code of Practice on Disinformation and Misinformation. In a further demonstration of our commitment to transparency, Microsoft committed to the United States Government in July 2023 that it would prepare a new annual transparency report on its AI governance practices, which will cover our approach to mitigating the risk of AI-generated misinformation and disinformation. As a result, the additional report requested by the proponent is unnecessary to inform shareholders of our approach to managing the risks of misinformation and disinformation, including those related to AI.
Our multi-faceted program to address the risks of misinformation and disinformation.
Microsoft has played a leading role in identifying and seeking to neutralize state-sponsored information influence campaigns targeting political parties in the United States, Ukraine, and democracies around the world. Our Democracy Forward Initiative works to protect campaigns from hacking, increase political advertising transparency, defend against disinformation, ensure a healthy information ecosystem, and uphold fair and secure electoral processes in democratic countries. See more at [link].
In July 2022, Microsoft completed the acquisition of Miburo Solutions, a cyber threat analysis and research company specializing in the detection of and response to foreign information influence. This has enabled Microsoft to expand its threat detection and analysis capabilities to shed light on the ways in which foreign actors use information operations in conjunction with cyber-attacks to achieve their objectives.
At the product level, we recognize the power of generative AI and have taken numerous steps to address information integrity risks as part of our foundational commitment to Responsible AI (see further details at [link]). That website hosts product-specific public reports identifying disinformation and misinformation as potential risks and describing our efforts to address them, such as a detailed May 2023 white paper entitled “The New Bing: Our Approach to Responsible AI.” Microsoft has also committed that our generative AI image creation services, Bing Image Creator and Microsoft Designer, will use content provenance technologies to cryptographically attach content credentials to an image or video, recording basic facts that identify the AI system used to create the content and whether it has been altered since creation. We believe such steps are core to maintaining a healthy information ecosystem and mitigating the risks of misinformation and disinformation.
Our existing and upcoming reports on our approach to mitigating the risk of misinformation and disinformation.
To demonstrate our commitment to transparency, we make available several public reports on our efforts to address misinformation and disinformation, such as the Microsoft Digital Defense Report, available at [link]. We also publicly report the steps we have taken to meet our obligations as a signatory to the European Code of Practice on Disinformation. Our most recent report, filed in September 2023, specifically covered disinformation risks related to generative AI and is publicly available at [link]. (We filed similar reports under the Australian Code of Practice on Disinformation and Misinformation, the most recent version of which is available at [link].) In July 2023, Microsoft committed to the United States Government that it would publish an annual transparency report on our AI governance practices that will include key information about our approach to mitigating the risks of misinformation and disinformation in Microsoft’s products and services, through our partnership with OpenAI, and through tools for the broader information ecosystem. Our first annual transparency report is scheduled for release before June 2024.
Our investments in responsible AI staff.
Finally, the references to the restructuring of Microsoft’s Ethics and Society team in this proposal may leave the impression that Microsoft has broadly disinvested in responsible AI, which is not the case. We have invested significantly in responsible AI over the years, with new engineering systems, research-led incubations, and, of course, people. As we publicly shared in a May 2023 blog on our responsible AI progress [link], we now have nearly 350 people working on responsible AI, with just over a third of those (129, to be precise) dedicated to it full time; the remainder have responsible AI responsibilities as a core part of their jobs. Our community members hold positions in policy, engineering, research, sales, and other core functions, touching all aspects of our business. This number has grown since we started our responsible AI efforts in 2017, in line with our growing focus on AI.
Last year, we made two key changes to our responsible AI ecosystem: first, we made critical new investments in the team responsible for our Azure OpenAI Service, which includes cutting-edge technology like GPT-4; and second, we infused some of our user research and design teams with specialist expertise by moving former Ethics & Society team members into those teams. Following those changes, we made the hard decision to wind down the remainder of the Ethics & Society team, which affected seven people. No decision affecting our colleagues is easy, but this one was guided by our experience of the most effective organizational structures for ensuring that our responsible AI practices are adopted across the company.