Microsoft has been warned that failing to comply with a legally binding request for information (RFI) about its generative AI tools could result in a fine of up to 1% of the company’s global annual turnover under the EU’s Digital Services Act (DSA).
In March, the European Union requested information from Microsoft and several other tech titans regarding the systemic risks that generative AI tools could pose. The Commission stated on Friday that Microsoft had yet to furnish certain documents as requested.
The company risks enforcement if it fails to provide the requested data by May 27, the deadline set by the Commission. While the DSA’s headline penalty for breaches of the regulation itself runs to 6% of annual global turnover, failures around information requests carry a standalone fine of up to 1%.
Providing inaccurate, incomplete, or misleading information in response to a formal RFI, or failing to reply at all, triggers that 1% fine. For Microsoft, which disclosed revenue of $211.92 billion for the fiscal year that ended June 30, 2023, this works out to a potential penalty of more than $2 billion.
The Commission oversees larger platforms’ systemic-risk obligations under the DSA, and its warning is backed by potent enforcement options whose financial penalties would cost Microsoft far more than any reputational damage it might suffer for withholding the requested data.
The Commission stated that it is missing information regarding risks associated with the generative AI features of the search engine Bing; in particular, the regulator emphasized the image generation tool “Image Creator by Designer” and the AI assistant “Copilot in Bing.”
The EU stated that any threats the tools might pose to democratic discourse and electoral processes are of particular concern.
Microsoft must supply the missing information by May 27 or face a fine of up to 1% of its annual revenue. If it misses that deadline, the Commission may additionally impose periodic penalty payments of up to 5% of its average daily income or worldwide annual turnover.
In April 2023, the Commission designated Bing a “very large online search engine” (VLOSE) under the DSA, subjecting it to additional obligations to assess and mitigate systemic risks, such as disinformation.
The DSA’s requirement that larger platforms tackle disinformation puts generative AI tools squarely in the spotlight. Major technology companies have been at the vanguard of embedding GenAI in their mainstream products despite well-documented shortcomings, such as the tendency of large language models (LLMs) to fabricate information and present it as fact.
AI-powered image-generation tools have also been shown to produce harmful or racially biased content, including deceptive deepfakes. Meanwhile, the European Union heads to the polls next month, and AI-fueled political disinformation has become a focal point of attention in Brussels.
The Commission stated in a press release, “The information request is based on the suspicion that Bing may have violated the DSA for risks associated with generative AI, including the automated manipulation of services that can mislead voters and the viral dissemination of deepfakes.”
“Under the DSA, designated services such as Bing are obligated to conduct sufficient risk assessments and implement appropriate risk mitigation measures (DSA Articles 34 and 35).” The Commission has identified generative AI as a potential risk in its guidelines on the integrity of electoral processes, particularly with regard to the June elections to the European Parliament.