Microsoft’s Copilot Starts Reducing Reliance on OpenAI’s Language Models

blog post, where the company showcased its capabilities. MAI-Voice-1 is currently available in Copilot and Copilot Labs, and Microsoft says it can generate up to 60 seconds of audio in under one second using a single GPU. Microsoft claims this "rapid" performance makes it "one of the most effective speech systems on the market today."

MAI-1-preview, by contrast, is aimed at consumer use cases, following instructions and "offering useful answers to daily inquiries." It is currently available only on LMArena, though Microsoft plans to eventually bring it to certain text-based scenarios in Copilot over the coming weeks. For now, the company is focused on improving the model by gathering user feedback.

A long way from supplanting OpenAI