Microsoft launches its tiniest artificial intelligence model to date

Phi-3 Mini is the smallest model Microsoft has released so far, and it learned from “bedtime story”-style data generated by other large language models

Microsoft has released the latest version of its tiny artificial intelligence model, the Phi-3 Mini, which is the smallest it has launched to date.

This is the first of three small models it intends to launch

The Phi-3 Mini has only 3.8 billion parameters and was trained on a dataset notably smaller than those used for GPT-4 and other large language models (LLMs). The new Phi-3 Mini model is available on Hugging Face, Azure, and Ollama. Two other small models are also on the way.
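
For readers who want to try it, the snippet below is a minimal sketch of how a model published on Hugging Face is typically loaded with the transformers library. The repository id used here, microsoft/Phi-3-mini-4k-instruct, and the generation settings are assumptions for illustration rather than details from Microsoft's announcement, so check the model card on the Hugging Face hub before running it.

```python
# Minimal sketch (not from Microsoft's documentation): loading a small
# Hugging Face model and generating a short completion.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Tokenize a simple prompt and generate up to 80 new tokens.
prompt = "Explain in one sentence why small language models can run on a phone."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=80)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

On Ollama, the rough equivalent is pulling and running the model by its tag with the ollama command-line tool; the exact tag for Phi-3 Mini is likewise an assumption worth verifying in Ollama's model library.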

Microsoft has also stated that it plans to release a Phi-3 Small, with 7 billion parameters, and a Phi-3 Medium, with 14 billion parameters.

Note: the parameter counts used to describe these AI models refer to the number of values a model learns during training, and they serve as a rough measure of how complex the instructions it can handle are.

Microsoft launched the Phi-2 artificial intelligence model in December 2023

That version offered performance comparable to Llama 2 and other considerably larger models.

According to Microsoft, Phi-3 offers even better performance than Phi-2 and can provide responses approaching those of a model 10 times its size.

According to Microsoft Azure AI Platform corporate vice president Eric Boyd, the Phi-3 Mini is just as capable as much larger LLMs such as GPT-3.5, though “just in a smaller form factor.”

Smaller artificial intelligence models have certain advantages over their larger counterparts. For instance, they are typically cheaper to operate, and they offer improved performance on personal devices such as laptops and smartphones.

Microsoft has been assembling a team focused specifically on smaller AI models. Aside from Phi, it has also created Orca-Math, an artificial intelligence model geared toward solving math problems.

Microsoft isn’t the only player focusing on smaller models, though most are aimed at less complex tasks such as coding assistance or document summarization. Google’s Gemma 2B and 7B are basic chatbots that can also assist with language-related tasks. Anthropic’s Claude 3 Haiku can read dense research papers, including visuals such as graphs, and summarize them quickly. Meta’s Llama 3 8B is also usable for coding assistance and as a chatbot.
