{"id":1124,"date":"2023-12-13T17:18:07","date_gmt":"2023-12-13T17:18:07","guid":{"rendered":"https:\/\/hello.inherentknowledge.org\/2024\/2023\/12\/13\/microsoft-unveils-phi-2-the-next-of-its-smaller-more-nimble-genai-models\/"},"modified":"2023-12-13T17:18:07","modified_gmt":"2023-12-13T17:18:07","slug":"microsoft-unveils-phi-2-the-next-of-its-smaller-more-nimble-genai-models","status":"publish","type":"post","link":"https:\/\/hello.inherentknowledge.org\/2024\/2023\/12\/13\/microsoft-unveils-phi-2-the-next-of-its-smaller-more-nimble-genai-models\/","title":{"rendered":"Microsoft unveils Phi-2, the next of its smaller, more nimble genAI models"},"content":{"rendered":"<p>Microsoft has announced <a href=\"https:\/\/www.microsoft.com\/en-us\/research\/blog\/phi-2-the-surprising-power-of-small-language-models\/\" target=\"_blank\" rel=\"noopener\">the next of its suite of smaller, more nimble artificial intelligence (AI) models<\/a> targeted at more specific use cases.<\/p>\n<p>Earlier this month, Microsoft unveiled\u00a0<a href=\"https:\/\/huggingface.co\/microsoft\/phi-1\" target=\"_blank\" rel=\"noopener\">Phi-1<\/a>, the first of what it calls small language models (SLMs); they have far fewer parameters than their large language model (LLM) predecessor. For example, the GPT-3 LLM \u2014 the basis for ChatGPT \u2014 has 175 billion parameters. GPT-4, OpenAI\u2019s latest LLM, has about 1.7 trillion parameters. Phi-1 was followed by <a href=\"https:\/\/huggingface.co\/microsoft\/phi-1_5\" target=\"_blank\" rel=\"noopener\">Phi-1.5<\/a>, which by comparison, has 1.3 billion parameters.<\/p>\n<p class=\"jumpTag\"><a href=\"https:\/\/www.computerworld.com\/article\/3711701\/microsoft-unveils-phi-2-the-next-of-its-smaller-more-nimble-genai-models.html#jump\">To read this article in full, please click here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Microsoft has announced the next of its suite of smaller, more nimble artificial intelligence (AI) models targeted at more specific use cases. Earlier this month, Microsoft unveiled\u00a0Phi-1, the first of what it calls small language models (SLMs); they have far fewer parameters than their large language model (LLM) predecessor. 
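For readers who want to try one of these small models, below is a minimal sketch of loading Phi-1.5 through the Hugging Face transformers library. The model ID comes from the Hugging Face page linked above; the prompt is an arbitrary example, and the trust_remote_code flag reflects how the Phi checkpoints originally shipped, so treat the details as assumptions rather than Microsoft's documented usage.

```python
# Minimal sketch: load Phi-1.5 (1.3B parameters) from Hugging Face and
# generate a short completion. Assumes the transformers and torch
# packages are installed; "microsoft/phi-1_5" is the model card linked
# in the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/phi-1_5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# trust_remote_code=True was needed while the Phi checkpoints shipped
# with custom modeling code; later transformers releases support the
# architecture natively.
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

prompt = "def fibonacci(n):"  # arbitrary example prompt
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The practical draw of an SLM is visible in the arithmetic: at a rough 2 bytes per parameter in fp16, Phi-1.5's 1.3 billion parameters fit in about 2.6 GB of memory, versus roughly 350 GB for GPT-3's 175 billion, which is why models of this size can run on a single consumer GPU.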