{"id":1161,"date":"2024-02-20T22:12:03","date_gmt":"2024-02-20T22:12:03","guid":{"rendered":"https:\/\/hello.inherentknowledge.org\/2024\/2024\/02\/20\/openais-sora-text-to-video-tools-impact-will-be-profound\/"},"modified":"2024-02-20T22:12:03","modified_gmt":"2024-02-20T22:12:03","slug":"openais-sora-text-to-video-tools-impact-will-be-profound","status":"publish","type":"post","link":"https:\/\/hello.inherentknowledge.org\/2024\/2024\/02\/20\/openais-sora-text-to-video-tools-impact-will-be-profound\/","title":{"rendered":"OpenAI\u2019s Sora text-to-video tool&#8217;s impact will be \u2018profound\u2019"},"content":{"rendered":"<p>OpenAI last week unveiled a new capability for its generative AI (genAI) platform that can use a text input to generate video \u2014 complete with life-like actors and other moving parts.<\/p>\n<p>The new genAI model, <a href=\"https:\/\/openai.com\/sora\" target=\"_blank\" rel=\"noopener\">called Sora<\/a>, has a text-to-video function that can create complex, realistic moving scenes with multiple characters, specific types of motion, and accurate details of the subject and background\u00a0&#8220;while maintaining visual quality and adherence to the user\u2019s prompt.&#8221;<\/p>\n<p>Sora understands not only what a user asks for in the prompt, but also how those things exist in the physical world.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI last week unveiled a new capability for its generative AI (genAI) platform that can use a text input to generate video \u2014 complete with life-like actors and other moving parts. 
The new genAI model, called Sora, has a text-to-video function that can create complex, realistic moving scenes with multiple characters, specific types of motion, [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1161","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts\/1161","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/comments?post=1161"}],"version-history":[{"count":0,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts\/1161\/revisions"}],"wp:attachment":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/media?parent=1161"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/categories?post=1161"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/tags?post=1161"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}