{"id":1065,"date":"2023-08-21T11:12:25","date_gmt":"2023-08-21T11:12:25","guid":{"rendered":"https:\/\/hello.inherentknowledge.org\/2024\/2023\/08\/21\/why-and-how-to-create-corporate-genai-policies\/"},"modified":"2023-08-21T11:12:25","modified_gmt":"2023-08-21T11:12:25","slug":"why-and-how-to-create-corporate-genai-policies","status":"publish","type":"post","link":"https:\/\/hello.inherentknowledge.org\/2024\/2023\/08\/21\/why-and-how-to-create-corporate-genai-policies\/","title":{"rendered":"Why and how to create corporate genAI policies"},"content":{"rendered":"<p>As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators \u2014 not to mention the potential exposure of sensitive data.<\/p>\n<p>For example, in April, after Samsung\u2019s semiconductor division allowed engineers to use ChatGPT, workers using the platform leaked trade secrets on at least three occasions, according to\u00a0<a href=\"https:\/\/mashable.com\/article\/samsung-chatgpt-leak-details\" target=\"_blank\" rel=\"noopener\">published accounts<\/a>. One employee pasted confidential source code into the chat to check for errors, while another worker shared code with ChatGPT and \u201crequested code optimization.\u201d<\/p>\n<p class=\"jumpTag\"><a href=\"https:\/\/www.computerworld.com\/article\/3705028\/why-and-how-to-create-corporate-generative-ai-policies.html#jump\">To read this article in full, please click here<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>As a large number of companies continue to test and deploy generative artificial intelligence (genAI) tools, many are at risk of AI errors, malicious attacks, and running afoul of regulators \u2014 not to mention the potential exposure of sensitive data. 
For example, in April, after Samsung\u2019s semiconductor division allowed engineers to use ChatGPT, workers using [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_jetpack_memberships_contains_paid_content":false,"footnotes":""},"categories":[1],"tags":[],"class_list":["post-1065","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"_links":{"self":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts\/1065","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/comments?post=1065"}],"version-history":[{"count":0,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/posts\/1065\/revisions"}],"wp:attachment":[{"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/media?parent=1065"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/categories?post=1065"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/hello.inherentknowledge.org\/2024\/wp-json\/wp\/v2\/tags?post=1065"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}