{"id":2394,"date":"2024-10-23T11:12:04","date_gmt":"2024-10-23T11:12:04","guid":{"rendered":"https:\/\/news.cybertechworld.co.in\/index.php\/2024\/10\/23\/researchers-reveal-deceptive-delight-method-to-jailbreak-ai-models\/"},"modified":"2024-10-23T11:12:04","modified_gmt":"2024-10-23T11:12:04","slug":"researchers-reveal-deceptive-delight-method-to-jailbreak-ai-models","status":"publish","type":"post","link":"https:\/\/news.cybertechworld.co.in\/index.php\/2024\/10\/23\/researchers-reveal-deceptive-delight-method-to-jailbreak-ai-models\/","title":{"rendered":"Researchers Reveal &#8216;Deceptive Delight&#8217; Method to Jailbreak AI Models"},"content":{"rendered":"<p>Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones.<br \/>\nThe approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple and effective, achieving an average&hellip;\u00a0The Hacker News<\/p>","protected":false},"excerpt":{"rendered":"<p>Cybersecurity researchers have shed light on a new adversarial technique that could be used to jailbreak large language models (LLMs) during the course of an interactive conversation by sneaking in an undesirable instruction between benign ones. 
The approach has been codenamed Deceptive Delight by Palo Alto Networks Unit 42, which described it as both simple&hellip;&nbsp;<a href=\"https:\/\/news.cybertechworld.co.in\/index.php\/2024\/10\/23\/researchers-reveal-deceptive-delight-method-to-jailbreak-ai-models\/\" class=\"\" rel=\"bookmark\">Read More &raquo;<span class=\"screen-reader-text\">Researchers Reveal &#8216;Deceptive Delight&#8217; Method to Jailbreak AI Models<\/span><\/a><\/p>\n","protected":false},"author":0,"featured_media":2395,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"neve_meta_sidebar":"","neve_meta_container":"","neve_meta_enable_content_width":"","neve_meta_content_width":0,"neve_meta_title_alignment":"","neve_meta_author_avatar":"","neve_post_elements_order":"","neve_meta_disable_header":"","neve_meta_disable_footer":"","neve_meta_disable_title":"","_themeisle_gutenberg_block_has_review":false,"footnotes":""},"categories":[1],"tags":[],"_links":{"self":[{"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/posts\/2394"}],"collection":[{"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/comments?post=2394"}],"version-history":[{"count":0,"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/posts\/2394\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/media\/2395"}],"wp:attachment":[{"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/media?parent=2394"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/categories?post=2394"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/news.cybertechworld.co.in\/index.php\/wp-json\/wp\/v2\/tags?post=2394"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}