    86% of security pros worry about a phishing future where criminals are using Artificial Intelligence

    IT Discussion
    3 Posts, 3 Posters, 562 Views
    • stus (Vendor)


      A new survey by Webroot shows that 86% of security professionals worry that AI and machine learning (ML) technology could be used against them. And they are right to worry, because it will be, and probably already is, starting with fake celebrity sex videos.

      The survey shows the US is an early adopter of AI for cyber security, with 87 percent of US professionals reporting their organizations are currently using AI as part of their security strategy.

      Three-quarters of cyber security professionals in the US believe that, within the next three years, their company will not be able to safeguard digital assets without AI. Overall, 99 percent believe AI could improve their organization's cyber security.

      Respondents identified key uses for AI including time-critical threat detection tasks, such as identifying threats that would have otherwise been missed and reducing false positive rates.

      "There is no doubt about AI being the future of security as the sheer volume of threats is becoming very difficult to track by humans alone," says Hal Lonas, chief technology officer at Webroot. More detail is available in Webroot's Quarterly Threat Trends report.

      AI is a game changer, for better or for worse

      This is the first time in history that AI has reached the level predicted in sci-fi for decades, and some of the smartest people in the world are working on ways to tap its immense power.

      And some bad guys are using it to create fake celebrity sex videos. Yes, you read that right.

      This is going to be the next wave of phishing emails that use social engineering to manipulate your users into opening an infected attachment.

      Using a face-swap algorithm of his own creation, built with widely available tools like TensorFlow and Keras, Reddit user "Deepfakes" combined easily accessible materials with open-source code that anyone with a working knowledge of machine learning could use to create serviceable fakes.

      "Deepfakes" has produced videos or GIFs of Gal Gadot (now deleted), Maisie Williams, Taylor Swift, Aubrey Plaza, Emma Watson, and Scarlett Johansson, each with varying levels of success. None are going to fool the discerning viewer, but all are close enough to hint at a terrifying future.

      After the algorithm is trained, mostly on YouTube clips and Google Images results, the AI goes to work arranging the pieces on the fly to create a convincing video with the preferred likeness. That could be a celebrity, a co-worker, or an ex. AI researcher Alex Champandard told Motherboard that any decent consumer-grade graphics card could produce these effects in hours. (THIS LINK IS NSFW!)
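      The core trick behind these fakes is surprisingly simple: one encoder is trained to compress faces of both people into a shared representation, and each person gets their own decoder; swapping means decoding person A's face with person B's decoder. Here is a deliberately toy numpy sketch of that idea, with random arrays standing in for face crops and linear layers standing in for the deep convolutional networks the real TensorFlow/Keras tools use. All names, sizes, and the learning rate are illustrative assumptions, not the actual Deepfakes code.

      ```python
      # Toy sketch of the shared-encoder / per-identity-decoder idea behind face swaps.
      # Random arrays stand in for face crops; linear layers stand in for deep convnets.
      import numpy as np

      rng = np.random.default_rng(0)
      DIM, LATENT = 64, 8                    # flattened "face" size and bottleneck size

      # One encoder shared by both identities, one decoder per identity.
      W_enc = rng.normal(0, 0.1, (DIM, LATENT))
      W_dec_a = rng.normal(0, 0.1, (LATENT, DIM))
      W_dec_b = rng.normal(0, 0.1, (LATENT, DIM))

      def reconstruct(x, W_dec):
          """Encode with the shared encoder, decode with the given decoder."""
          return x @ W_enc @ W_dec

      def train_step(x, W_dec, lr=1e-3):
          """One gradient-descent step on mean squared reconstruction error."""
          z = x @ W_enc                      # (batch, LATENT)
          err = z @ W_dec - x                # (batch, DIM)
          grad_dec = z.T @ err / len(x)
          grad_enc = x.T @ (err @ W_dec.T) / len(x)
          W_dec -= lr * grad_dec             # in-place updates
          W_enc[...] -= lr * grad_enc
          return float((err ** 2).mean())

      faces_a = rng.normal(size=(32, DIM))   # stand-in for face crops of person A
      faces_b = rng.normal(size=(32, DIM))   # stand-in for face crops of person B

      for _ in range(200):                   # alternate identities each epoch
          loss_a = train_step(faces_a, W_dec_a)
          loss_b = train_step(faces_b, W_dec_b)

      # The "swap": encode a frame of person A, decode it with person B's decoder.
      swapped = reconstruct(faces_a[:1], W_dec_b)
      print(swapped.shape)                   # (1, 64)
      ```

      The real versions differ mainly in scale, not in concept: thousands of aligned face crops, convolutional autoencoders, and a GPU, which is why a decent consumer graphics card and a weekend are enough.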

      So, picture this. (Or rather, don't picture this!)

      Your user gets a spear-phishing email based on their social media "likes and shares", inviting them to see a celebrity sex video with... you guessed it, their favorite movie star! Take it one step further and your user will be able to order fake celeb sex videos with any two (or more) celebrities of their liking and get them delivered within 24 hours for 20 bucks.

      And a good chunk of these video downloads will come with additional malware like trojans and keyloggers that give the bad guys full pwnage. Yikes.

      All the more reason to educate your users within an inch of their lives with new-school security awareness training that sends them frequent simulated tests using phishing emails, phone calls, and text messages to their smartphones.

      We help you train your employees to better manage the urgent IT security problems of social engineering, spear-phishing, and ransomware attacks. Take the first step now. Find out what percentage of your employees are Phish-prone with our new, improved free Phishing Security Test.

      Get Your Free PST Now

      https://www.knowbe4.com/phishing-security-test-offer

      Warm regards,
      Stu Sjouwerman
      Founder and CEO
      KnowBe4, Inc.


      • JaredBusch

        As always, porn leads the way in adopting new technology.

        • bigbear @JaredBusch

          @jaredbusch said in 86% of security pros worry about a phishing future where criminals are using Artificial Intelligence:

          As always, porn leads the way in adopting new technology.

          Lol yeah I read through the article just to learn the premise of the pic...

          Notta
