
Blog Post

Harnessing AI: How Cybersecurity Leaders Can Protect Their Business

An interview with Terence Jackson, CISM, CDPSE, GRCP, CMMC-RP

Veronica Wolf

We sat down with Terence Jackson, Chief Security Advisor at Microsoft and self-proclaimed “CISO in recovery,” to discuss what the modern cybersecurity leader can do to harness AI and protect their business. We also discuss how the modern CISO can address concerns around AI and machine learning, and why human-centric design is essential to improving security processes and reducing risk.

 

What are the key challenges cybersecurity leaders may encounter when implementing AI-powered security measures, and how can they overcome them?

One of the obvious ones we’ve been discussing is the skills shortage in cybersecurity. There are now more than 750,000 open jobs in the United States. As the industry rapidly pivots into AI, another gap comes up: a lack of skilled AI practitioners paired with a lack of cybersecurity people. It’s almost a perfect storm, where we’ve engineered AI as the solution to make up for that skill gap.

Another challenge we’re seeing ramp up is threat actors. They’re evolving and adapting their tactics, techniques, and procedures to make use of AI at scale. We were already up against threats coming at cloud velocity; now it’s cloud velocity with AI, and the only way we’re going to get through is by using AI to combat it.

 

How can AI improve threat detection and response capabilities within an organization’s cybersecurity framework?

Many companies, not just Microsoft, use AI to do repetitive tasks and analyze threats. But this is the first time we can implement tools that allow a conversation between an analyst and the tool in order to make more accurate decisions.

We’ve seen use cases where you can pair a junior security analyst with a smart AI that guides them through an investigation. This allows them to interact with it, ask questions, look at the suggested steps, and get on-the-job training. That hasn’t been the paradigm before; now, some companies are moving toward fully autonomous generative AI security analysts. But I think we’re a ways away from doing that at scale.

Now, we’re describing our AI solutions as copilots, because they still need the human pilot to really guide them and make those calculated decisions. So modernizing security operations is huge. So is identifying new malware strains; only so many people can reverse engineer malware.

 

What ethical considerations should cybersecurity leaders consider when implementing AI-based security measures?

The first one is making sure that there’s human oversight. That’s a requirement to ensure that AI systems operate within the proper parameters and make ethical decisions. It’s needed to eliminate bias from the ground up. And that’s why I describe it as a copilot, because the human is still the pilot. Additionally, privacy concerns still need consideration when you collect and process personal data, especially for multinational companies with a presence in, let’s say, the EU with GDPR.

 

What are some of the common pitfalls when it comes to data quality?

We live in a large world with a lot of opinions, a lot of facts, and output that isn’t always factual. So it goes back to putting guardrails around that, and in the last six months there have been great strides.

Also, some are putting so much attention into overprotecting, which isn’t bad, although sometimes getting valid output is becoming a challenge. But that just means all these companies, whether it’s OpenAI, Microsoft, or Google, are building and flying the plane at the same time. We are learning quickly, though, especially on hot topics like religion and politics. We’re going into an election year, so there’s a lot of attention on entities that will pay for media around advertising and misinformation, and for disseminating AI-generated images or deepfakes. This is probably the first full election cycle where these tools are broadly available to anybody with an internet connection and a credit card. So, it’ll be an interesting experiment.

We also have to look at the data used to train these models and the potential for bias in that training data. I frequently have conversations about those kinds of concerns, responsible AI, and using it ethically.

 

What should cybersecurity leaders be thinking about when it comes to the legal ramifications of proprietary data used with free or paid AI apps?

Many creators are frustrated because they have items on the internet that may or may not have been copyrighted or trademarked. And once you ask an AI to generate a response on a certain topic or to create artwork in the style of a particular artist, is it copyright infringement? Is it subject to trademark violations? We’re still in an exploratory phase around generated content, and it’s leading to a lot of questions. Should authors and writers divulge that the content was written by an AI? Should they pass it off as their own? Is it plagiarism?

There are many ethical and moral dilemmas we find ourselves in right now. But from the cybersecurity perspective, I would say the focus will be on limiting the potential for inaccurate outputs based on prompts and the training data. Those are top-of-mind issues for security leaders if they use these tools. They want to trust that the outputs the tool gives their team won’t cause harm.

 

How can cybersecurity leaders ensure the proper integration and interoperability of AI systems with existing cybersecurity infrastructure?

It starts with education for the executives. Many companies look at AI like, “We want to use it.” But what are the use cases for deploying it? What problem are you looking to solve with it? And how can we walk into that responsibly and transparently? Consider the potential for bias and hallucinations, and think, “It’s going to be a marathon.” Put simply: identify use cases.

To do this, there are new roles popping up, like ethical AI officers and prompt engineers. You’ll need to build a team around building, deploying, using, and monitoring the solutions. But you also need a governance framework covering auditing, privacy, and regulatory issues to ensure you stay within your parameters and boundaries.

 

How can cybersecurity leaders strike a balance between leveraging AI for automation and decision-making while maintaining human oversight and accountability in their security operations?

The human has to remain in the loop. That’s really the only way it works at scale. Automation is one part of it, but you must define what the use cases look like and what you’re comfortable with. Then, when something falls outside that range, the human must return to the decision-making process for that oversight and accountability.

Although you can, you shouldn’t fully delegate certain decisions to any AI or system. Just think about an alert coming in that the AI deems critical, and it decides, “Hey, I’m going to search the environment for every instance of that vulnerability and disconnect it from the network.”

You know the implications of that happening, right? Think about if it’s a hospital; that’s probably not a good thing. It’s all about context and risk.

So, it goes back to having good risk frameworks and understanding that not everything will be an immediate candidate for a generative AI solution.
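To make that idea concrete, here is a minimal, hypothetical sketch of the kind of guardrail Jackson describes: automated response only runs inside a pre-agreed risk range, and anything outside that range goes back to a human. The policy table, field names, and thresholds below are illustrative assumptions, not any specific product or Microsoft feature.

```python
# A minimal sketch of a human-in-the-loop gate for AI-recommended response
# actions. All names (Recommendation, RESPONSE_POLICY, the action strings)
# are illustrative assumptions, not a vendor API.
from dataclasses import dataclass

# Actions the organization is comfortable automating, and the blast radius
# (number of affected assets) it will tolerate without a human decision.
RESPONSE_POLICY = {
    "quarantine_email": {"auto": True,  "max_assets": 500},
    "disable_account":  {"auto": True,  "max_assets": 5},
    "isolate_host":     {"auto": False, "max_assets": 0},  # e.g., hospital systems stay human-only
}

@dataclass
class Recommendation:
    action: str           # action proposed by the AI assistant
    assets_affected: int   # how many systems or users the action would touch
    confidence: float      # model's confidence in the underlying detection

def decide(rec: Recommendation) -> str:
    """Auto-execute only inside the pre-agreed range; otherwise escalate to a human."""
    policy = RESPONSE_POLICY.get(rec.action)
    if (
        policy is None                          # action not covered by the risk framework
        or not policy["auto"]                   # explicitly reserved for humans
        or rec.assets_affected > policy["max_assets"]
        or rec.confidence < 0.9                 # low-confidence detections need review
    ):
        return "escalate_to_analyst"
    return "auto_execute"

# Example: the AI wants to isolate every host carrying a given vulnerability.
print(decide(Recommendation("isolate_host", assets_affected=1200, confidence=0.95)))
# -> escalate_to_analyst
```

In this sketch, a hospital-style environment could simply mark host isolation as never auto-executable, which is exactly the kind of context- and risk-dependent boundary described above.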

 

In what ways can cybersecurity leaders collaborate with AI experts and data scientists to develop and deploy AI-powered security solutions effectively?

I say from the cradle to the grave. It will be very symbiotic, because cybersecurity leaders at large know about risk. You must partner with the experts, the data scientists and the AI specialists, especially in large global organizations, because they are the ones building and operationalizing these systems.

Cyber leaders must set clear boundaries around risk frameworks, the work, and the partnership. It must be a real partnership between the development and AI teams to eliminate bias and prevent the unfair use and sharing of regulated data.

 

How else can cybersecurity leaders effectively leverage AI to enhance their organization’s overall security posture?

We started this conversation with the huge skills gap and the ability to leverage generative AI to meet attackers at the point of attack. Dwell time over the years has gone from months and weeks to days and hours now.

“Our most recent data shows that the average dwell time is around 72 minutes from when an employee clicks on a malicious link to when data is being exfiltrated. Seventy-two minutes.”

We’ll have to leverage AI to ensure that, when an event happens, we can immediately detect, block, and remediate to prevent data from leaving the boundary. Because once it leaves, we get into single extortion, double extortion; I think we even have triple extortion now once the data is out there.

The goal is to prevent data from leaving. At the speed of AI and machines, some of that will be automated to give the security operations team a fighting chance. I have this conversation with customers all the time about our solutions.

There are certain things you will definitely want to remediate automatically to give your team time to conduct an investigation. If an email gets clicked and data immediately starts uploading to an unsanctioned application, you’ll probably want to block that. Just shut it down and investigate; ask for forgiveness, not permission. That’s kind of where we are, and that’s where we have to get with generative AI. It’s going to equip us to be better defenders.
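As a rough illustration of that “block first, investigate after” posture, here is a small hypothetical sketch. The event fields, allow-list, and helper names are assumptions made for the example, not a real detection-and-response API.

```python
# A simplified sketch of the containment rule described above: if data starts
# flowing to an unsanctioned application right after a suspicious click,
# contain it immediately and open an investigation for the SOC.

SANCTIONED_APPS = {"sharepoint.contoso.com", "salesforce.com"}  # illustrative allow-list

def handle_upload_event(event: dict) -> list[str]:
    """Return the containment actions to take for a data-upload event."""
    actions = []
    destination = event["destination_domain"]
    if destination not in SANCTIONED_APPS and event.get("followed_suspicious_click"):
        # Contain now, ask forgiveness later: cut the exfiltration path first.
        actions.append(f"block_upload:{destination}")
        actions.append(f"isolate_session:{event['session_id']}")
        # Then give the security operations team the context to investigate.
        actions.append(f"open_case:user={event['user']},dest={destination}")
    return actions

# Example: an upload to an unsanctioned app shortly after a malicious click.
print(handle_upload_event({
    "user": "alice@contoso.com",
    "session_id": "s-1234",
    "destination_domain": "files.example-unsanctioned.app",
    "followed_suspicious_click": True,
}))
```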

 

Is there anything that you feel like has been an “Aha” moment since you’ve been delving into AI for cybersecurity?

It’s a bold new world, and the barrier to entry has been greatly lowered. You don’t have to be super technical, or technical at all. If you can speak a language and create a prompt, you can now have an exchange with a supercomputer. This democratization of AI can help those who were once marginalized generate an income and even solve real-world problems. Yeah, the game has definitely changed.
