Artificial intelligence raises threat of more advanced impersonation attacks, professionals say


Artificial intelligence applications in cybersecurity are another illustration of how the technology presents both risks and opportunities for organizations. It is advancing the strategies of bad actors even as it expands the ways cybersecurity professionals prevent and defend against attacks.

Trevor Kems, an ethical hacker based in the Des Moines metro area, graduated from Iowa State University in 2021, about a year before ChatGPT came on the scene.

He said the explosion of ChatGPT and other large language models (LLMs) has divided the current era in cybersecurity into “pre-ChatGPT and post-ChatGPT.”

Pre-ChatGPT, use cases for AI in cybersecurity were more specific, he said. Now, LLMs have given rise to AI being applied more broadly, including helping professionals like him work more efficiently.

We spoke with Kems and other local cybersecurity professionals to gauge how AI is transforming challenges and opportunities in cybersecurity defense.

Their responses have been lightly edited for length and clarity.



Trevor Kems, ethical hacker

Trevor Kems is an ethical hacker, also known as a penetration tester or pen tester. His job is to simulate how bad actors would attack an organization in order to test companies’ cybersecurity protections and identify vulnerabilities. “We act like [the attackers], but we don’t actually go all the way,” Kems said. “We’re not stealing data. We’ll go up to the point of ‘Hey, I can steal the data’ and then stop.”

Like other technologies, AI is at the disposal of both cybersecurity defenders and attackers. As a cybersecurity professional, what opportunities and risks does this present?

On the good side, before ChatGPT, if you had a question, you had to go find the information yourself. These large language models make it a lot easier now to just ask a question and get pointed in roughly the right direction. Everybody talks about error, but if the difference is getting 90% of the way there versus 1%, I’m going to take the 90% even if it’s 20% wrong, because at the end of the day it points me in the right direction.

The risks come with that possible error, and there’s a privacy aspect as well. OpenAI says they train their models on your data. I think you can opt out of that, but the problem is, what if the accounting department is putting all of its financial data into these LLMs? Then you have problems with sharing sensitive data and code. I was writing a server for the cyber defense competitions at Iowa State and I said, “Hey, write me this thing.” I just wanted to experiment because it’s not something serious. I went to run it and it was wrong. Now, it was not that wrong, but at the end of the day, it could introduce vulnerabilities. It’s parroting what people have done, and people are fallible.

What new threats or challenges are companies facing with AI being available to cyber attackers?

I think it’s lowering the bar to entry. Before, you had to go read manual pages online and learn how to use the tools properly, whereas nowadays it’s easier to train people to do it, and that can really help attackers. I don’t know if it’s directly tied to AI, but state-sponsored hacker groups and things like that have been heavily funded by ransomware campaigns, and that’s been the real driver: instead of doing a job like data entry in one of those countries where hackers run rampant, why don’t you learn how to use these tools and conduct ransomware campaigns? I think AI has helped that. I don’t know if there’s an exact data point that says there’s a direct correlation, but it’s just like asking what would happen if we didn’t have Google and Google came on the scene two years ago. It would change the landscape of knowledge in general, and then knowing how to hack.

Are there new threats rising from generative AI in particular?

One of the big challenges — it might not be happening just yet but it’s on the horizon — is custom text-to-speech and custom video. We saw the explosion of deepfakes online of political figures and disinformation campaigns, but I think the big thing in the next generation of these systems is going to be a phone call that sounds like a family member or your boss and doesn’t necessarily sound out of place. We trust that: if you get a phone call and the caller ID says boss, you’re going to pick it up. If it sounds like your boss and they’re not making an unreasonable request, you’ll probably go do it. We’re seeing people sending gift cards out, but what if it’s not gift cards anymore, it’s account numbers or product or dollars in general? That’s something we don’t really know how to stop other than things like code words. We’re going to cross that bridge very shortly in my opinion, probably a year, maybe two. Instead of spam, it’s going to be “Hey, it’s your mom. I’m locked in jail.” So how do you prevent that? How do you tell your employees about it if you’re in that situation? It’s going to be very complicated, and I’m sure there’ll be something that prevents it.

What other conversations on AI in cybersecurity do you think are on the horizon?

If somebody makes an AI agent that’s really good at stopping malware in general, it’s going to be like the arms race of the Cold War. If I build an agent that’s good, there are going to be attackers out there who will make their reverse engineering efforts better. It’s just a constant cycle of that, and I think we’re going to start to get into that arms race. Also, there’s probably going to be something on the horizon around secure computing. Apple, for example, has come out with its private cloud AI system. We’re going to see stuff like that, where our devices locally don’t have enough compute power to do the AI work, so let’s off-load it and use that cloud infrastructure. I’m a big privacy advocate, and part of that is asking how you make sure things stay private at the same time.



Heartland Business Systems: Jeff Franklin and Ben Hall

Heartland Business Systems (HBS) is an IT services provider headquartered in Wisconsin with locations in 10 states. HBS has an office in Des Moines, and in 2022 acquired cybersecurity firm Pratum. Jeff Franklin is a virtual chief information security officer, meaning he works directly with clients in a variety of industries to build out their cybersecurity plans and helps them maintain those plans over time. Ben Hall is one of HBS’ practice managers of risk, governance and compliance, where he oversees the team that provides strategic cybersecurity services like information security consulting, risk assessments, IT audits and incident response review.

How have you seen the AI applications in cybersecurity change with the growing presence of large language models?


Hall: I’ll start with a couple of the positives. With a lot of the security-based applications and tools, organizations are able to implement AI to help orchestrate responses to vulnerabilities and findings that could exist within a network. So rather than having a human go in and create a specific rule set based on a particular vulnerability, AI can create that rule set specifically for that anomaly and apply it to any asset within whatever tool you’re using. For example, we’re utilizing a lot of Microsoft products for our extended detection and response. AI has been able to grab some of that malicious activity, realize it deviates from the overall norm and flag that this is not normal activity for this organization. From a positive perspective, AI has helped take away some of that human element of having to actually research something just because it looks odd.

Franklin: From a threat perspective, automation and privacy and all of those things that are going to benefit an attacker are being addressed the same way by vendors. It’s hard to listen to a cybersecurity demo anymore without them talking about some level of integration with AI. While we know attackers are going to be leveraging this automation, on the reverse side, vendors are going to be leveraging it for defensive purposes as well. It’s still that cat-and-mouse game where the vendors are going to go back and forth with attackers.

Are there new cybersecurity risks that stem from the rise of new AI tools that companies should know?

Hall: There absolutely can be. With any type of new opportunity within the cybersecurity landscape, there’s always the chance for those newer things to be undefined and to potentially cause exposure to your organization. With AI specifically, it relies on the overall accountability and transparency of that data. If you’re utilizing a lot of those tools, how reliant are you on the information that it’s putting in there? If you look at the social engineering aspect, we used to be able to say, “Hey, you’re going to want to look for certain spelling errors or grammar mistakes that could help flag that phishing email as malicious.” AI has been able to clean that up a little bit, so depending on the extent of the attack, it could review an individual’s email cadence and mimic an actual email that would come from that individual to make it a little less suspicious. There have been instances of AI passing that overall eyeball test for a lot of humans, increasing the sense of urgency while still giving you the confidence that it’s a legitimate email when in fact it isn’t.

Franklin: I think the authentication piece is going to be continually more challenging: understanding whether it’s a legitimate email or a legitimate phone call or voicemail. If our voice is out there, it can be mimicked easily with AI. Even though we’re at the beginning steps of those deepfakes, it’s only going to get better, and it’s only going to get more convincing. Using the tool to impersonate people through a number of different mechanisms is going to make it more and more challenging for us just as consumers to recognize whether something is legitimate or not, so we need to be skeptical of what’s coming to us from the internet.

How are the effects of AI showing up in your work day to day?


Franklin: I think what I’m seeing now is organizations grappling with their own use of AI within their environment. They understand that there’ll be additional cybersecurity threats, but their focus right now is really how do we leverage AI to make our own processes better, to make our own company better, to give us a competitive advantage, while not doing something like having an employee accidentally tie a critical or confidential database to AI and have that become public information. What I’m seeing now is organizations creating internal policies and internal guidelines and looking at ways they can improve their own products and services with AI. I think they’re less concerned right now with the new threats posed by AI. We certainly are advising them of those threats, but we’ll continue to drive that AI defensive strategy for them.

Hall: A lot of what we’ll do with organizations is help them determine what that strategy is. Is that particular AI tool acceptable? Is it something they want to invest in and review? That probably means running it through some type of due diligence to make sure it’s not going to leak any information on the back end that could potentially cause an issue. Almost all organizations are going to have some type of acceptable use policy — the do’s and don’ts of what each member of their team should or shouldn’t be doing. You want to start incorporating some of those AI elements into that acceptable use policy. What can they use AI for? How should they validate the information they’re getting from those approved tools? There’s just got to be a little bit of review to make sure that information is correct and accurate.

How do you think companies might approach mitigating risks of AI from a cybersecurity standpoint moving forward?

Franklin: Companies have to evaluate where their risk is. One of the biggest risks that we see is impersonation of people and email. We’re tricked, and we click on things and enter our usernames and passwords, so if that’s the biggest risk, what is the defensive mechanism for it? That could be a number of technologies. I think you’re going to see even broader adoption of multi-factor authentication and different types of authentication. How do we continue to validate not just that I got an email, but that I also have to authenticate again to make sure this person is legitimate? I think the burden long term is really going to be on end users and companies to address how they avoid being fooled.

Hall: Alongside that is what your overall risk appetite is going to be within the organization, not only as it ties to AI. Part of putting your risk appetite together is asking what an acceptable level of risk is within your organization. Obviously, you want to keep it as open as you can to progress and grow your organization, but you want to add enough guardrails to set the rules and expectations for what your personnel should be following when utilizing it, and to make sure some of that data doesn’t get leaked on the back end. It’s truly determining what your risk appetite will be toward AI usage and setting that expectation with your internal personnel about what they should be following.

Are there any other trends you see on the horizon for AI in cybersecurity?

Hall: Another thing to consider is the privacy implications. Specific to Iowa, if your customers are consumers and you’re collecting a lot of their personal information, some of the state’s privacy law takes effect at the beginning of January. If you’re utilizing AI to look at some of those customer records, the question is what controls and protections you are putting in place to privatize that data, whether through anonymizing or obfuscating it, and making sure you’re accounting for those privacy implications. Iowa just recently passed that law, but a majority of states do have privacy laws in place, so there’s more than likely going to be some type of machine learning or AI element to those privacy laws. Make sure you’re up to date with the regulatory frameworks you need to follow.


Sarah Diehn

Sarah Diehn is digital news editor and a staff writer at Business Record. She covers innovation and entrepreneurship, manufacturing, insurance, and energy.

