FBI Albany cyber squad leader discusses impacts and dangers of AI

Inside the FBI's Albany Field Office (Jim Levulis/WAMC)

The emergence and development of artificial intelligence have impacted many aspects of society: rapid growth in the value of technology companies, entertainment enterprises repurposing the likenesses of late performers, and academia trying to ensure work is original and authentic to the author. AI is also being used by bad actors to expand and enhance criminal activity.

Samantha Baltzersen is among those working to prevent and stop those nefarious actions. The supervisory special agent of the cyber squad and task force at the FBI’s Albany Field Office spoke with WAMC about her team’s work and how AI is playing out in the law enforcement sector.

Baltzersen: The cyber task force is comprised of agents, technical personnel and task force officers. We investigate both criminal and national security cyber crimes, and we look to leverage consequences on those cyber actors when we have the opportunity.

Levulis: Now, you've been with the Bureau, if I'm accurate, for about two decades, is that correct?

Yes, that's correct.

And you're obviously in a field, speaking of cyber broadly, that is constantly evolving, right? The technology rapidly changes. But when it comes to AI, artificial intelligence, from what you've seen, has this been more of a seismic shift?

Actually, that's the perfect way to put it, because in giving presentations, I've said, this is the earthquake in the landscape. This is the game changer on so many levels, because of the ways it can be used. It can be used for the positive, for cyber defense, or it can be used for the negative, to actually facilitate committing cyber crimes.

To that positive, from the Bureau's perspective, how might the Bureau be using it to investigate, solve and prevent crimes?

Well, right now, headquarters is actually taking a very thoughtful, slow approach to this, and I'm not involved in those conversations. So unfortunately, you're gonna have to take that to the D.C. folks. But, like I said, I think if the cybercriminals are using it, then we're going to need to, possibly on standalone machines, take a look at how they're using it and see what kind of information we can glean when they're trying to use AI to commit cyber attacks at a digital level, not for fraud, which is entirely different.

Could you go into those differences there?

So in using AI to commit frauds, that's like the grandparent scam. If I'm a cyber criminal, I could take the pictures your grandchildren are posting on TikTok or YouTube, their voices, anything where I can get their audio and video. Then I can have AI generate dialogue, and when I call the grandparent, I have video, I have audio, and they really believe it's their grandchild. We're actually seeing this happen. We've already had victims where this has happened. So it's really, really terrifying, the depths they can get to in impersonating other people to commit cyber frauds and scams. In the same way, a CEO could be imitated, or you or I, since there is now plenty of audio of us on the radio, and someone could attempt to impersonate us as well.

On the other hand, if they're using AI to commit cyber attacks, they're looking at coding malware using AI, they're looking to build backdoors into software that already exists. They're looking to use AI to script scans of a certain type of machine, say a Linux server versus a Windows machine, find vulnerabilities and then automatically exploit them. Those are the kinds of things that make it that much easier for people who aren't coders and who aren't very technical to commit these more complex cyber crimes.

So it's opening it up to a broader base of bad actors?

Absolutely.

The FBI field office in Albany, NY. (Jim Levulis/WAMC)

From a law enforcement perspective, what are some of the ways to combat those two different nefarious acts?

Well, first, for the frauds and scams, it's all about awareness and talking to the community, and we are trying to do that on a regular basis. We were at the New York State cybersecurity conference taking place yesterday and today and gave a presentation. There are actually lots of presentations on AI there right now. But it's especially about getting out to different populations of people. We do a good job trying to get out to schools, trying to get out to elder communities, working with our task force partners in state and local police. So I think it's really a concerted effort. I always say cybersecurity is a team sport, and this is where that really comes together, because we need reporting from people to let us know what's happening, so that we can help put out warnings and alerts, and also then go after those bad actors. So I think that's how we fight it on the frauds and scams side.

On the cyber side, I think we have to be creative, obviously in a very legal fashion. So, how do we think about this infrastructure when we're looking at how it's being attacked? Where do we need to get a search warrant now? Normally we get a search warrant for somebody's email account or phone number. Now we're looking at the infrastructure itself that they're using, and how they're using it, and getting legal process to gather log information and other information from there to see exactly what they're doing.

On the frauds and scams side, I know in the past when discussing this, when scams were conducted just over the telephone, there were some telltale signs, like being asked to go purchase, say, a gift card and hand over those numbers. Are those the same when it comes to the use of AI, or are there some different indicators that, as a member of the public, I should be on the lookout for?

Depending on which AI generator they use, and if you attempt to do it yourself you'll see this as well, it'll speed up voices, and it won't necessarily be in the proper cadence. You're gonna have to use a higher-end AI to make the voice really sound like the person. It'll be identifiably their voice, but maybe it's just a little too fast, or maybe it's just not the right tone. So that's a telltale sign.

Whenever people are asking you to get gift cards or put money into cryptocurrency, that's another telltale sign. There are actually a lot of cryptocurrency ATMs, because there's a chain of gas stations/convenience stores across the state that have Bitcoin ATMs in them. So if somebody's asking you to put money into Bitcoin, that's probably not a great thing, and that's also harder for us to get back. If a wire takes place, if you send money to somebody through the normal banking system, and you report it to us within 24 to 72 hours and put a complaint into IC3.gov, there's a team at headquarters that automatically attempts to pull back all those transactions through the banking system to get you your money back, and 71% of the time it's successful. But it has to be within that timeframe. And there are also complicating factors if it goes overseas; that's a whole other layer of stuff.

You mentioned some of the changes AI is forcing with search warrants when it comes to cybersecurity. I wonder more broadly, when it comes to gathering evidence that might lead to an eventual prosecution, have the laws caught up to artificial intelligence? Or is it like we've seen in the past with certain industries or nefarious actions, where they're happening and the laws aren't on the books yet to be able to penalize them?

I think the laws aren't fully there yet, but I'm hopeful that they will be, because it seems like there is enough energy on the part of lawmakers to push things forward. New York state and the federal government have already started pushing forward some aspects of AI's application in law. For instance, when it comes to images of children, if people are producing images that would constitute CSAM [child sexual abuse material], even if it's AI-generated and none of it's real, they're still considering that production and will prosecute you under the law for that. So that's great news.

But I think there's a lot of work on behalf of tech and lawmakers, because AI itself, it's like a child, right? Or, you know, a petulant adult. You can look at it and say, 'Don't do this. I want you to code, but whatever you do, don't put a backdoor into this code.' But then you have people who can talk the AI into putting a backdoor into the code, and it'll still do it, even though it was explicitly told not to. If you read articles in Wired and Ars Technica and other tech magazines, you'll see these articles on AI, where AI isn't intentionally trying to break the rules, but people can eventually get it to do so. Building a bomb, for instance, same thing. It's explicitly told, don't provide directions on how to build a bomb, but if you ask enough of the right questions, you'll eventually get it to answer the questions you need to build the bomb.

I know we've been discussing a lot of things that many people might find, you know, terrifying. You work in this field day in and day out. Is there anything as it pertains to artificial intelligence that particularly keeps you awake at night?

I would say not necessarily any more than the internet in general. I think technology can be used as an ultimate good or an ultimate bad. And my hope, and this is where we have to have a lot of faith in people, is that they will want to use these tools for the good and not for the bad. Not to hurt somebody else, not to shame somebody else, not to steal people's money. So I think it's terrifying what it can be used for. I don't like it, but I'm already sufficiently unhappy with the way technology can be used against people.

Jim was WAMC’s Assistant News Director and hosted WAMC's flagship news programs: Midday Magazine, Northeast Report and Northeast Report Late Edition.