Technology has always advanced more rapidly than policy. However, over the past two years, we’ve passed a tipping point: artificial intelligence is now reshaping cybersecurity as fundamentally as it is reshaping business.
For finance leaders, travel teams, and business decision makers, the stakes have never been higher: the same AI tools that drive productivity can be weaponized to impersonate, phish, or leak data in seconds.
This Cybersecurity Awareness Month, I’ll explore four emergent risks—and how Emburse is evolving to stay ahead.
1. Deepfakes & Voice Cloning: When “Pause, Then Verify” Becomes Table Stakes
It’s now possible to clone a person’s voice with just a few seconds of recorded audio. That means a well-placed phone call — a tactic known as vishing, or “voice phishing” — can sound exactly like your CEO or CFO, tricking employees into approving fraudulent transfers or sharing confidential data. In one reported case, a Hong Kong firm transferred $25 million after fraudsters used deepfake technology to impersonate the company’s CFO on a video call.
Deloitte also estimates that generative AI–enabled fraud could cause $40 billion in losses in the U.S. by 2027 (up from $12.3 billion in 2023).
How Emburse Responds
At Emburse, we layer extra verification into our high-risk financial workflows: transaction thresholds trigger live verification, and voice-based requests (even from “trusted” accounts) require independent confirmation.
But detection policies only go so far. We also actively coach our teams to pause first—no matter how credible a request sounds.
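To make the idea concrete, here is a minimal sketch of threshold-triggered verification. The threshold value, channel names, and function names are invented for illustration; they are not Emburse’s actual controls.

```python
from dataclasses import dataclass

# Hypothetical policy value for illustration only.
LIVE_VERIFICATION_THRESHOLD = 10_000  # USD

@dataclass
class PaymentRequest:
    amount: float
    channel: str    # e.g. "email", "voice", "portal"
    requester: str

def required_checks(req: PaymentRequest) -> list[str]:
    """Return the extra verification steps a request must pass
    before funds move."""
    checks = []
    if req.amount >= LIVE_VERIFICATION_THRESHOLD:
        checks.append("live_callback")  # call back on a known-good number
    if req.channel == "voice":
        # Voice can be cloned, so even "trusted" callers need
        # confirmation through a second, independent channel.
        checks.append("independent_confirmation")
    return checks
```

The point of the sketch is that verification is driven by the request’s risk profile, not by how convincing the requester sounds.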

2. Prompt Injection & AI Governance: Protecting the Boundaries of What AI Can Do
Generative AI is a powerful enabler—but it also introduces fresh potential vulnerabilities. Chief among them is prompt injection, in which attackers craft input that subverts the model’s instructions, causing it to reveal confidential information or execute unintended behavior.
The OWASP GenAI project lists prompt injection as its top risk for large language models. In fact, studies such as Systematically Analyzing Prompt Injection Vulnerabilities show that 56% of tested prompt attack strategies succeeded across various LLM architectures.
How Emburse Responds
Because prompt injection cyberattacks can be subtle and stealthy, we operate with a "zero trust for AI input" philosophy. All external inputs to any AI system are sanitized, validated, and quarantined when uncertain. We subject new AI capabilities to red-teaming (simulated cyberattacks), enforce role-based access control, and deploy layered defense to shield downstream systems from rogue AI system instructions.
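A toy version of that triage step might look like the following. The patterns and function name are invented for illustration; real defenses combine trained classifiers, privilege separation, and output filtering rather than a keyword list.

```python
import re

# Illustrative red flags only; a production system would use far
# richer signals than these regexes.
SUSPICIOUS = [
    r"ignore (all|previous|prior) instructions",
    r"system prompt",
    r"you are now",
]

def triage_external_input(text: str) -> str:
    """Return 'quarantine' for inputs that look like injection
    attempts, otherwise 'allow'. Under zero trust, anything
    ambiguous defaults to quarantine for human review."""
    lowered = text.lower()
    if any(re.search(p, lowered) for p in SUSPICIOUS):
        return "quarantine"
    return "allow"
```

The design choice that matters is the default: untrusted input is treated as hostile until validated, not the other way around.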
3. AI-Powered Phishing: When Adversaries Scale Creativity
AI is turbocharging phishing attacks. Attackers can now produce personalized, fluent, credible emails at scale, making traditional filters and training less effective. According to reporting by cybersecurity experts Hoxhunt and SOCRadar, since ChatGPT’s public debut, phishing volume has exploded by over 4,000%.
In controlled tests, AI-crafted phishing emails outperformed elite human-written ones—achieving 24% higher success at tricking recipients. That changes the math for any defense strategy built on spotting clumsy, generic lures.
How Emburse Responds
Our approach at Emburse is twofold:
- Adaptive threat detection: We incorporate AI models that analyze email semantics (tone, sender behavior, context) rather than relying solely on signature or heuristic blocking.
- Culture over compliance: Our phishing training isn’t a checkbox—it’s ongoing. We run dynamic simulations, encourage reporting, and foster a shared mindset: If it feels odd, check twice.
For finance teams, this means never letting the legitimacy of a sender override internal verification protocols—especially for requests involving funds, vendor changes, or credential resets.
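To show what “analyzing semantics rather than signatures” means in miniature, here is a toy risk-scoring sketch. The features, weights, and word list are invented for illustration; real systems learn these signals from data rather than hard-coding them.

```python
# Invented urgency cues for this example only.
URGENCY_WORDS = {"urgent", "immediately", "wire", "confidential"}

def phishing_risk(body: str, sender_is_new: bool, asks_for_funds: bool) -> float:
    """Combine content, sender-behavior, and request-type signals
    into a 0..1 risk score (weights are illustrative)."""
    words = set(body.lower().split())
    score = 0.0
    score += 0.2 * len(URGENCY_WORDS & words)  # semantic/urgency cues
    score += 0.3 if sender_is_new else 0.0     # sender-behavior context
    score += 0.4 if asks_for_funds else 0.0    # nature of the request
    return min(score, 1.0)
```

Note that no single signal is decisive; a fluent, typo-free email still scores high if it is a first-time sender asking for money.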
4. Shadow AI: The Hidden Risk in Everyday Tools
Shadow AI refers to employees using unsanctioned generative AI services (e.g., ChatGPT, writing assistants) to bypass workflow friction. The risk is that sensitive data may leak outside controlled environments without anyone in IT noticing suspicious activity.
According to the 2025 ManageEngine “Shadow AI Surge” report:
- 97% of IT leaders see significant risks in shadow AI, yet 91% of employees think the risk is low or manageable
- 60% of employees admit their use of unapproved AI tools has increased year-over-year
- 93% concede they’ve input sensitive information or work data into AI tools without approval
Another insight: more than 80% of tech leaders say AI adoption is outpacing their ability to safely vet tools—and about 37% of employees admit they’ve fed internal data (customer, financial, operational) into shadow AI tools.
How Emburse Responds
Rather than outright bans—which often drive usage further underground—Emburse advocates a governed adoption model. We publish clear usage policies, categorize acceptable vs. restricted data for AI ingestion, and roll out internal privacy tools with built-in data masking and activity logging. In parallel, we monitor AI usage patterns and flag anomalous behaviors that may require intervention.
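As a minimal sketch of what data masking before AI ingestion can look like, consider the following. The patterns and function name are invented for this example and are far from exhaustive; production masking covers many more data types.

```python
import re

# Illustrative patterns only: one for email addresses, one for
# 13-16 digit card-like numbers (digits optionally separated by
# spaces or dashes).
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD":  re.compile(r"\b\d(?:[ -]?\d){12,15}\b"),
}

def mask_for_ai(text: str) -> str:
    """Replace sensitive tokens with labels before text leaves
    a controlled environment for an external AI service."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Masking at the boundary lets employees keep the productivity benefit of AI tools while the sensitive values never leave the controlled environment.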

From Risk Awareness to Resilient Trust
Cybersecurity now goes beyond blocking attacks. It’s about safeguarding trust: in your sensitive data, your workflows, and your people. For finance and operations leaders, trust is the foundation of every transaction, every approval, every dollar moved.
At Emburse, we don’t treat security as an isolated function. It’s part of how we design, build, and operate Emburse Expense Intelligence: adaptive controls, intelligent orchestration, and human-aware decision paths built in.
As AI evolves and cybercriminals use it more creatively, our best defense against these emerging threats is vigilance, governance, and skepticism. Awareness is the start. What we create next—and how responsibly—will shape the future of digital trust.
Request a personalized demo and explore what AI-driven travel and expense management can mean for your organization.