How AI Agents Will Fool You in 2026

Sumsub for Experts4,019 words

Full Transcript

[music] [music] Quick question. If AI can browse the web, read your email, and click buttons like a human, how long until someone uses that to scam you [music] at scale? In this video, we'll break down what AI agents really are, what people are using them for right now, and how they'll fool everyday users in [music] 2026. Hi everyone, I'm your new host, Nicholas Harikov, head of technical pre-sales here at Sums Up. Welcome to Sumsup for Experts. On this channel, we unpack what's [music] happening in fraud, compliance, and risk, and what it means in the real world. Let's start with the basics. what AI agents are and why we believe that in 2026 [music] there will be a significant jump from chatbot apps dominating the market to AI [music] agent applications becoming much more prominent and why that changes everything. When AI applications like Chad GBT [music] blew up a few years ago, we started seeing the word AI slapped onto everything. [music] AI toasters, AI water bottles, it became background noise and people stopped caring as the definition got too blurry. [music] And now that the hype has matured, there's a new buzzword taking over. AI agents. And if you've been seeing agents everywhere and thinking, "Okay, but what does that actually mean?" You're not alone. I think that [music] if you ask most individuals who actively participate in conversations about AI agents for the definition of AI agents, you will get some variance in the responses for sure. Everyone is raving about AI agents. Tech companies are building them into their products in order to avoid looking like they are behind the curve. Investors are pouring billions [music] into them. And if you've been anywhere near a tech headline in the last six months, you have seen the phrase. But what actually are they? How are they different from the Chad GBT you already know? And maybe [music] more importantly for the things that we care about here at Sumsup, what possibilities do they open up for scammers, fraudsters, [music] and criminals? That's what this video is about. So, let's start from the top. And I promise we'll keep it practical. No fluff, no buzzword bingo. [music] You've probably used AI apps already. Chad Jubet, Gemini, Claude, whatever. You type something in it, it types something back. That's a chatbot. It responds to you. [music] It's reactive by nature. It doesn't do anything on its own. It [music] waits for you, the user, and then responds. End of the story. An AI agent is fundamentally different in that it [music] is able to act for you. It can browse the web, click buttons, fill out forms, read your email, check your calendar, compare prices across 20 different websites, [music] and even make purchases. All of that while iteratively learning from its experiences to understand how [music] to better achieve the goal that you gave it with the tools that it has. Think of it [music] this way. A chatbot is like texting a really smart friend for advice. An AI agent is like hiring a personal assistant who [music] actually goes and does the thing all without you lifting a finger. And the promise of this technology, the convenience that it can bring and in the hours that it can unlock in our calendars [music] is exactly why they're going mainstream and exactly why they're going to be [music] abused. And because I work on the technical side at Sumsub where we deal with identity risk and [music] fraud patterns every day, I'll also connect this to what it means for trust and verification. All right, definitions first [music] because the internet is already getting this wrong. A key confusion right now is agent washing. Vendors calling basic assistants agents. Gartner explicitly flags this misconception. [music] Assistants simplify tasks but depend on human input and don't [music] operate independently the way that agents do. So if you used a smart assistant and thought, "Well, that's an agent." Maybe, but maybe not. The difference does [music] matter. Agents are becoming much more widespread across businesses, but also personal use. >> [music] >> The most important recent signals are McKenzie saying that they have deployed over 25,000 agents internally. OpenAI launching Frontier, [music] an enterprise level AI agent platform for managing already deployed [music] AI agents across an organization's systems. And of course the [music] numerous partnerships announced between anthropic and very large enterprises that all aim to enhance AI capabilities of set organizations both [music] agentic in nature and not. All of these news and announcements matter because it is an acknowledgement that AI agent use in an enterprise setting is becoming normalized. So why should [music] you care as a consumer or as an everyday person? Well, it's because the same patterns that make agents useful at [music] work or anywhere really, delegation, autonomy, access to systems, also create new ways [music] to impersonate, steal access, and automate scams, which can then affect any [music] one of us in our everyday lives. I'll give you an example of how rapidly the agentic AI field is advancing. A recent use [music] case called computer use, where agents can click and type like a human, is already going mainstream. The key capability behind this is UI automation. In simple terms, [music] the agent can use a computer the way that you do. It sees the screen, moves the mouse, and types. Anthropic has introduced a computer [music] use tool that enables screenshot capture, mouse control, keyboard input, and full desktop automation. Microsoft's co-pilot studio includes similar functionality, allowing agents to treat websites and desktop applications as [music] tools, especially in situations where no API exists. Microsoft frames [music] this as the next leap beyond traditional fragile RPA systems. Meanwhile, Google has announced Gemini computer use model capable of navigating the browser [music] like a human, filling out forms and interacting with web pages integrated with Project [music] Mariner, AI Studio, and Vert.Ex AI. If it's not immediately obvious why AI agents [music] play a big role in this conversation, it's because it lowers the barrier to automate any workflow. [music] And now that computer use is an AI, meaning its ability to read the user interface of most applications has also become a norm. This means that attackers can automate [music] the use of virtually any application, spreading their malicious AI agent use to places which were previously protected from [music] harmful use. Now look at how quickly this landscape is shifting. And pay attention to how recent all this is. You probably noticed that the space is evolving at pregnant pace. But everything that we've been talking about so far has been predominantly focused on enterprisegrade [music] use of AI agents. However, we would be remiss if we did not talk about the most important AI agent phenomenon that is happening right now. Openclaw. If you've not been following the AI agent space, then allow me to clue you in very briefly. Openclaw is an open-source project positioned as the AI that actually [music] does things. What it actually is is an AI assistant that you run on your machine connected to the LLM model of your choice. [music] Whether that be one from OpenAI or from Antropic or elsewhere. Because it lives on a local computer, it means that once granted access to your everyday tools like email, calendars, the internet, and more, [music] it can actually do things for you without you having to constantly interact with an application that is open on your computer and other clunky user experience nuances that made [music] other AI assistants feel kind of clunky and unnatural. There's [music] a lot to talk about with OpenClaw, but the main reason why I wanted to bring it up is that OpenClaw marked a [music] huge shift in AI assistance. And now that literally hours ago, as of recording of this [music] video, OpenC Claw's creator Peter Steinberger has joined OpenAI, all of this means is that tools [music] like OpenClaw, AI assistant, in other words, will be made much more userfriendly to nonsoftware engineers and techsavvy [music] users. meaning most of us will likely be using an AI assistant very very soon. Now that we've established the landscape of AI agent tools, let's talk about their dark side. The scariest thing that AI agents enable is unprecedented scale. Given that AI agents can be [music] used for virtually anything, the scale can be related to the work that a single individual does [music] or the damage that they do. AI agents are an incredible technology. However, what [music] it's used for dictates whether it brings harm or good. But if you give something autonomy and access, you're also giving it the ability to do damage [music] faster than any human can. The fundamental problems of AI agents that lead to vulnerabilities is that [music] these agents have high autonomy and high access to your sensitive data. They occupy the most dangerous quadrant of the security risk [music] matrix. Open AI's own CISO acknowledged that prompt injection remains an unsolved security problem. So now that we understand what AI agents are at a very high level, let's talk about their adoption [music] and how it will impact their everyday user online. The ways in which AI [music] agents and their widespread use will fool you are different. Some use [music] cases will be more of the same stuff that we've seen before, but some will exploit the foundational vulnerabilities that these AI agents and the [music] tools that use them possess. Let's start with something basic that we have already heard of and look at how AI agent use changes these [music] known scams, emotional scams. Okay, so starting us off with perhaps something uncomfortable yet something that we've already seen before. I'm not sharing this to scare you. I'm sharing it so that you can recognize the pattern [music] and protect yourself. Specifically, we're talking about emotional scams like voice cloning or romance scams, which might be the most emotionally devastating and fastest growing AI enabled scams right now. And it's not a fringe problem anymore. According to Resemble AI, voice cloning [music] scams surged 148% in 2025. [music] Around the same time, deep fake enabled fraud passed 200 million in quarter 1 of 2025 alone. [music] As you might know, scammers only need 3 seconds of audio in many cases, and that can be enough to clone someone's voice convincingly. Once they [music] have it, they will use it to impersonate specific situations which take advantage of natural human emotion. I'm talking panic, [music] distress, desperation, shock, dread, you name it. the kinds of emotions that warrant a response from your loved and close ones. There's a real case that shows how brutal this can be. Sharon Brightwell in Florida got a call from someone who sounded exactly like her daughter crying about a car accident. In the moment, [music] it felt real, urgent, emotional, but also chaotic. She ended up handing 15,000 in cash to a [music] courier, but the call, the emotion, the urgency, it was all entirely AI generated. Another example is an elderly couple [music] in Alabama who received the call from their greatgrandson saying that he was bleeding and needed bail money. [snorts] Again, an AI clone voice pushing them into immediate action before they could verify anything. And the barrier to entry keeps dropping. [music] Voice cloning tools and other deep fake generation tools are now available for free or [music] just for a few dollars. And that's why agencies like the FBI, FTC, and Europole have all issued warnings. And Crowd Strike documented a 442% rise in fishing, also known as [music] voice fishing, across 2024 to 2025 alone. And unfortunately, voice cloning is just one piece of a much larger shift. Because if voice scams [music] hijack panic then next category hijacks something even more powerful and that's [music] emotional connection. We used to talk about romance scams as a laborintensive [music] human effort. In 2026 we are witnessing industrialized emotional engineering. Scammers are deploying agentic teams that manage thousands of parallel relationships. These economic sovereign agents with their own identities and personality [music] files named something like soul.md that allow them to maintain perfectly consistent high empathy personas over months. A single attacker [music] can now generate a personal agentic GDP automated revenue from [music] thousands of victims using evaluator agents that analyze shot histories to find the exact moment a victim is most susceptible to a pig butchering [music] investment. Roman scams in our deep fakes aren't new. What's [music] new and enabled by agentic AI is the scale and [music] the sophistication. Researchers estimated that 87% of the labor involved in these scams can now be automated or heavily assisted by AI. Think about what that changes. In the [music] past, scammers were limited by language barriers, time zones, and sheer human bandwidth. But today, [music] AI enhances tone. It translates flawlessly into any language, analyzes full chat histories, and [music] generates emotionally calibrated replies. And that's the real pattern we're seeing. It's not just that scams are getting smarter. It's that they're getting automated. [music] And when automation meets personalization, you get the next wave. Okay, now that we've gotten the traditional and scary [music] AI scams out of the way, let's get more AI agent specific. Up until now, most conversations [music] about AI scams have focused on content. Fake videos, fake voices, fake [music] emails. But in 2026, fake content isn't really the real danger anymore, even though it [music] still is out there. But it's delegated intelligence. Because the moment that you give an AI agent permission to act [music] on your behalf to browse, compare, negotiate, summarize, purchase, [music] schedule, or respond, you're now outsourcing judgment. And that changes everything. The way that AI agents will fool you in 2026 [music] won't feel like traditional scams. They won't always look dramatic. They won't always feel or urgent. In many cases, you won't even realize you were fooled. But let me show you how. [music] The first is delegated authority abuse. When your AI assistant makes the wrong decision correctly, or at least assumingly correctly. In the past, attackers had to convince you [music] to do something. Now they just need to influence your AI assistant. Your agent [music] might compare insurance plans, negotiate a refund, rebook a flight, choose a vendor, execute a purchase. And while all [music] those things are fantastic, it's acting within permissions that you gave it. And that's [music] the key. So once again, the manipulation doesn't happen through panic. It happens through slight tweaks and optimizations. Imagine you ask your assistant, [music] "Find me the best short-term health insurance plan." Your agent scans dozens of websites, reads reviews, compares pricing [music] models, but one of those sites just so happens to be designed not for humans, but for AI crawlers. It contains subtle [music] signals, biased evaluation framing, hidden waiting instructions, structured metadata [music] designed to influence ranking, and then your AI reads it. It adjusts [music] its internal scoring and it confidently recommends a plan that is objectively [music] worse for you but optimized for the attacker's outcome. And that's it. [music] There was no emotional manipulation. There was just slight computational manipulation that led to an outcome that [music] you did not want. You didn't get tricked. Your optimizer did. Number two, memory manipulation. When your AI remembers the wrong things permanently. A key feature of AI agents is that they remember. They store things like your [music] preferences, your habits, your past approvals, your trusted [music] sources, your behavioral patterns. If you listen to most [music] of the clips and sound bites on Open Claw on any AI agent assistant or tool and of course even ones from the creator of OpenClaw himself, they always mention how useful it is that an AI assistant [music] remembers pretty much everything about them. This makes them powerful and so unbelievably useful, [music] but it also makes them vulnerable. If an attacker can insert false information into your agents long-term memory, even once that memory persists, for example, a malicious site subtly suggests [music] user prefers vendor X. The user has previously approved the subscription. User considers the source trustworthy. Your AI stores it. Weeks later, you ask, "Should I go with vendor A or vendor B?" Your agent references its memory. It [music] says, "You've previously shown preference for vendor X, except you never did, and now your AI is reinforcing [music] a lie with confidence as it goes through everyday life. In 2026, fooling you won't always mean changing [music] your mind. It may just mean changing what your AI remembers about you." Number three, synthetic [music] identity swarms. Humans trust consensus. If one person tells you something suspicious, [music] you hesitate. If three independent people confirm the same story, your guard will probably drop. AI agents allow [music] attackers to fabricate consensus at scale. Imagine this. You meet someone online. They seem intelligent, [music] consistent, attentive. Later, a financial advisor independently confirms the same opportunity. Then, a customer service rep verifies the legitimacy of the platform. Different names, different tones, [music] different writing styles, but all of them are AI agents orchestrated by a single operator. They reference [music] each other, they validate each other, they reinforce each other. It feels like multiple points of proof, but in reality, it's just one coordinated [music] system. You're being overwhelmed by synthetic agreement. This is the shift from individual scams [music] to fabricated social environments. And humans are not wired to [music] detect coordinated artificial consensus. We're wired to trust it. Number four, emotional microtargeting engines. You are not just being [music] personalized nowadays, you're being modeled. The final layer is the most subtle. AI agents in 2026 don't just generate convincing [music] texts like scammers used to do when running romance and social scams. They model you. They analyze your response speed, your hesitation patterns, your language [music] tone, your risk tolerance, your political framing, and of course, even your emotional volatility. [music] They run adaptive persuasion loops where if you hesitate, the [music] tone softens. If you respond too quickly, urgency increases. If you're analytical, more data appears. If you're emotional, more narrative appears. It's dynamic psychological calibration to each single [music] person that the AI agent is modeling. And because it's automated, it happens [music] across thousands of simultaneous conversations with continuous optimization. The system learns which emotional arc to take you on and not on you alone, but on everyone. In the past, scammers relied on instinct, but now they don't have to [music] because you are being behaviorally tuned. Let's look at what these four shifts have in common. We have delegated authority, persistent memory, coordinated synthetic [music] identities, and real time emotional modeling. This is a completely new [music] shift. This isn't about better deep fakes, even though those are being vastly enhanced by AI agents, but rather it's about trust being redirected through [music] software. In 2026, you won't always be fooled by something that looks fake. You'll be fooled by something that will be optimized [music] and will take advantage of technology that you can trust and something that acts on your behalf. So the real question in 2026 isn't [music] can I tell what's fake. It's can I trust my own personal assistant what he's doing on my behalf. That's the shifts [music] and that's how AI agents will fool you. Automation plus access. Which leads us to the bigger question. So what's next? I think that in reality, one of the biggest shifts is that a ton of traffic online will be from AI agents. That's something that most companies and services [music] will just have to deal with. Identifying what is good automation, like an agent acting on someone's behalf, doing their [music] monotonous tasks for them, and what is bad automation, like fraudster attempting to defraud businesses, [music] is going to become the name of the game. AI agents can act autonomously for good or harm if left unverified. And that's exactly why we at Sumsup are introducing our know your agent aka KYA framework which helps verify that behind an automated action is a real verifiable identity. With AI agent verification, Subsup [music] is the first to bind AI agents to verified human identities at [music] scale. Rather than attempting to blindly trust AI agents or block them outright, our solution focuses on verifying the humans behind them [music] so that proper AI agents can get access to the digital services and the digital world. AI agents [music] are becoming embedded in everything. How we shop, how we bank, how we communicate, how businesses operate. Even McKenzie estimates that agent-driven [music] systems could mediate trillions of dollars in global commerce by 2030. New protocols are being built right now to let AI agents negotiate, transact, [music] and interact with each other directly. AI agent authentication and authorization are foundational [music] for deploying autonomous AI safely, responsibly, and at scale. We're moving towards a world where [music] your AI may talk to a company's AI and you might not even see the exchange. [music] And here's the tension. The capabilities are accelerating fast. The safeguards are improving, but they're still catching up. Even leading security researchers acknowledge that prompt injection and agent manipulation remain open [music] challenges. So for now, awareness matters because in the age of AI agents, the biggest risk isn't that the [music] technology exists. It's assuming that the old rules of trust still apply. So what can you actually do? Let's make this practical. Because after everything we just covered, it can start to feel overwhelming, but it shouldn't. The goal isn't to disconnect from technology. It's to upgrade how you relate to it. AI agents are not going away. They're becoming infrastructure. [music] So you should think of how to use them without outsourcing your judgment. Let's [music] break this down. The first thing is treat delegation like financial risk. If your AI can spend money, [music] log into accounts, execute transactions, negotiate on your behalf, then it [music] deserves the same risk boundaries as a financial advisor. That means limit permissions, separate high-risisk actions from low-risk tasks, [music] avoid giving single agent access to everything, and review transaction histories [music] regularly. Number two, verify before you escalate. In 2026, proof is degraded. Voice isn't proof, video isn't proof, even email threats can be fabricated. So instead of asking, does this look real? Start asking, has this been independently verified? [music] If a financial opportunity is introduced, a payment is requested, an urgent claim is made, slow the process [music] down, use out ofband verification, call back on a known number, confirm through a separate channel, involve a second human being. Speed [music] is the attacker's advantage. Deliberate friction is your advantage. And number three, reclaim [music] deliberate decision points. One of the biggest risks of AI agents is small decisions executed [music] quietly but at scale. Subscription renewals, policy updates, vendor changes, [music] default setting changes. So create deliberate checkpoints. For example, require manual approval for purchases [music] above a threshold. Delay large transfers by 24 hours. Review AI made decisions [music] weekly. Use multiffactor authentication everywhere. And last, but [music] certainly not least, disable auto execution for high impact categories. You don't need to micromanage everything that your AI system does, but you should know where autonomy stops [music] and you begin. So, that's it. If this breakdown helped you understand where things are heading, subscribe to Sums Up for Experts. We'll continue unpacking how AI, fraud, and digital identity are reshaping the landscape and what that means for all of us. I'm Nicholas Harnov, head of technical pre-sales at Sums Up. Thank you for watching and [music] I'll see you in the next one.

Need a transcript for another video?

Get free YouTube transcripts with timestamps, translation, and download options.

Transcript content is sourced from YouTube's auto-generated captions or AI transcription. All video content belongs to the original creators. Terms of Service · DMCA Contact

Recent Transcripts

Browse transcripts generated by our community

НАРУТО БЫЛ ЛОХОМ НО получил силу БОГОВ и САСКЕ ТЯН Все части Наруто Альтернативный Сюжет Татсу

НАРУТО БЫЛ ЛОХОМ НО получил силу БОГОВ и САСКЕ ТЯН Все части Наруто Альтернативный Сюжет Татсу

Tatsu What If32,576 words
Naruto Who Was Betrayed By The Clan, But He Will Take Revenge On Everyone | What If Naruto

Naruto Who Was Betrayed By The Clan, But He Will Take Revenge On Everyone | What If Naruto

Tatsu What If10,557 words
Naruto - Grandson Of Tobirama | What If Naruto | Female Kakashi

Naruto - Grandson Of Tobirama | What If Naruto | Female Kakashi

Tatsu What If17,215 words
What If Naruto Gets Into The Village Of Fog at the age of 5 | What If Naruto |The Movie Naruto Harem

What If Naruto Gets Into The Village Of Fog at the age of 5 | What If Naruto |The Movie Naruto Harem

Tatsu What If24,868 words
PN Junction Diode (No Applied Bias)

PN Junction Diode (No Applied Bias)

Neso Academy2,338 words
Intrinsic and Extrinsic Semiconductors

Intrinsic and Extrinsic Semiconductors

Neso Academy2,025 words
What If Naruto Is A Great Uzumaki With A Harem | What If Naruto | Naruto Harem Lemon

What If Naruto Is A Great Uzumaki With A Harem | What If Naruto | Naruto Harem Lemon

Tatsu What If13,435 words
গরমের তীব্রতা আরও বৃদ্ধি পেতে পারে: আবহাওয়া অফিসের পূর্বাভাস | High Temperature | Independent TV

গরমের তীব্রতা আরও বৃদ্ধি পেতে পারে: আবহাওয়া অফিসের পূর্বাভাস | High Temperature | Independent TV

Independent Television231 words
কয়েক জেলায় বইছে তাপপ্রবাহ; সর্বোচ্চ তাপমাত্রা চুয়াডাঙ্গায় | Hot Weather | Jamuna TV

কয়েক জেলায় বইছে তাপপ্রবাহ; সর্বোচ্চ তাপমাত্রা চুয়াডাঙ্গায় | Hot Weather | Jamuna TV

Jamuna TV8 words
এপ্রিলে তীব্র তাপপ্রবাহের আভাস, তাপমাত্রা উঠতে পারে ৪২ ডিগ্রি | Heatwave Alert | NTV News

এপ্রিলে তীব্র তাপপ্রবাহের আভাস, তাপমাত্রা উঠতে পারে ৪২ ডিগ্রি | Heatwave Alert | NTV News

NTV News112 words
GET UP AND GRIND - Motivational Speech

GET UP AND GRIND - Motivational Speech

Ben Lionel Scott66 words
5 Github Repos That Will 10x Your Next Claude Code Project

5 Github Repos That Will 10x Your Next Claude Code Project

Chase AI395 words
তাপমাত্রা বাড়ার আভাস | Weather | Temperature | Ekushey TV

তাপমাত্রা বাড়ার আভাস | Weather | Temperature | Ekushey TV

Ekushey Television - ETV209 words
This MVP Race Makes No Sense

This MVP Race Makes No Sense

JxmyHighroller2,928 words
New UX/UI Trends You Can't Miss! - Huge Illustrations, Apple Mascots, Change in Careers & More

New UX/UI Trends You Can't Miss! - Huge Illustrations, Apple Mascots, Change in Careers & More

Punit Chawla1,685 words
Claude Code + RAG-Anything = LIMITLESS

Claude Code + RAG-Anything = LIMITLESS

Chase AI3,949 words
What if Naruto Is An Abandoned Genius With A Harem | What If Naruto | Naruto Harem Lemon

What if Naruto Is An Abandoned Genius With A Harem | What If Naruto | Naruto Harem Lemon

Tatsu What If19,796 words
DISCIPLINE - Motivational Speech

DISCIPLINE - Motivational Speech

Ben Lionel Scott28 words
Notbremse: Trump schmeißt halbe Regierung & Duzende Generäle raus! Grund ist unfassbar!

Notbremse: Trump schmeißt halbe Regierung & Duzende Generäle raus! Grund ist unfassbar!

Vermietertagebuch - Alexander Raue1,448 words
Naruto and Kushina Are A Hot Couple | What If Naruto | Naruto Married With Kushina Lemon | The Movie

Naruto and Kushina Are A Hot Couple | What If Naruto | Naruto Married With Kushina Lemon | The Movie

Tatsu What If34,886 words