Nearly everyone has heard a fraud story from someone they know. A friend’s credit card gets stolen. A relative falls victim to an online scam. Someone’s identity gets used to open accounts they never applied for. Chris D. Sham, COO at faceEsign, has spent over 15 years working in AI-driven security and digital identity, and he believes this problem is about to get significantly worse. His solution involves something criminals can’t easily fake: proving you’re actually a real person in real time.
Exploring Curiosity Through Technology
Sham didn’t follow the usual path into technology. “I’ve been interested in and loved technology since I was a kid. I’ve always broken things apart and put them back together, especially when it comes to electronics,” he says. That curiosity about how things work, and about the relationship between people and machines, eventually led him into automation and AI. He was exploring these technologies long before they became industry buzzwords. But there has always been another side to his fascination with technology. “I’ve always felt it’s part of my job. It goes back to my story of breaking things and taking them apart. When I look at something, I ask myself, okay, what can happen from this?” That mindset of constantly questioning what could go wrong has guided much of his work in security.
Recognizing the Scale of Digital Fraud
Sham makes a striking observation about how common fraud has become. “There isn’t a single person on the planet who doesn’t have a friend or close companion who has gone through some type of fraud,” he says. Think about that for a moment. If you can’t name someone personally affected, you probably just haven’t asked around enough. The digital age has made fraud exponentially easier to commit. “Fraud increased a thousand times when technology came into existence,” he points out. Before, creating fake documents or forged items required skill and time. “A specialized person had to do it. What I’m getting at is digital fraud. Anybody with a computer or a piece of tech in their hand can do it.”
Most people have heard the term “deepfake” without fully understanding it. Sham explains it simply. “Something that’s fake is a replica. A deepfake dives further. It can mimic and replicate something to the point that it seems real. And that’s what’s scary about the term deepfake.” This technology goes beyond clever imitation. When combined with stolen personal data from breaches, it enables types of fraud that barely existed a few years ago.

The tools designed to prevent fraud often rely on static information: background checks, credit reports, address verification. These are standard KYC (Know Your Customer) processes used by banks and financial institutions. But they have a critical flaw. “I have access to that information, don’t I? So that could pose a threat,” Sham explains. Family members, caregivers, or anyone with access to personal details could misuse that data. The information itself doesn’t confirm who is actually behind the transaction.
Three Keys for Building Secure Technology
Sham offers straightforward advice for anyone developing technology that handles user data:
- Collect Only What’s Necessary – Companies should focus on “not collecting data they don’t need, only the vital information.” (A brief sketch of what this can look like in practice follows this list.)
- Make It Easy To Use – “You could have the most advanced, earth-shattering, groundbreaking technology that someone could be working on and building. But if it’s cumbersome, if it’s difficult to understand and use, nobody’s going to want it and nobody’s going to buy it.” Brilliant technology that frustrates users won’t succeed.
- KYC Needs To Expand Beyond Banks – “KYC is becoming much more prevalent in the everyday digital space now. Not just banks or major fintechs.” E-commerce sites, subscription services, and any other platform that handles transactions need ways to verify that users are who they claim to be.
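To make the first point concrete, here is a minimal, hypothetical sketch of data minimization in a checkout flow: only the fields a verification step strictly needs leave the client, even when the surrounding form holds much more. The VerificationRequest shape and field names are invented for illustration and do not reflect faceEsign’s actual API.

```typescript
// Hypothetical illustration of data minimization in a verification request.
// The VerificationRequest shape and field names are invented for this example
// and do not reflect faceEsign's actual API.
interface VerificationRequest {
  transactionId: string;    // what the consent applies to
  fullName: string;         // the name being verified
  consentTimestamp: string; // when the user agreed (ISO 8601)
}

// Strip everything the verification step does not strictly need before it
// leaves the client, so extra profile data is never collected or stored.
function toMinimalRequest(
  formData: Record<string, string>,
  transactionId: string
): VerificationRequest {
  return {
    transactionId,
    fullName: formData["fullName"] ?? "",
    consentTimestamp: new Date().toISOString(),
  };
}

// Even if the checkout form also holds a phone number and address,
// only the three fields above are sent for identity verification.
const request = toMinimalRequest(
  { fullName: "Jane Doe", phone: "555-0100", address: "1 Main St" },
  "txn_12345"
);
console.log(request);
```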
Sham puts it simply: you need to “stay in reality.” When AI can fabricate nearly anything, requiring live human verification creates a checkpoint that’s much harder to fake.
Introducing Real-Time Human Verification
Through his work with faceEsign, Sham developed a new way to think about identity verification. The patented technology uses biometric verification but operates differently from standard facial recognition systems. “I’m doing a live video of you just as I am with you right now, where I can see your face on video. It’s simply a live video of you, and that’s it. There’s nothing else to it,” he explains. The process takes about 10 to 15 seconds. During a transaction, the system records a short live video of the person as they click through and provide consent. What makes it different is that it doesn’t scan or store facial data. “We’re not scanning the imprints of your biometrics,” Sham clarifies.
Privacy concerns have slowed the adoption of biometric technologies in many industries, so avoiding facial data storage removes one of the biggest barriers. The system also adds a layer of protection against AI-generated fakes. A deepfake is far harder to inject into a live, real-time camera feed, and if someone tries using a mask or a pre-recorded video, human auditors can detect it when reviewing the footage.
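Based on that description, here is a rough browser-side sketch of how a live consent-video step could sit inside a transaction: a short clip is recorded from a live camera feed while the user confirms, and the clip itself, rather than any extracted biometric template, is attached to the transaction for later human review. The function names, the 12-second duration, and the /api/consent-video endpoint are assumptions for illustration, not faceEsign’s actual implementation.

```typescript
// A conceptual sketch of a live consent-video step, based on the description
// above: record a short clip from a live camera feed while the user confirms
// a transaction, and attach the clip itself rather than any extracted
// biometric template. Function names, the 12-second duration, and the
// endpoint are assumptions; this is not faceEsign's actual implementation.

async function recordConsentVideo(durationMs = 12_000): Promise<Blob> {
  // Request a live camera feed; a pre-recorded file cannot satisfy this call.
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const recorder = new MediaRecorder(stream);
  const chunks: Blob[] = [];

  recorder.ondataavailable = (event) => chunks.push(event.data);
  recorder.start();

  // Record for roughly the 10 to 15 seconds described in the article.
  await new Promise((resolve) => setTimeout(resolve, durationMs));
  recorder.stop();
  await new Promise((resolve) => (recorder.onstop = resolve));

  // Release the camera once the clip is captured.
  stream.getTracks().forEach((track) => track.stop());
  return new Blob(chunks, { type: "video/webm" });
}

// Hypothetical submission: the raw clip is tied to the transaction for later
// human review; no facial template is computed or stored on the client.
async function submitConsent(transactionId: string): Promise<void> {
  const clip = await recordConsentVideo();
  const form = new FormData();
  form.append("transactionId", transactionId);
  form.append("consentVideo", clip, "consent.webm");
  await fetch("/api/consent-video", { method: "POST", body: form }); // assumed endpoint
}
```

The design choice the sketch mirrors is that nothing biometric is computed or retained on the client: only the live recording exists, which is what sidesteps the privacy concerns described above.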
Connect with Chris D. Sham on LinkedIn to follow his insights on digital security innovation.