For more than fifteen years, we have actively championed the use of artificial intelligence in business. Used well, AI has been genuinely transformative. It has improved efficiency, reduced friction and enabled organisations to do things that were previously impossible at scale.
But something has shifted.
AI is no longer just a tool for productivity. It is increasingly being weaponised, not through science fiction scenarios, but through everyday misuse that quietly erodes trust, creativity and decision making across society and industry.
The danger is not intelligence itself. The danger is unquestioned convenience.
When speed replaces judgement
We are rapidly normalising the acceptance of AI output without scrutiny. In creative industries, experienced professionals are being displaced not because their work lacks quality, but because AI can produce something faster. Volume is winning over substance.
This same dynamic is now appearing across far more serious domains.
Retailers are paying out thousands of small claims supported by images that are never fully questioned. Insurers are reviewing claim evidence at speed, knowing that investigation often costs more than settlement. Banks and lenders are onboarding customers based on digital documents that may never have existed in the real world. Healthcare systems are beginning to see manipulated records and images used to obtain access to high demand medication.
In each case, the problem is not malicious intent by default. It is the assumption that digital evidence is genuine, simply because it looks convincing.
AI has changed that assumption.
The personal moment that crystallised it for me
My own turning point came from somewhere far closer to home.
I watched my children use AI to complete their homework in five minutes flat. On the surface, it was clever. Efficient. Even impressive. They were quickly back to their games, task complete.
But something about it felt wrong.
I would rather they struggle, question, explore and occasionally fall short than bypass thinking altogether. The process of discovery, communication and effort is where capability is built. Convenience that removes those steps does not empower us; it numbs us.
That same pattern now exists at scale.
If we outsource thought, expression and verification entirely to AI, we do not just use it. We begin to align with it. Originality flattens. Curiosity diminishes. And the imperfect, human “wrong turns” that drive innovation quietly disappear.
From convenience to exploitation
While many people use AI harmlessly, others are already exploiting it deliberately.
We are seeing manipulated images used to support motor insurance claims. Reused and edited photographs appearing across property and domestic retrofit grant submissions. Synthetic documents being used in mortgage fraud, visa applications and account onboarding. AI generated healthcare evidence supporting claims and access requests that would not stand up to real world scrutiny.
In retail and logistics, small claims for cracked televisions or broken vases are often easier to refund than investigate. At scale, this creates a system where fraudulent behaviour is rewarded simply because the evidence looks plausible and customer expectations demand speed.
This is not theoretical risk. It is already happening, quietly, across sectors.
And it will scale faster than any human review process can keep pace.
The trust problem no one is talking about
The real issue is not fraud alone. It is trust.
Modern systems depend on digital evidence. Images, documents and records are now the basis for financial decisions, public funding, healthcare access and personal reputation. When that evidence can be created or altered without friction, trust collapses unless new safeguards are introduced.
Human review alone is no longer sufficient. The human eye was never designed to spot subtle synthetic artefacts, reused pixels or AI generated patterns at scale.
That does not mean humans should be removed from the process. It means they need better tools.
Why authenticity matters more than ever
This is where authenticity becomes a critical control, not a philosophical concept.
Being able to assess whether content is real, edited or AI generated allows organisations to apply proportional judgement. Not every claim needs investigation. Not every submission is fraudulent. But knowing which ones carry higher risk changes everything.
It protects honest customers, preserves service speed and prevents the silent accumulation of loss that eventually leads to stricter policies and worse outcomes for everyone.
It also protects something less tangible, but equally important. Confidence.
When trust in digital evidence disappears, every decision becomes slower, more expensive and more adversarial.
That is not a future any sector wants.
Building an authenticity filter for the AI era
This is why I built TruePixel, now Humanly.
Not to oppose AI, but to protect people and organisations from its misuse. To provide an authenticity filter that helps distinguish between human created, edited and synthetic content before it is relied upon.
Across insurance, identity, healthcare, retail, property and personal protection, the goal is the same. Preserve trust in digital systems while allowing innovation to continue responsibly.
I call this approach retro humanity. Not rejecting technology, but ensuring it augments human judgement rather than replaces it. Preserving originality, emotion and accountability in a world increasingly shaped by automation.
The path forward
AI will continue to advance. That is inevitable. Fraud will scale alongside it. That is also inevitable.
What is not inevitable is accepting a world where authenticity no longer matters.
The organisations that thrive will be those that recognise this shift early. Those that invest in trust, evidence integrity and decision support rather than blind convenience. Those that understand that speed without confidence is not progress.
The weaponisation of AI is not coming. It is already here.
The question is whether we choose to see it, and whether we build the safeguards needed to protect what still matters.