Humanly undertook a research and development project focused on social media and personal reputation protection, working with a public figure, who remains anonymous here, to address the growing risks posed by manipulated and AI-generated digital content.
Public figures, high-profile individuals and their teams increasingly face threats that originate on social platforms and informal digital channels. Images, screenshots and documents can be rapidly shared, altered or fabricated to support false narratives, impersonation attempts or reputational attacks. In many cases, content spreads faster than it can be verified, creating immediate personal, legal and professional risk.
The objective of the project was to explore how authenticity detection could be applied to the unstructured, high-risk content typically encountered on social media and messaging platforms. Unlike regulated workflows such as insurance or healthcare, these environments lack consistent controls, yet the consequences of relying on false content can be severe.
The project focused on developing a personal protection capability built on Humanly’s General Authenticity Protection Model. This model was selected because the content under review did not fit a single industry category and often involved highly contextual, emotionally charged or time-sensitive scenarios.
Examples of risk scenarios included manipulated images presented as evidence of behaviour or events, fabricated screenshots used to support harassment or coercion, and AI-generated content designed to impersonate the individual or misrepresent their actions. In each case, decisions needed to be made quickly about whether content should be escalated, challenged, ignored or referred for legal or platform intervention.
Humanly’s role was to assess whether submitted content showed indicators of editing, manipulation or synthetic generation. This allowed the individual and their advisors to make more informed decisions before responding publicly or taking further action. Importantly, the system did not assess intent, truthfulness or narrative context. It focused solely on the integrity of the digital content itself.
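The specifics of Humanly’s detection models are not described in this case study, but one widely used open technique for surfacing editing indicators in images is error level analysis (ELA). The sketch below, a minimal illustration assuming a JPEG input and the Pillow library, shows the general idea: regions modified after an image’s last save tend to recompress differently and appear brighter in a difference image. The function name, quality setting and output handling are illustrative assumptions, not part of Humanly’s system.

```python
import io
from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> tuple[Image.Image, int]:
    """Return an ELA difference image and its peak channel difference.

    Edited regions of a JPEG often recompress differently from the rest
    of the image, appearing brighter in the difference image. A high
    peak difference is an indicator (not proof) of local editing.
    """
    original = Image.open(path).convert("RGB")

    # Re-save at a known quality, then reload the compressed copy.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise absolute difference between original and re-saved copy.
    diff = ImageChops.difference(original, resaved)

    # Per-channel (min, max) extrema; the overall maximum hints at how
    # unevenly the image compresses across regions.
    extrema = diff.getextrema()
    peak = max(channel_max for (_, channel_max) in extrema)
    return diff, peak


if __name__ == "__main__":
    diff_image, peak = error_level_analysis("suspect_image.jpg")  # hypothetical file
    print(f"Peak ELA difference: {peak}")  # a signal for review, not a verdict
    diff_image.save("ela_output.png")
```

A single signal like this is weak in isolation; the relevant point is that each check examines the digital artefact itself rather than the claims made about it.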
The project highlighted how personal protection requirements differ from enterprise fraud prevention. Risk tolerance is lower, timelines are shorter and the impact of false positives or delayed decisions can be significant. As a result, the General Model was designed to prioritise clarity, transparency and confidence signals rather than automated outcomes.
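As an illustration of that design choice, a system that reports confidence signals rather than verdicts might return something like the hypothetical structure sketched below: named indicators, each with its own confidence and a plain-language note, and no automated accept/reject decision. The field and class names are assumptions made for this sketch, not Humanly’s actual API.

```python
from dataclasses import dataclass, field


@dataclass
class AuthenticitySignal:
    """One indicator surfaced by an authenticity check."""
    name: str          # e.g. "compression_inconsistency"
    confidence: float  # 0.0 to 1.0: how strongly the indicator fired
    note: str          # plain-language explanation for a non-expert reader


@dataclass
class AuthenticityReport:
    """Transparent output: signals and context, no automated verdict.

    Deciding whether to escalate, challenge, ignore or refer content
    is deliberately left to the individual and their advisors.
    """
    content_id: str
    signals: list[AuthenticitySignal] = field(default_factory=list)

    @property
    def strongest_signal(self) -> AuthenticitySignal | None:
        return max(self.signals, key=lambda s: s.confidence, default=None)


# Example: a report a reviewer might see for a suspect screenshot.
report = AuthenticityReport(
    content_id="screenshot-042",
    signals=[
        AuthenticitySignal("compression_inconsistency", 0.82,
                           "Part of the image compresses differently from the rest."),
        AuthenticitySignal("font_rendering_mismatch", 0.64,
                           "Text rendering differs from the claimed app's interface."),
    ],
)
top = report.strongest_signal
if top is not None:
    print(f"Strongest indicator: {top.name} ({top.confidence:.0%})")
```

Surfacing per-indicator confidence keeps the human reviewer in the loop, which fits the lower risk tolerance and shorter timelines described above.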
This R&D work demonstrated how authenticity assessment can support personal safeguarding in an environment where trust is increasingly fragile. While the project focused on an individual public figure, the findings apply to a far wider audience, including executives, journalists, activists, content creators and organisations managing brand or reputational risk.
As social media platforms continue to evolve and AI-generated content becomes more convincing, the ability to verify what is real before reacting will become an essential component of personal and organisational digital safety. Humanly’s General Model provides a flexible foundation for addressing these emerging risks.
Challenges
Rapid spread of manipulated content across social platforms
High personal and reputational impact from false digital evidence
Lack of structured verification controls in social media environments
Increasing use of AI-generated images for impersonation
Time-sensitive decisions required before public response
Solutions
Application of Humanly’s General Authenticity Protection Model to unstructured, high-risk content
Assessment of images, screenshots and documents for indicators of editing, manipulation or synthetic generation
Clear confidence signals that support informed human decisions rather than automated outcomes
Faster, better-informed choices about whether to escalate, challenge, ignore or refer content for legal or platform intervention