Artificial intelligence has become the primary gatekeeper for who appears as an expert online, who is considered credible, and who is quietly ignored. In response to this shift, Dr. Tamara "Tami" Patzer has introduced the AI Suggestibility Score™, a new metric designed to measure how likely it is that an AI system will select, elevate, and trust a specific professional.
The AI Suggestibility Score™ is a central component of Patzer's broader FirstAnswer Authority System™, a framework that explains how modern AI models evaluate identity rather than just content.
"AI no longer acts like a neutral index of information," Patzer said. "It evaluates identity. Suggestibility has become the new visibility. If AI does not find you suggestible, it does not select you, no matter how good your work is."
The score examines machine-readable identity signals, cross-platform consistency, corroborated credentials, and trust patterns to determine whether AI systems are likely to treat a given expert as a reliable source.
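The article does not disclose how these signal families are weighted or combined; Patzer's methodology is proprietary. Purely as a hypothetical illustration of the idea of rolling such signals into a single score (the field names, weights, and formula below are assumptions, not Patzer's actual method), a composite could be sketched as:

```python
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    """Hypothetical 0-1 scores for the four signal families named above."""
    machine_readable_identity: float   # e.g., structured person markup present
    cross_platform_consistency: float  # same name/bio/photo across profiles
    corroborated_credentials: float    # credentials confirmed by third parties
    trust_patterns: float              # links and mentions from trusted sources

def suggestibility_sketch(s: IdentitySignals) -> float:
    """Toy weighted average; any real scoring system would be far more involved."""
    weights = {
        "machine_readable_identity": 0.30,
        "cross_platform_consistency": 0.25,
        "corroborated_credentials": 0.25,
        "trust_patterns": 0.20,
    }
    return round(sum(getattr(s, name) * w for name, w in weights.items()), 3)

print(suggestibility_sketch(IdentitySignals(0.9, 0.8, 0.7, 0.6)))  # prints 0.765
```

The sketch only shows the shape of the concept: several independent identity signals reduced to one number an AI system could compare across candidates.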
The launch of the AI Suggestibility Score™ comes at a time when journalism organizations and AI platforms are both focused on identity, verification, and trust. In 2025, the Poynter Institute, Columbia Journalism Review, Nieman Lab at Harvard, the International Fact-Checking Network, the American Press Institute, the Trust Project, the News Literacy Project, the Knight Foundation, the Reuters Institute for the Study of Journalism at Oxford, and UNESCO's media integrity programs all highlighted the growing risk of identity confusion and misattribution in an AI-driven information ecosystem.
At the same time, major AI systems have increased their reliance on structured identity and authority signals. Search and conversational platforms now place more weight on which person or organization appears to be the most stable, visible, and corroborated entity associated with a name or topic.
"Journalism is tightening its standards for identity and sourcing at the same time AI systems are tightening theirs," Patzer said. "The people who do not have a clear, machine-readable identity are the ones who disappear first."
A key risk the AI Suggestibility Score™ surfaces is what Patzer calls Identity Collision™, a phenomenon in which AI confuses two people who share a similar or identical name. In those cases, the system often defaults to the better-known or more frequently indexed individual.
For example, an author releasing a new book may share a name with a well-known actor. When someone searches that name, AI may highlight the actor's biography, credits, and interviews, while the author and their work remain effectively invisible unless a user knows additional details to narrow the search.
"When all someone knows is your first and last name, AI tends to default to the most famous or most saturated version of that identity," Patzer said. "Your name alone used to be enough for people to find you. In an AI-filtered world, that is no longer guaranteed."
Patzer's AI Reality Check™ diagnostic incorporates the AI Suggestibility Score™, an Identity Collision Risk Score™, and other proprietary measures to show professionals how AI currently interprets them and whether the system is likely to recommend them, ignore them, or confuse them with someone else. The framework is designed for doctors, executives, authors, consultants, and other experts whose work depends on being accurately recognized and surfaced in digital environments.
"Experts assume that because they exist, they are visible," Patzer said. "What we are seeing in 2025 is that visibility is no longer automatic. It has to be engineered."
Dr. Patzer describes her work as AI Identity Engineering™, an emerging discipline that brings together AI behavior, journalism ethics, and digital trust. Her FirstAnswer Authority System™ is built to help the right experts become the first answer AI delivers in their field, while aligning with the identity and integrity standards promoted by leading journalism and media organizations.
About Dr. Tamara Patzer
Dr. Tamara "Tami" Patzer is a Pulitzer Prize-nominated journalist and the founder of AI Identity Engineering™ and the FirstAnswer Authority System™. Her work sits at the intersection of AI visibility, expert verification, and journalism ethics. She is the creator of the AI Reality Check™, Identity Collision™, the AI Suggestibility Score™, and a suite of visibility metrics designed for professionals, corporations, and institutions that depend on accurate digital recognition.
LinkedIn: https://www.linkedin.com/in/tamarapatzer/

