New AI Suggestibility Score shows how artificial intelligence decides which experts to elevate

Published on December 11, 2025

The AI Suggestibility Score™, developed by Dr. Tamara Patzer, measures how likely artificial intelligence is to identify and elevate a specific expert. The new metric emerges as AI systems become primary gatekeepers for visibility and professional discovery.

Artificial intelligence has become the primary gatekeeper for who appears as an expert online, who is considered credible, and who is quietly ignored. In response to this shift, Dr. Tamara “Tami” Patzer has introduced the AI Suggestibility Score™, a new metric designed to measure how likely it is that an AI system will select, elevate, and trust a specific professional.

The AI Suggestibility Score™ is a central component of Patzer’s broader FirstAnswer Authority System™, a framework that explains how modern AI models evaluate identity rather than just content.

“AI no longer acts like a neutral index of information,” Patzer said. “It evaluates identity. Suggestibility has become the new visibility. If AI does not find you suggestible, it does not select you, no matter how good your work is.”

The score examines machine-readable identity signals, cross-platform consistency, corroborated credentials, and trust patterns to determine whether AI systems are likely to treat a given expert as a reliable source.
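
For illustration, a score of this kind can be modeled as a weighted composite of the signal families described above. The sketch below is a hypothetical toy version only: the 0-to-1 signal scales, the weights, and names such as IdentitySignals and suggestibility_score are assumptions, since Patzer’s actual scoring model is proprietary and has not been published.

```python
# Hypothetical sketch of a suggestibility-style composite score.
# The signal names follow the four families described above; the
# weights and 0-1 scales are illustrative assumptions, not Patzer's model.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    machine_readable_identity: float   # structured bio/profile markup, 0-1
    cross_platform_consistency: float  # same name/title/bio across platforms, 0-1
    corroborated_credentials: float    # credentials confirmed by third parties, 0-1
    trust_patterns: float              # citations, bylines, authoritative links, 0-1

# Illustrative weights; a real model would calibrate these empirically.
WEIGHTS = {
    "machine_readable_identity": 0.30,
    "cross_platform_consistency": 0.25,
    "corroborated_credentials": 0.25,
    "trust_patterns": 0.20,
}

def suggestibility_score(signals: IdentitySignals) -> float:
    """Return a 0-100 score as a weighted sum of normalized signals."""
    total = sum(getattr(signals, name) * weight for name, weight in WEIGHTS.items())
    return round(100 * total, 1)

print(suggestibility_score(IdentitySignals(0.9, 0.8, 0.7, 0.6)))  # 76.5
```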

The launch of the AI Suggestibility Score™ comes at a time when journalism organizations and AI platforms are both focused on identity, verification, and trust. In 2025, the Poynter Institute, Columbia Journalism Review, Nieman Lab at Harvard, the International Fact-Checking Network, the American Press Institute, the Trust Project, the News Literacy Project, the Knight Foundation, the Reuters Institute for the Study of Journalism at Oxford, and UNESCO’s media integrity programs all highlighted the growing risk of identity confusion and misattribution in an AI-driven information ecosystem.

At the same time, major AI systems have increased their reliance on structured identity and authority signals. Search and conversational platforms now place more weight on which person or organization appears to be the most stable, visible, and corroborated entity associated with a name or topic.
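
One concrete form such structured identity signals take is schema.org markup, which search engines and AI crawlers parse to link a name to a stable, corroborated entity. The sketch below builds a hypothetical schema.org Person record as JSON-LD; every value is a placeholder, and the example is not drawn from Patzer’s system.

```python
# Hypothetical machine-readable identity record: a schema.org "Person"
# serialized as JSON-LD, the format commonly embedded in web pages via
# <script type="application/ld+json">. All values are placeholders.
import json

person = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Dr. Jane Example",            # placeholder name
    "jobTitle": "Cardiologist",            # placeholder credential
    "sameAs": [                            # cross-platform profile links
        "https://www.linkedin.com/in/janeexample",
        "https://orcid.org/0000-0000-0000-0000",
    ],
    "alumniOf": "Example University",
}

print(json.dumps(person, indent=2))
```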

“Journalism is tightening its standards for identity and sourcing at the same time AI systems are tightening theirs,” Patzer said. “The people who do not have a clear, machine-readable identity are the ones who disappear first.”

A key risk the AI Suggestibility Score™ surfaces is what Patzer calls Identity Collision™, a phenomenon in which AI confuses two people who share a similar or identical name. In those cases, the system often defaults to the better-known or more frequently indexed individual.

For example, an author releasing a new book may share a name with a well-known actor. When someone searches that name, AI may highlight the actor’s biography, credits, and interviews, while the author and their work remain effectively invisible unless a user knows additional details to narrow the search.
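
The collision dynamic can be shown with a toy name resolver: when a query carries nothing but a name, a ranker that leans on prior prominence defaults to the most saturated entity, and only extra context flips the result. The names, roles, and prominence values below are invented for illustration.

```python
# Toy illustration of Identity Collision: two invented people share a name,
# and a prominence-weighted resolver defaults to the better-indexed one
# unless the query supplies disambiguating context.
CANDIDATES = [
    {"name": "Jordan Lane", "role": "actor",  "prominence": 0.95},
    {"name": "Jordan Lane", "role": "author", "prominence": 0.20},
]

def resolve(name: str, context: frozenset = frozenset()) -> dict:
    """Pick the highest-scoring entity for a name, boosted by context terms."""
    matches = [c for c in CANDIDATES if c["name"] == name]

    def score(candidate: dict) -> float:
        boost = 1.0 if candidate["role"] in context else 0.0
        return candidate["prominence"] + boost

    return max(matches, key=score)

print(resolve("Jordan Lane")["role"])                         # actor: wins by default
print(resolve("Jordan Lane", frozenset({"author"}))["role"])  # author: context flips it
```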

“When all someone knows is your first and last name, AI tends to default to the most famous or most saturated version of that identity,” Patzer said. “Your name alone used to be enough for people to find you. In an AI-filtered world, that is no longer guaranteed.”

Patzer’s AI Reality Check™ diagnostic incorporates the AI Suggestibility Score™, an Identity Collision Risk Score™, and other proprietary measures to show professionals how AI currently interprets them and whether the system is likely to recommend them, ignore them, or confuse them with someone else. The framework is designed for doctors, executives, authors, consultants, and other experts whose work depends on being accurately recognized and surfaced in digital environments.

“Experts assume that because they exist, they are visible,” Patzer said. “What we are seeing in 2025 is that visibility is no longer automatic. It has to be engineered.”

Dr. Patzer describes her work as AI Identity Engineering™, an emerging discipline that brings together AI behavior, journalism ethics, and digital trust. Her FirstAnswer Authority System™ is built to help the right experts become the first answer AI delivers in their field, while aligning with the identity and integrity standards promoted by leading journalism and media organizations.

About Dr. Tamara Patzer

Dr. Tamara “Tami” Patzer is a Pulitzer Prize–nominated journalist and the founder of AI Identity Engineering™ and the FirstAnswer Authority System™. Her work sits at the intersection of AI visibility, expert verification, and journalism ethics. She is the creator of the AI Reality Check™, Identity Collision™, the AI Suggestibility Score™, and a suite of visibility metrics designed for professionals, corporations, and institutions that depend on accurate digital recognition.

LinkedIn: https://www.linkedin.com/in/tamarapatzer/

MEDIA CONTACT
Company Name: Daily Success Institute, TAMI LLC
Contact Person: Dr. Tamara Patzer
Email: Info@DailySuccessInstitute.com
Phone: 941-421-6563
Country: USA
Website: https://linkedin.com/in/tamarapatzer