
    AI disclosure erodes trust


    As generative artificial intelligence increasingly permeates the workplace, people are grappling with the ethical implications of disclosing its use to others. Conventional wisdom suggests that transparency should boost trust, yet our work reveals the opposite can occur in the context of AI: openly admitting to AI assistance may significantly erode trust in its user.


    In our research, Oliver Schilke and I conducted thirteen experiments examining trust across various professional scenarios, including educational settings, investment advice, job applications, creative tasks, and routine corporate communications. Consistently, we found that individuals or organizations who openly disclosed their use of generative AI in their work were trusted significantly less than those who kept their AI usage under wraps.


    We call this the “transparency dilemma”, which arises because disclosure triggers perceptions of diminished legitimacy. According to our data, legitimacy—the perception that an entity’s actions are appropriate and socially acceptable—is crucial for trust. When people disclose their AI usage, it suggests a reduction of human input and judgment, which in turn causes evaluators to question the legitimacy of their actions and work.


    In one of our experiments, students evaluated a professor who disclosed using AI to grade assignments. They expressed lower levels of trust than when the same professor either revealed that a human teaching assistant would grade the assignments or made no disclosure at all. Similar effects occurred in other settings. For example, an investment fund that disclosed (vs. did not disclose) its use of AI for preparing financial information received less investment. Similarly, a designer who disclosed the use of AI in creating a postcard design was less likely to be rehired for another design job. And a startup founder who disclosed that he had written his CV with the help of AI was trusted less by his employees. Even minimal AI involvement, such as AI-driven proofreading, led to reduced trust.


    Intriguingly, we observed that trust eroded even when evaluators knew beforehand that AI was employed. The act of disclosure itself—and not merely the knowledge of AI’s use—prompted distrust. Additionally, both voluntary disclosures and mandatory disclosures imposed by external regulations resulted in lowered trust, which we think underscores the broad applicability of the transparency dilemma. Our work further revealed nuances regarding individual attitudes toward technology. Those with more positive attitudes toward technological advancement and who perceived AI as highly accurate showed less distrust toward disclosed AI usage, though the AI-disclosure effect was not eliminated by these factors. This suggests a deeply entrenched societal preference for human agency in the workplace.


    Yet we found one condition under which AI usage posed an even greater threat to trust: exposure of undisclosed AI usage by a third party. Being caught using AI covertly damaged trust even more than disclosing it, suggesting that transparency may still be the lesser of two evils.


    Our findings challenge individual and organizational users of AI to carefully consider their transparency strategies. While openness about AI use may be ethically commendable, it also demands careful handling to mitigate trust erosion.


    Reference:

    Schilke, Oliver and Martin Reimann (2025), “The transparency dilemma: How AI disclosure erodes trust,” Organizational Behavior and Human Decision Processes, in press.
