
    How AI’s “Perfect” Emails Can Backfire With Your Team

By University of Florida | August 18, 2025
Polished emails may look great, but when managers lean too heavily on AI, employees often see the messages as fake. The study shows that trust takes a hit, even if the writing looks professional. Credit: Shutterstock

    AI tools like ChatGPT and Gemini are helping professionals write smoother, more polished emails—but new research reveals a hidden cost.

    A large survey of over 1,100 workers found that while AI assistance makes managers’ messages look more professional, it also erodes trust when overused. Employees are generally fine with light editing help, but once AI starts shaping the tone or entire message, supervisors come across as insincere, lazy, or less competent.

    AI in the Workplace: From Novelty to Norm

    More than 75% of professionals now rely on AI in their daily work, often using tools such as ChatGPT, Gemini, Copilot, or Claude to draft and polish emails. These tools are widely praised for making writing easier, but do they actually help when it comes to communication between managers and employees?

    A new survey of 1,100 professionals highlights an important contradiction in workplace communication. While AI can make a manager’s email appear polished and professional, frequent reliance on it can reduce employees’ trust in their supervisors.

“We see a tension between perceptions of message quality and perceptions of the sender,” said Anthony Coman, Ph.D., a researcher at the University of Florida’s Warrington College of Business and study co-author. “Despite positive impressions of professionalism in AI-assisted writing, managers who use AI for routine communication tasks put their trustworthiness at risk when using medium to high levels of AI assistance.”

    Perceptions of AI Use in Emails

    Published in the International Journal of Business Communication, the study by Coman and co-author Peter Cardon, Ph.D., of the University of Southern California, asked professionals to evaluate emails they were told had been written with varying levels of AI help (low, medium, and high). Participants assessed not only the content of congratulatory messages but also their perception of the person who sent them.

    The Perception Gap Between Managers and Employees

While AI-assisted writing was generally seen as efficient, effective, and professional, Coman and Cardon found a “perception gap” between messages written by managers and those written by employees.

“When people evaluate their own use of AI, they tend to rate their use similarly across low, medium, and high levels of assistance,” Coman explained. “However, when rating others’ use, magnitude becomes important. Overall, professionals view their own AI use leniently, yet they are more skeptical of the same levels of assistance when used by supervisors.”

    The Tipping Point for Negative Perceptions

While low levels of AI help, such as grammar checks or light editing, were generally acceptable, higher levels of assistance triggered negative perceptions. The perception gap widens when employees detect heavy AI involvement in a message, calling into question their manager’s authorship, integrity, caring, and competence.

The impact on trust was substantial: only 40% to 52% of employees viewed supervisors as sincere when they used high levels of AI, compared to 83% for low-assistance messages. Similarly, while 95% found low-AI supervisor messages professional, this dropped to 69% to 73% when supervisors relied heavily on AI tools.

    When AI Feels Like Laziness

The findings reveal that employees can often detect AI-generated content and interpret its use as laziness or a lack of caring. When supervisors rely heavily on AI for messages like team congratulations or motivational communications, employees perceive them as less sincere and question their leadership abilities.

    “In some cases, AI-assisted writing can undermine perceptions of traits linked to a supervisor’s trustworthiness,” Coman noted, specifically citing impacts on perceived ability and integrity, both key components of cognitive-based trust.

    Choosing the Right Messages for AI Assistance

The study suggests managers should carefully consider message type, level of AI assistance, and relational context before using AI in their writing. While AI may be appropriate and professionally received for informational or routine communications, such as meeting reminders or factual announcements, relationship-oriented messages requiring empathy, praise, congratulations, motivation, or personal feedback are better handled with minimal technological intervention.

    Reference: “Professionalism and Trustworthiness in AI-Assisted Workplace Writing: The Benefits and Drawbacks of Writing With AI” by Peter W. Cardon and Anthony W. Coman, 2025, International Journal of Business Communication.
    DOI: 10.1177/23294884251350599
