Did AI Cost a Medical Student His Dream Residency?
In October 2026, Chad Markey, a medical student from Dartmouth, found himself in a tough spot. Despite excellent grades, strong recommendations, and a compelling personal story, he received no interview invitations from residency programs, only rejections, an outcome that seemed inconsistent with his qualifications. Markey's story highlights how AI screening tools might be unfairly affecting job and residency applications today.
The Mystery Behind the Rejections
Markey spent months analyzing his application for a reason behind the rejections. His academic record and achievements were impressive, including publications in major medical journals. Still, he noticed something strange in his Medical Student Performance Evaluation (MSPE). The document stated he had taken three leaves of absence totaling 22 months and had extended his third year for personal reasons. These details weren't false; they reflected his diagnosis with ankylosing spondylitis, a serious autoimmune disease that affected his mobility and required time off for treatment.
He suspected that the language used in his MSPE might have triggered an AI screening tool used by some hospitals. Such tools are designed to filter applications automatically, but they can sometimes misinterpret nuanced or sensitive information. Markey believed that describing his absences as “voluntary” could have been seen negatively by an algorithm, unfairly lowering his chances of getting interviews. This suspicion led him to question whether AI might be responsible for his lack of opportunities.
The Rise of AI in Medical and Job Applications
AI screening tools are increasingly common in hiring processes, especially in competitive fields like medicine. Some hospitals use free AI tools to weed out applicants, but these systems are not perfect. They can misjudge personal circumstances or context, leading to unfair rejections. Markey’s case is just one example of how an AI might misinterpret honest explanations, damaging otherwise strong applications.
Many job seekers and professionals have expressed concerns about AI-driven screening. Some admit to trying to game the system by stuffing resumes with keywords. Others feel that their worth is reduced to how well they can optimize their applications for algorithms. Only a few states have laws regulating these tools, and even then, they often only require transparency, not fairness or accuracy. Applicants rarely have ways to know if an AI has discriminated against them or misread their stories.
Markey’s six-month effort to understand how the AI system might have judged him involved countless emails, research papers, legal requests, and Python coding. He aimed to peer inside the black box of AI decision-making, hoping to find a flaw or bias. His obsession reflected a broader frustration shared by many who believe AI tools can sometimes do more harm than good in the hiring process.
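The article does not describe Markey's actual code, but one common way to probe an opaque screener from the outside is counterfactual testing: score near-identical versions of an application that differ only in how a sensitive detail is worded, and see whether the wording alone moves the score. The sketch below is purely illustrative; `toy_screener_score` is a hypothetical stand-in for a real black-box model, and the phrases and penalties are invented for the example.

```python
# Hypothetical sketch of counterfactual probing of a black-box screener.
# toy_screener_score is a stand-in: a real probe would call the actual
# (opaque) screening system; here a toy keyword penalty illustrates the idea.

def toy_screener_score(text: str) -> float:
    """Stand-in scorer that penalizes certain flagged phrases (invented)."""
    score = 1.0
    for phrase, penalty in [("leave of absence", 0.3), ("voluntary", 0.2)]:
        if phrase in text.lower():
            score -= penalty
    return max(score, 0.0)

def counterfactual_gaps(base: str, variants: list[str], score=toy_screener_score) -> dict:
    """Score variants that differ from `base` only in how a leave is
    described. Large score gaps between variants suggest the wording
    itself, not the underlying substance, drives the outcome."""
    base_score = score(base)
    return {v: round(score(v) - base_score, 3) for v in variants}

base = "Completed clerkships with honors."
variants = [
    "Completed clerkships with honors. Took a voluntary leave of absence.",
    "Completed clerkships with honors. Paused training for medical treatment.",
]
gaps = counterfactual_gaps(base, variants)
```

In this toy setup, the two variants describe the same underlying fact, yet the "voluntary leave of absence" phrasing is scored lower, which is exactly the kind of wording sensitivity Markey suspected in his MSPE.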
What Markey’s story reveals is a growing challenge as AI becomes a gatekeeper in important life opportunities. While these tools promise efficiency and objectivity, they also carry risks of bias and misjudgment. His experience underscores the need for better regulation, transparency, and understanding of AI’s role in critical decisions like medical residencies and jobs. As AI tools become more widespread, individuals, institutions, and lawmakers will need to work together to ensure fairness and prevent deserving candidates from being unfairly filtered out.