Refuse AI notes

Many medical offices are integrating AI-mediated note taking. In a different world where LLMs prioritized accuracy and practices weren’t squeezed for every dime of profit, this wouldn’t necessarily be a bad thing. That’s not our world.

Most offices are experimenting with AI and will ask your permission. This protects them from liability. Head off problems by refusing to be a guinea pig.

Image: a headline from an AI journal documenting common issues with AI note-taking in medical settings.

Why we do it:

Our medical system is collapsing as we strip public funding, corporations squeeze practices for profit, and insurance companies take their pound from the middle. Protecting our own health is critical to our ability to resist, and to community health.

Medical note-taking mistakes aren’t new. I know someone diagnosed with diabetes during pregnancy based on weight, despite an entirely normal A1C. She spent her third trimester fighting the effects of bias to avoid a “high risk delivery” designation. Another friend has a specious cardiac diagnosis that pops up periodically. No one knows where it came from. Every time it seems to be gone, it comes back.

Unfortunately, AI note-taking is making this more prevalent. Errors are so common that articles exist to train medical staff to identify and correct the most frequent ones. Fabricated additions will be particularly catastrophic if the GOP manages to reverse ACA guarantees, as seems likely. Missing or mis-filed data can be dangerous, too.

AI has valid uses in medicine. Pattern recognition has been improving radiology since the 1980s. Unfortunately, Large Language Models (LLMs) were developed to meet one standard: the Turing test. This thought experiment posited that machine intelligence would be achieved when a person could not distinguish a transcript of a conversation with an AI from one with a real human. We expect humans to lie, make mistakes, and weave fantasy, so by default LLMs are not trained to avoid these behaviors. We also expect computers to be correct, so we tend to over-trust AI output, including LLM summaries and analysis.

Until we have protocols that prioritize accuracy, guarantee time for doctor review adjacent to the visit (not in “spare time” or at the end of the day), and provide a pathway for correcting mistakes, allowing your medical records to be compiled by tools built to act like the average human is risky.

Protect yourself from this grand corporate experiment by asking your medical professionals if they use AI note-taking aids. If they do, ask them not to. Tell them you understand they are overworked, but if they don’t have time to take notes during the visit, you are skeptical they will be given sufficient time to review the notes later.

Or simply say no. No is a complete sentence.
