Has AI Changed Your Proposal’s Audience?

The first rule of writing? “Write to your audience!” For scientific and medical grant proposals, that audience comprises our human scientist and stakeholder peers. Or does it?

When discussing grant strategy, I always start with the basic concept of knowing and understanding your audience, the peer reviewer, a human. Peer review has a long tradition in academia and is generally considered to be an important service to the scientific community.


For this reason, it never occurred to me that reviewers would use generative AI to produce their review critiques. Moreover, funding agencies clearly state their guidelines and expectations for the confidentiality of the review process. Funders forbid discussion of applications outside the sanctioned, official review meeting held in the presence of officials. They also require that all application materials shared with the reviewers or produced by the reviewers in the service of the process (ie, notes), in both print and digital form, be destroyed immediately after the meeting; usually a bin is provided so that reviewers can securely discard all copies of the materials and their notes as they leave the meeting room. This attention to confidentiality protects proprietary information and data; without this guarantee, few scientists would submit research proposals.

Conversely, nothing about generative AI apps is confidential. They are trained on data and continue to learn, on an ongoing basis, from the data provided by users. Users draw on this shared pool of data when prompting the app, so submitting all or even part of a research application as a prompt shares that information (words, data, everything) with the generative AI app and all of its users. It’s a breach of confidentiality. This also applies to locally hosted generative AI technology (eg, apps trained on a company’s data that can be accessed only by employees), because the information could still be accessed by other individuals using that technology.

Without the guarantee of confidentiality, the integrity of the peer review process falls apart. Yet robust generative AI models depend on exactly the opposite: a complete lack of confidentiality.

J.K. Byram

Use of generative AI to write a review critique also breaches trust, calling into question the integrity of the reviewer. Peer reviewers are invited by funders to serve in this role based on their experience and expertise. It is that combination of experience and expertise that gives a reviewer the ability to be forward looking, to understand not just what is feasible but also what is innovative and novel. Generative AI does not have that ability; it works by learning from data that describe what has already happened, what is in the past. It uses statistical probability, word by word, to generate text. It is not sentient; it cannot thoughtfully “speak” to emerging thoughts, ideas, and technologies, and its conjectures are hallucinations, not the considered reflections of an experienced professional.

Write for Humans

While there may be some bad apple humans who will cut corners and use generative AI to produce review critiques, they are likely to be few and far between. When writing your grant proposal, it remains best practice to write for an audience of human peers, recruited to serve based on their experience, expertise, and ability to identify what is feasible, novel, and innovative.

Learn More

NIH prohibits the use of generative AI to write review critiques, and most funders in biomedical research follow the NIH’s lead on issues of policy. For more information, see the recent blog post, guide notice, and full FAQ set from NIH. Always consult the policies of the funder to whom you plan to apply.
