Funding Agency AI Policies: What Researchers Need to Know

Integrating generative AI into grant proposal development introduces complex new layers of research funding governance. Because competitive applications serve as highly confidential accounts of unpublished ideas, preliminary data, and institutional strategy, processing this material through algorithmic tools requires a deliberate approach to data security. Recognizing this shift, major funding bodies are formalizing policies that place AI use firmly at the center of research integrity, accountability, disclosure, and peer-review trust.

To mitigate these emerging risks, the National Institutes of Health (NIH) has issued strict guidance on fairness and originality in AI-assisted research applications. Similarly, the National Science Foundation (NSF) explicitly warns reviewers against uploading proposal materials into non-approved generative AI tools, while UKRI has set firm expectations for transparency among both applicants and assessors.

European guidance is evolving along the same trajectory, though it demands careful reading by specific programme and role. For instance, the Netherlands Enterprise Agency (RVO) advises Horizon Europe and Eureka applicants to meticulously weigh privacy, patent, and consortium risks before utilizing generative AI. Concurrently, the European Research Council (ERC) has adopted a clear non-delegation principle for grant evaluation, mandating that reviewers cannot rely on AI to summarize proposals, assess scientific merit, or draft evaluations. Furthermore, the Wellcome Trust has instituted explicit declaration requirements, compelling applicants to declare generative AI use unless it is strictly limited to basic language support.

Taken together, these expanding policies define a practical operational boundary: AI may support limited administrative facets of the writing and revision process, but authorship, scientific judgment, source verification, and confidentiality obligations rest unequivocally with the applicant. By examining the evolving frameworks of the NIH, NSF, UKRI, the ERC, and Wellcome, Principal Investigators, research managers, and grant teams can proactively construct responsible, auditable, and fully compliant submission workflows.

Can Researchers Use AI in Grant Applications?

Principal Investigators can generally use AI for limited administrative and structural support, such as syntax editing, outlining, translation, and clarity diagnostics. However, the PI retains full responsibility for the proposal’s scientific accuracy, originality, disclosure compliance, and data confidentiality.

Policies differ by funder: 

  • NIH states that applications substantially developed by AI will not be considered original ideas of the applicant. 

  • NSF encourages proposers to indicate whether and how generative AI was used.

  • UKRI expects applicants to be transparent where they have used generative AI tools.

  • ERC prohibits reviewers from using AI to summarize proposals, assess scientific merit, or generate draft evaluations.

| AI Use Case | Lower-Risk AI Assistance | Higher-Risk AI Substitution |
|---|---|---|
| Language and readability | Checking grammar, sentence clarity, spelling, or readability after the researcher has drafted the content. | Rewriting large sections in a way that changes the argument, claims, emphasis, or scientific interpretation. |
| Proposal structure | Reviewing whether headings, transitions, and section order are clear and aligned with the funder’s instructions. | Generating the structure of the research argument without the researcher first defining the aims, gap, methods, and contribution. |
| Research ideas | Brainstorming possible ways to phrase an already-defined idea, with the researcher retaining full judgment. | Asking AI to develop specific aims, hypotheses, novelty claims, research gaps, or methodological choices. |
| Evidence and citations | Creating a checklist of claims that need verification, or helping identify where citations may be missing. | Generating citations, literature claims, or evidence summaries without manual verification against original sources. |
| Confidential proposal material | Using approved tools and documented workflows for non-sensitive revision support, where allowed by the funder and institution. | Uploading unpublished data, patentable ideas, confidential budgets, consortium plans, or reviewer materials into unapproved AI tools. |
| Peer review | Limited language support may be permitted in some reviewer contexts only when no proposal content or personal data is shared. | Using AI to summarize a proposal, assess scientific merit, generate review comments, or replace expert judgment. |

AI Assistance vs AI Substitution

For funding compliance, the central distinction is between AI assistance and AI substitution. Assistance supports researcher-led work. Substitution shifts authorship, reasoning, or evaluative judgment away from the applicant team.

AI assistance refers to bounded support for work the research team has already led. This may include grammar correction, readability review, translation, formatting, checklist creation, or structural feedback on a researcher-authored draft. In these cases, AI can help identify unclear wording, missing transitions, or organizational problems while leaving the proposal’s intellectual basis with the applicant team.

AI substitution occurs when the tool begins to generate the proposal’s intellectual substance. This includes asking AI to develop specific aims, formulate hypotheses, define novelty claims, design methodological frameworks, generate impact narratives, or make evaluation judgments. At that point, AI has moved from writing support into authorship and judgment.

This distinction matters because funding agencies award grants to researchers and research teams for their own ideas, methods, expertise, and judgment. AI can support the presentation of those ideas, but it should not become the source of the proposal’s central research claim. NIH makes this boundary especially visible by stating that applications substantially developed by AI, or sections substantially developed by AI, will not be considered original ideas of the applicant.

Applicant Rules Are Different From Reviewer Rules

Researchers also need to distinguish between applicant rules and reviewer rules because funders treat these roles differently.

Applicants often operate under conditional permission to use AI. Depending on the funder, AI may be used for limited support tasks, provided the applicant protects confidential information, verifies all outputs, follows disclosure requirements or expectations, and remains accountable for the final proposal. UKRI, for example, allows applicant use with caution but expects transparency where generative AI tools have been used, and warns applicants not to enter sensitive or personal data without formal consent.

Reviewers usually face stricter restrictions because they handle confidential proposal material that belongs to other researchers. NIH prohibits reviewers from using generative AI technologies to analyze applications or formulate peer-review critiques, and states that uploading grant application content or original concepts to online generative AI tools violates NIH confidentiality and integrity requirements. NSF similarly states that reviewers are prohibited from uploading proposal content, review information, or related records to non-approved generative AI tools.

ERC’s guidance makes the rationale especially clear: reviewers remain responsible for assessing proposals and writing reviews, and AI tools may not be used to summarize proposals, assess scientific merit, or generate draft evaluations. This is the non-delegation principle in practice. Applicants may be asking whether AI can help them prepare a stronger proposal, but reviewers are being told that proposal evaluation must remain a human expert task.

The Core Concern: Confidentiality, Originality, and Peer Review

Before parsing individual agency mandates, Principal Investigators should understand the systemic risks funders are actively mitigating: preserving research integrity, enforcing institutional accountability over intellectual property, and defending the sanctity of peer review against scalable algorithmic evaluation. 

Across NIH, UKRI, ERC, Horizon Europe-adjacent guidance, and other funder statements, three concerns appear repeatedly:

| Policy Concern | Why It Matters in Grant Applications |
|---|---|
| Confidentiality | Proposals often contain unpublished data, sensitive institutional details, budgets, consortium plans, and commercially relevant ideas. |
| Originality | Funders award money for investigator-led research ideas, not proposals whose intellectual substance has been generated by a tool. |
| Peer Review Integrity | Reviewers are selected for their expertise and judgment, so proposal evaluation cannot be delegated to AI. |

Confidentiality and Unpublished Research Content

Grant proposals are highly sensitive documents. When you submit an application, you are often sharing unpublished preliminary data, project aims, methods, institutional details, personnel information, budget assumptions, consortium strategies, and commercially relevant ideas. In some fields, especially biomedical, engineering, climate, and applied technology research, parts of a proposal may also relate to future intellectual property.

This is why uploading proposal material into a public or unapproved generative AI tool can create confidentiality and IP risks. You need to know what happens to the data you enter, who can access it, whether it can be retained, whether it can be used for training, and whether the tool’s data policies align with your institution’s requirements.

RVO’s Horizon Europe and Eureka guidance on AI in grant proposal writing makes this point directly. It advises applicants to ask which AI model sits behind a tool, what happens to the data entered and generated, who can use that data, and whether consortium partners or grant consultants are using AI without the applicant’s knowledge. The same guidance also warns that using generative AI on an idea you may later want to patent can create patent risks, because entering that information into an AI system may be treated as disclosure.

For researchers and research managers, AI use should be preceded by a data classification decision. Before entering proposal material into any tool, identify whether the content is confidential, unpublished, personal, commercially sensitive, consortium-related, or potentially patentable. Material in those categories should only be handled through approved systems and documented workflows.
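The data classification step described above can be sketched as a simple screening check run before any proposal material goes into an AI tool. The category names and helper function below are illustrative only, not drawn from any funder policy:

```python
# Illustrative sketch of a pre-AI-use data classification check.
# Category names and logic are hypothetical examples, not funder requirements.

RESTRICTED_CATEGORIES = {
    "confidential",
    "unpublished",
    "personal",
    "commercially_sensitive",
    "consortium_related",
    "potentially_patentable",
}

def may_use_general_ai_tool(content_categories):
    """Return True only if no material falls into a restricted category.

    Material in any restricted category should be handled only through
    approved systems and documented workflows.
    """
    return not (set(content_categories) & RESTRICTED_CATEGORIES)

# A draft aims section containing unpublished preliminary data fails the check
print(may_use_general_ai_tool({"unpublished", "confidential"}))  # False
# A summary of public funder guidance passes
print(may_use_general_ai_tool({"public_guidance_summary"}))      # True
```

In practice the classification decision itself requires human judgment; the point of the sketch is that the check happens before, not after, material is entered into a tool.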

Originality and Intellectual Ownership

Funding agencies award grants for investigator-led ideas. AI may support the expression, organization, or review of those ideas, but it should not generate the intellectual basis of the proposal.

This distinction matters most in sections where your authorship and judgment are central: the research gap, specific aims, hypotheses, study design, methodological rationale, innovation claim, expected contribution, and interpretation of prior work. These elements form the substance of the funding request.

NIH’s guidance on fairness and originality in research applications gives the clearest version of this boundary. NIH states that applications substantially developed by AI, or sections substantially developed by AI, will not be considered the original ideas of the applicant. The same notice warns that AI use may create risks such as plagiarism, fabricated citations, and other forms of research misconduct.

For researchers, the central question is whether you can claim intellectual ownership over the proposal’s ideas, claims, and logic. If AI helps you polish a paragraph you already wrote, that is different from asking it to define the research gap or generate your specific aims. The first is writing support. The second risks replacing researcher judgment.

Peer Review and Non-Delegation

Peer review is built on expert judgment. When you are invited to review a proposal, you are being asked to bring your disciplinary knowledge, methodological expertise, and ability to assess significance, feasibility, and originality. Those tasks cannot be outsourced to a generative AI system.

The ERC’s guidance on AI use in grant proposal evaluation gives the cleanest language for this principle: non-delegation. ERC states that reviewers may not use AI to summarize proposals, assess scientific merit, or generate draft evaluations. Reviewers may remain responsible for the final submitted review, but if AI has produced the summary, assessment, or evaluative reasoning, the core review task has already been delegated.

This principle also explains why reviewer rules are usually stricter than applicant rules. Applicants are working with their own proposal material, although they still need to protect confidentiality and originality. Reviewers are working with someone else’s unpublished ideas. Uploading that material into an external AI tool can breach confidentiality, and using AI to assess it can undermine the purpose of expert review.

The concern for funders is a gradual displacement of expert judgment. If applicants rely on AI-generated proposal text and reviewers rely on AI-generated summaries or evaluations, the funding process becomes less anchored in accountable human expertise. The emerging policy response is therefore not only about writing. It is about preserving the conditions that make competitive peer review legitimate: confidentiality, originality, expertise, and accountable judgment.

Major Funding Agency AI Policies Compared

AI policy for grant applications varies by funder, role, and funding programme. The practical risk for career researchers is assuming that one agency’s position applies across all submissions. Before using AI in a proposal, check the current guidance for the specific funder, call, institution, and role. The table below summarises the main policy positions researchers should understand before submission.

| Funder | Applicant AI Use | Disclosure Position | Reviewer AI Use | Main Risk |
|---|---|---|---|---|
| NIH | Limited assistance may be acceptable. Substantial AI development affects originality. | Not framed as a broad disclosure rule in the AI notice. | Prohibited for peer-review critique. | Originality, misconduct, application cap. |
| NSF | Allowed with applicant responsibility for accuracy and authenticity. | Proposers are encouraged to indicate how generative AI was used. | Reviewers cannot upload proposal or review content to non-approved tools. | Authenticity, confidentiality. |
| UKRI | Allowed with caution. | Transparency expected where generative AI has been used. | Assessors must not use generative AI in assessment activities. | Sensitive data, bias, misconduct. |
| Horizon Europe-adjacent / RVO | Use with caution, source tracking, and attention to data policies. | Transparency emphasized in relevant guidance and forms. | Role and programme dependent. | Privacy, intellectual property, eligibility. |
| ERC | Applicants retain full authorship responsibility when using AI or external support. | Acknowledgment and responsibility emphasized. | Strict non-delegation and confidentiality rules for evaluators. | Reviewer delegation, confidentiality. |
| Wellcome | Use allowed with declaration expectations. | Declaration required except where AI is used to help with language. | Reviewer confidentiality emphasized through UK funder principles. | Disclosure, accountability. |

NIH: Originality, Fairness, and Submission Limits

NIH guidance on fairness and originality in research applications gives a direct standard for AI-generated grant content: applications substantially developed by AI, or sections substantially developed by AI, will not be considered the applicant’s original ideas.

For Principal Investigators, the NIH policy dictates the following operational boundaries:

  • Originality: Substantially AI-developed applications or sections are treated as lacking applicant originality.

  • Misconduct risk: NIH warns that AI use may produce plagiarism, fabricated citations, or other forms of research misconduct.

  • Submission limit: NIH will accept no more than six new, renewal, resubmission, or revision applications from an individual PI, Program Director, or Multiple PI in a calendar year, with some activity-code exceptions.

  • Peer review: In a separate notice, NIH prohibits peer reviewers from using generative AI to analyze applications or formulate critiques.

NSF: Disclosure Expectations and Merit Review Confidentiality

NSF permits AI use in proposal preparation while placing responsibility for the final submission on the applicant. You remain accountable for the accuracy, authenticity, and integrity of any proposal content developed with AI support.

NSF’s guidance includes three points researchers should track:

  • Proposers are encouraged to indicate whether and how generative AI was used in developing a proposal.

  • Applicants remain responsible for the accuracy, authenticity, and integrity of proposal content developed with AI support.

  • Reviewers are prohibited from uploading proposal content, review information, or related records to non-approved generative AI tools.

NSF has also updated its research misconduct language to include fabrication, falsification, or plagiarism committed through the use or assistance of AI-based tools in proposing, performing, reviewing, or reporting NSF-funded research. NSF’s PAPPG 24-1 Supplement 1 is the relevant source for that language.

UKRI: Transparency, Sensitive Data, and Assessor Restrictions

UKRI’s policy on generative AI in application preparation and assessment allows applicant use with caution. The policy focuses on transparency, data protection, bias, confidentiality, and research integrity.

For applicants, UKRI expects you to:

  • Use caution when entering information into generative AI tools.

  • Avoid entering sensitive or personal data without formal consent.

  • Check AI outputs for bias, falsification, fabrication, plagiarism, and misrepresentation.

  • Be transparent where generative AI tools have been used.

  • Avoid generative AI during interview stages where interviews form part of the application process.

For assessors, UKRI applies stricter rules. Assessors, reviewers, and panellists must not use generative AI tools as part of assessment activities, including for language, spelling, grammar, or formatting support.

Horizon Europe and RVO: Responsibility, Sources, Privacy, and IP

For Horizon Europe and Eureka proposals, AI guidance should be read at the level of the specific programme, call, and application form. RVO’s guidance translates AI-related risks into practical considerations about data handling, source tracking, consortium partners, grant consultants, and patent protection.

RVO advises you to check:

  • Which AI model sits behind the tool.

  • What happens to the data you enter and generate.

  • Who can access that data.

  • Whether training on your prompts and outputs can be disabled.

  • Whether consortium partners or grant consultants are using AI.

  • Whether the proposal includes commercially sensitive or patentable ideas.

RVO also warns that using generative AI on an idea you may later want to patent can create intellectual-property risk. For consortium proposals, this makes AI use a shared governance issue, not only an individual writing choice.

ERC: Human Evaluation and Non-Delegation

ERC distinguishes proposal preparation from proposal evaluation. The ERC Scientific Council’s position on AI states that researchers may seek input from AI technologies or human third parties for tasks such as brainstorming, literature search, revision, translation, or summarizing text. The applicant retains full authorship responsibility for acknowledgments, plagiarism, and good scientific conduct.

For evaluation, ERC’s guidance on AI use in grant proposal evaluation applies a strict non-delegation principle. Reviewers may not use AI to:

  • Summarize proposals.

  • Assess scientific merit.

  • Generate draft evaluations.

  • Determine novelty or methodology quality.

  • Replace their own expert judgment.

ERC allows limited language polishing or general information search only when proposal content and personal data are not shared with the tool, and when the reviewer’s evaluative judgment remains their own.

Wellcome and UK Funders: Declaration and Cross-Funder Alignment

Wellcome’s policy on generative AI in the grant application process requires applicants to declare the use of generative AI tools when applying for grant funding, except where AI is used only to help with language.

Wellcome’s position sits within a wider UK funder discussion. The Research Funders Policy Group joint statement recognizes possible benefits of generative AI, including language support and accessibility, while emphasizing risks around rigor, transparency, originality, reliability, data protection, confidentiality, intellectual property, copyright, and bias.

The joint statement also gives a clear reviewer-side rule: peer reviewers must not input confidential application or review content into generative AI tools or use those tools to develop peer-review critiques or applicant responses.

What Funding Agency AI Policies Have in Common

The agency-specific rules differ, but the direction of travel is consistent. Funding bodies are building AI policies around five governance priorities: accountability, originality, disclosure, confidentiality, and human peer review.

| Shared Principle | What You Need to Do |
|---|---|
| Accountability | Verify every claim, citation, budget detail, method, and compliance statement. |
| Originality | Keep the research idea, aims, hypotheses, methods, and novelty claims researcher-led. |
| Disclosure | Follow the funder’s rules for declaring or documenting AI use. |
| Confidentiality | Keep unpublished, sensitive, personal, patentable, and consortium material out of unapproved tools. |
| Human Peer Review | Treat proposal evaluation as expert human judgment. |

The Applicant Remains Accountable

AI use leaves responsibility with you, your co-applicants, and your institution. This accountability covers:

  • Scientific claims

  • Citations and references

  • Budget details

  • Methodological choices

  • Eligibility statements

  • Ethics and compliance language

  • AI-assisted text included in the proposal

For NSF proposals, this responsibility includes content developed with generative AI support. NSF policy also includes fabrication, falsification, or plagiarism committed with AI-based tools within its research misconduct language.

Originality Is the Main Boundary

The main policy question is whether AI shaped the intellectual substance of the proposal.

Keep these elements researcher-authored:

  • Specific aims

  • Research questions

  • Hypotheses

  • Methodological design

  • Novelty claims

  • Significance and impact claims

  • Interpretation of preliminary data

NIH states that applications substantially developed by AI, or sections substantially developed by AI, will not be considered original ideas of the applicant. Use that standard as a practical boundary across funders.

Disclosure Is Becoming a Risk-Management Practice

Disclosure requirements differ by funder. Clear documentation makes AI use easier to explain, verify, and defend if questions arise.

Current funder positions include:

  • NSF encourages proposers to indicate whether and how generative AI was used.

  • UKRI expects transparency where applicants have used generative AI tools.

  • Wellcome requires applicants to declare generative AI use, except where AI is used to help with language.

For PIs and research managers, treat AI documentation like any other compliance record. Keep an internal AI-use record with:

  • Tool name

  • Date used

  • Task performed

  • Prompt type

  • Output used

  • Human revision made

  • Disclosure language, where relevant
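The record fields above can be captured in a minimal structured log. This is a sketch using a hypothetical Python dataclass; the field names mirror the list above but are not mandated by any funder or institution:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class AIUseRecord:
    """One entry in an internal AI-use log (illustrative field names)."""
    tool_name: str
    date_used: str             # ISO date, e.g. "2025-06-01"
    task_performed: str        # e.g. "readability check on a researcher-written draft"
    prompt_type: str           # what kind of input was given to the tool
    output_used: str           # what was kept, if anything
    human_revision: str        # how the output was reviewed and edited
    disclosure_language: str = ""  # funder-facing disclosure text, where relevant

# Hypothetical example entry
record = AIUseRecord(
    tool_name="institution-approved editing assistant",
    date_used="2025-06-01",
    task_performed="grammar and clarity check",
    prompt_type="researcher-authored methods paragraph",
    output_used="accepted minor wording suggestions",
    human_revision="PI reviewed and edited all suggestions before use",
)

# Serialize the entry for a compliance file kept alongside the proposal
print(json.dumps(asdict(record), indent=2))
```

A plain spreadsheet with the same columns works just as well; the point is that each AI interaction leaves a dated, reviewable trace.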

Confidentiality Rules Are Tightening

Public AI tools can create confidentiality and intellectual-property risk when used on draft proposal material.

Use extra caution with:

  • Unpublished clinical data

  • Patient or participant information

  • Patentable ideas

  • Commercially sensitive methods

  • Budget or personnel details

  • Consortium strategy

  • Partner or institutional information

RVO’s guidance for Horizon Europe and Eureka applicants advises researchers to check what happens to data entered into AI tools, who can access it, whether training can be disabled, and whether partners or consultants are using AI. It also warns that entering patentable ideas into AI tools can create intellectual-property risks.

Peer Review Must Remain Human

Reviewer policies are stricter because reviewers receive confidential proposal content and make expert funding judgments.

Reviewer restrictions usually cover:

  • Uploading proposal content into unapproved AI tools

  • Summarizing confidential proposals with AI

  • Using AI to assess scientific merit

  • Using AI to generate draft evaluations

  • Replacing expert judgment with AI-generated reasoning

NIH prohibits reviewers from using generative AI to analyze applications or formulate critiques. ERC guidance states that reviewers may not use AI to summarize proposals, assess scientific merit, or generate draft evaluations. This is the non-delegation principle: proposal evaluation remains the work of human experts.

Policy and Misconduct Risks in AI-Assisted Grant Applications

The most credible AI-related risks in grant applications are evidentiary, procedural, and governance-related: who generated the idea, whether the claims are accurate, whether confidential material was exposed, whether AI use was documented, and whether expert judgment remained human-led.

| Risk Area | Why It Matters | Practical Control |
|---|---|---|
| Substantial AI generation | AI-generated aims, hypotheses, methods, or novelty claims may violate originality expectations. | Keep the intellectual core researcher-authored. |
| Fabricated citations or false claims | AI can invent sources, misstate findings, or produce inaccurate policy details. | Verify every citation, claim, date, and statistic. |
| Confidentiality and IP exposure | Unpublished or patentable material entered into public AI tools may create privacy or IP risks. | Use approved tools only, and keep sensitive material out of public systems. |
| Undisclosed or poorly documented AI use | Disclosure expectations differ by funder, and poor records make compliance harder to defend. | Keep an internal AI-use log. |
| Overreliance on AI detection | Detection tools are imperfect and should not guide your workflow. | Use documented, limited, human-led AI support. |

Substantial AI Generation

The highest-risk use case is asking AI to create the intellectual substance of the proposal. This includes specific aims, hypotheses, methods, novelty claims, research gaps, and impact arguments.

NIH states that applications substantially developed by AI, or sections substantially developed by AI, will not be considered original ideas of the applicant. That standard gives researchers a practical boundary: AI may support presentation, but the research logic should come from you and your team.

Use extra caution when AI has shaped:

  • Specific aims

  • Hypotheses

  • Study design

  • Methodological rationale

  • Innovation claims

  • Significance statements

  • Interpretation of preliminary data

Fabricated Citations and False Claims

Fabricated citations create a direct research-integrity risk because they are verifiable. A reviewer or administrator can check whether a reference exists, whether a study supports the claim attached to it, and whether a statistic has been reported accurately. In a competitive grant context, that kind of error affects credibility, not only compliance.

NIH warns that AI use may result in plagiarism, fabricated citations, or other forms of research misconduct. For NSF-funded work, fabrication, falsification, or plagiarism committed with the use or assistance of AI-based tools is included in NSF’s research misconduct language.

Before submission, verify:

  • Every citation exists.

  • Every cited source supports the claim.

  • Policy dates and eligibility rules are current.

  • Budget claims match the application instructions.

  • Preliminary data are described accurately.

  • AI-assisted summaries match the original sources.

Confidentiality and IP Breaches

Grant proposals often contain unpublished results, clinical data, institutional details, budgets, consortium strategy, and patentable ideas. Entering that material into an unapproved AI tool can create confidentiality and intellectual-property exposure.

RVO’s guidance for Horizon Europe and Eureka applicants advises researchers to check what happens to data entered into AI tools, who can access it, whether training on prompts and outputs can be disabled, and whether partners or consultants are using AI. RVO also warns that using generative AI on an idea you may later want to patent can create patent risk.

Reviewer rules show the same concern from the evaluation side. NIH prohibits reviewers from using generative AI to analyze applications or formulate critiques, and ERC prohibits reviewers from uploading proposals to external AI systems or using AI to summarize or assess them.

Undisclosed or Poorly Documented AI Use

Disclosure risk depends on the funder. Some policies require declaration, some expect transparency, and some encourage applicants to describe AI use.

Examples:

  • NSF encourages proposers to indicate whether and how generative AI was used.

  • UKRI expects transparency where applicants have used generative AI tools.

  • Wellcome requires applicants to declare generative AI use, except where AI is used only to help with language.

Keep a simple AI-use record with:

  • Tool name

  • Date used

  • Proposal section affected

  • Task performed

  • Type of prompt used

  • Output used or rejected

  • Human edits made

  • Disclosure language, where relevant

Overreliance on AI Detection Is Also Risky

AI detection is a weak foundation for research governance. Detection tools can produce false positives and inconsistent results, especially with technical, formulaic, or highly structured academic prose. A proposal workflow should be defensible because the ideas, evidence, and review process are well documented, not because the text is optimized to avoid a detector.

A defensible workflow starts with clear limits, documentation, and human review:

  • Use AI only for defined support tasks.

  • Keep the proposal’s intellectual substance researcher-authored.

  • Protect confidential and unpublished material.

  • Verify every citation and factual claim.

  • Document AI use.

  • Follow funder-specific disclosure rules.

  • Get human review before submission.

Lower-Risk and Higher-Risk AI Uses in Grant Writing

AI use in grant writing is lower risk when it supports researcher-led work, preserves confidential material, and leaves scientific judgment with the applicant team. Risk increases when AI generates intellectual substance, introduces unverifiable claims, or processes sensitive material outside approved systems.

Lower-Risk AI Uses

Lower-risk uses keep the research idea, analysis, and final judgment with you. They also avoid confidential, unpublished, personal, or patentable material unless the tool is institutionally approved.

Common lower-risk uses include:

| Lower-Risk Use | Appropriate Use Case |
|---|---|
| Grammar and readability checks | Refining researcher-written text for clarity, concision, spelling, or flow. |
| Translation | Translating researcher-written text, provided the tool meets institutional privacy rules. |
| Heading or structure review | Checking whether headings, section order, and transitions match funder instructions. |
| Reviewer-style clarity questions | Asking where a human reviewer may find the rationale, methods, or impact unclear. |
| Checklist creation | Creating a pre-submission checklist from public funder guidance. |
| Formatting support | Checking formatting requirements, page limits, or required application components. |
| Summarizing public guidance | Summarizing public funder instructions, not confidential proposal content. |

These uses still require verification. AI output should be treated as draft support, not as a source of record.

Higher-Risk AI Uses

Higher-risk uses affect authorship, originality, confidentiality, or compliance. These uses need stronger safeguards and may be prohibited by specific funders.

  • Generating aims, hypotheses, or research gaps: these define the intellectual substance of the proposal.

  • Generating methodology or novelty claims: these require disciplinary judgment and researcher accountability.

  • Drafting substantive sections from scratch: this may create originality concerns, especially under NIH’s AI policy.

  • Uploading confidential drafts to consumer AI tools: this can expose unpublished data, consortium plans, budgets, or IP-sensitive material.

  • Asking AI to assess scientific merit: evaluation is a human expert task, especially in peer review.

  • Using AI-generated citations without checking them: fabricated or inaccurate citations can create misconduct risk.

  • Using AI on patentable ideas: RVO warns this may create patent and disclosure risks.

  • Using AI during prohibited interview stages: UKRI prohibits generative AI use during interviews where interviews form part of the application process.

NIH states that applications substantially developed by AI will not be considered original ideas of the applicant, and RVO’s Horizon Europe and Eureka guidance warns applicants to weigh privacy, data-access, and patent risks when using generative AI in proposal writing.

Grey-Zone Uses: Rewriting, Summarizing, and Structuring

Some AI uses depend on how the task is framed. The same task can be lower risk or higher risk depending on the input, output, and degree of researcher control.

  • Rewriting. Lower risk: refining language for clarity, tone, concision, or grammar. Higher risk: changing the argument, claims, evidence hierarchy, or interpretation. Policy boundary: rewriting is lower risk when it stays at the language level.

  • Summarizing. Lower risk: summarizing public literature, public policy documents, or funder instructions. Higher risk: summarizing unpublished proposal drafts, confidential data, or reviewer materials. Policy boundary: risk depends on the confidentiality of the input.

  • Structuring. Lower risk: organizing existing researcher-authored ideas into a clearer outline. Higher risk: asking AI to create the conceptual framework, aims, or logic of the proposal. Policy boundary: structuring is lower risk when the ideas already come from you.

  • Reviewer-style feedback. Lower risk: asking where a draft may be unclear, under-supported, or poorly sequenced. Higher risk: asking AI to judge scientific merit, novelty, or fundability. Policy boundary: feedback is lower risk when it supports revision rather than replaces expert judgment.

A defensible rule is to confine AI to inspecting, clarifying, and structurally supporting researcher-authored drafts. Generating research ideas, applying expert judgment, and handling confidential material must remain under human and institutional control.

ERC’s evaluation guidance applies this boundary clearly for reviewers: AI may not be used to summarize proposals, assess scientific merit, or generate draft evaluations.

AI Disclosure in Grant Applications: What Researchers Should Document

AI disclosure rules differ by funder. Some funders require a declaration, some expect transparency, and some encourage applicants to describe how generative AI was used. Treat documentation as part of the grant file, especially when AI shaped drafting, translation, structure, review, or formatting.

  • Required declaration: include a clear AI-use statement where the funder asks for one.

  • Transparency expected: document AI use and disclose it in the format requested by the funder.

  • Disclosure encouraged: keep a record and consider a concise statement if AI materially shaped the proposal.

  • No clear guidance: follow institutional policy and keep an internal audit trail.

When to Disclose AI Use

Disclose AI use when the funder, programme, or institution asks for it. Also document AI use when the tool materially shaped the proposal workflow.

Use disclosure or internal documentation when:

  • The funder requires a declaration.

  • The funder expects transparency.

  • The application instructions request information about AI use.

  • AI supported drafting, translation, structure, review, or formatting in a material way.

  • AI was used by a grant consultant, consortium partner, or external editor.

  • AI helped generate language that appears in the final application.

Wellcome requires applicants to declare generative AI use when applying for grant funding, except where AI is used to help with language. 

UKRI expects applicants to be transparent where they have used generative AI tools. 

NSF encourages proposers to indicate whether generative AI was used and how it was used.

Always follow programme-specific instructions first. A call-specific rule can be more relevant than general agency guidance.

What to Include in an AI-Use Statement

A useful AI-use statement should be specific enough to clarify the role of AI and brief enough to avoid creating unnecessary ambiguity.

Include:

  • Tool name and version, if available.

  • Task performed, such as grammar review, translation, formatting, structural feedback, or checklist creation.

  • Proposal section affected, if relevant.

  • Data handling, including whether confidential, personal, unpublished, or patentable material was entered.

  • Verification process, including manual checking of claims, citations, sources, and policy details.

  • Authorship responsibility, confirming that the research ideas, claims, methodology, interpretation, and final submission remain the applicant’s responsibility.

Example wording to adapt where appropriate:

Generative AI was used for language refinement and structural review of researcher-authored text. No confidential, personal, unpublished, or patentable material was entered into public AI tools. The applicant team reviewed and verified all claims, citations, methods, and final wording. The research ideas, analysis, and final submission remain the responsibility of the applicants.

What to Avoid in an AI-Use Statement

Avoid vague or overbroad language. A disclosure should clarify the role of AI, not create uncertainty about authorship.

Avoid:

  • “AI was used to help write this proposal.”
    This does not explain the task, scope, or level of human control.

  • Language suggesting AI generated the scientific idea.
    Specific aims, hypotheses, methodology, novelty claims, and interpretation should remain researcher-authored.

  • Generic acknowledgments for substantive AI use.
    If AI materially shaped the proposal workflow, make the disclosure clear and easy to locate.

  • Claims you cannot document.
    Keep records of tool use, prompts, outputs, verification steps, and final human edits.

  • Unverified AI-generated citations or factual claims.
    Every source, statistic, policy detail, and eligibility claim should be checked against the original source before submission.

A Responsible AI Workflow for Grant Writers and PIs

A compliant workflow treats AI as a diagnostic partner rather than a ghostwriter. Establishing clear, project-level rules before drafting begins mitigates compliance risks and aligns your team with a defensible, human-led methodology. Enforce this standardized process across PIs, co-authors, research managers, external grant consultants, and consortium partners.

  1. Check funder and institutional policies. Output: confirmed AI rules for the call.

  2. Define permitted and prohibited AI tasks. Output: a project-level AI use rule.

  3. Keep confidential content out of unapproved tools. Output: protected proposal data.

  4. Use AI for feedback. Output: researcher-led draft revision.

  5. Verify claims and citations. Output: a checked evidence base.

  6. Maintain an audit trail. Output: documented AI use.

  7. Use human review. Output: a final expert-led submission check.

Step 1: Check Funder and Institutional Policies Before Drafting

Start with the funder’s current AI guidance. Then check the specific funding call, application form, and your institution’s research office policy.

Check:

  • Agency-level guidance

  • Programme-specific instructions

  • Institutional AI policy

  • Approved tool lists

  • Data protection requirements

  • Disclosure requirements

For example, NIH’s AI guidance on originality and fairness, UKRI’s generative AI policy for applications and assessment, and Wellcome’s policy on generative AI in grant applications set different expectations. Programme-specific instructions should guide the final workflow.

Step 2: Define Permitted and Prohibited AI Tasks

Create a short project-level AI rule before drafting starts. Share it with everyone involved in the application.

Include:

  • Permitted uses, such as grammar review, formatting, translation, checklist creation, and structural feedback

  • Prohibited uses, such as generating aims, hypotheses, methodology, novelty claims, or merit assessments

  • Approved tools

  • Data that cannot be entered into AI systems

  • Disclosure rules for the target funder

  • Documentation requirements

This matters most in multi-author and consortium proposals, where one person’s AI use can affect the compliance position, confidentiality, and disclosure record of the whole application.
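As an illustration only, a project-level AI rule can be captured as simple structured data that every collaborator checks planned tasks against before touching a tool. Everything below is hypothetical: the field names, the task labels, and the tool name "InstitutionGPT" are placeholders, not funder terminology.

```python
# Hypothetical project-level AI-use rule; all names are illustrative placeholders.
AI_RULE = {
    "permitted": {"grammar review", "formatting", "translation",
                  "checklist creation", "structural feedback"},
    "prohibited": {"generating aims", "generating hypotheses",
                   "generating methodology", "novelty claims",
                   "merit assessment"},
    "approved_tools": {"InstitutionGPT"},  # placeholder institutional tool
}

def check_use(task: str, tool: str) -> str:
    """Return a simple verdict for a planned AI task under the project rule."""
    if task in AI_RULE["prohibited"]:
        return "prohibited"
    if tool not in AI_RULE["approved_tools"]:
        return "needs approval"
    if task in AI_RULE["permitted"]:
        return "permitted"
    return "review with research office"

print(check_use("grammar review", "InstitutionGPT"))   # permitted
print(check_use("generating aims", "InstitutionGPT"))  # prohibited
```

A one-page document shared with the consortium serves the same purpose; the point is that the rule is explicit and agreed before drafting starts, not reconstructed after the fact.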

Step 3: Keep Confidential Content Out of Unapproved Tools

Protect unpublished and sensitive material from the start. Public AI tools can create risk when used on:

  • Unpublished data

  • Personal or clinical information

  • Financial details

  • Patentable ideas

  • Commercially sensitive methods

  • Consortium strategy

  • Partner or institutional information

Use institutionally approved tools where available. If no approved tool exists, keep confidential material out of AI systems.

RVO’s Horizon Europe and Eureka guidance advises applicants to check what happens to data entered into AI tools, who can access it, whether training can be disabled, and whether partners or consultants are using AI. It also warns that entering patentable ideas into AI tools can create patent risk.

Step 4: Use AI for Feedback, Not Intellectual Substitution

Use AI to review the presentation of researcher-authored ideas. Keep the scientific substance with the research team.

Appropriate feedback uses include:

  • Identifying unclear claims

  • Flagging missing transitions

  • Checking section alignment

  • Finding unsupported statements

  • Reviewing whether aims, methods, and impact claims connect clearly

  • Turning reviewer-style questions into a revision checklist

Keep these researcher-authored:

  • Specific aims

  • Research gap

  • Hypotheses

  • Methodology

  • Novelty claim

  • Scientific rationale

  • Interpretation of preliminary data

Structured feedback belongs at the revision stage, after the research team has defined the aims, methods, evidence base, and argument. thesify’s Grant Assistant can support this workflow by helping you identify relevant grant opportunities, review fit against your researcher profile and funding preferences, and organize the next steps toward a stronger application. Used responsibly, this keeps the strategic funding decision and scientific judgment with the applicant team while giving PIs, group leaders, and research managers a clearer path from grant discovery to proposal development.

Step 5: Verify Every Claim, Citation, and Policy Detail

AI-assisted text still requires full verification. Check every claim against original sources before submission.

Verify:

  • Citations

  • Source accuracy

  • Policy dates

  • Eligibility rules

  • Budget rules

  • Funder terminology

  • Preliminary data descriptions

  • Claims about novelty, significance, and impact

NIH warns that AI use may produce plagiarism, fabricated citations, or other forms of research misconduct. Treat citation checking as a required part of any AI-assisted workflow.

Step 6: Maintain an Audit Trail

Keep a record of AI use during proposal development. This helps you prepare disclosure language and answer questions later.

Record:

  • Tool name

  • Tool version, where available

  • Date used

  • Proposal section affected

  • Task performed

  • Prompt type

  • Output used or rejected

  • Human edits made

  • Final disclosure language

A simple spreadsheet is enough. The goal is to show that AI supported feedback, formatting, or language review, while the research ideas, claims, and final submission remained under human control.
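If the team prefers a script over a spreadsheet, the same record can be kept as a CSV file appended to during drafting. This is a minimal sketch: the column set mirrors the fields listed above, and the tool name "ExampleLLM" and all entry values are hypothetical.

```python
import csv
from pathlib import Path

# Hypothetical column set mirroring the audit-trail fields listed above.
FIELDS = ["tool", "version", "date", "section", "task",
          "prompt_type", "output_used", "human_edits", "disclosure"]

def log_ai_use(path, **entry):
    """Append one AI-use record to a CSV audit trail, creating it if needed."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({k: entry.get(k, "") for k in FIELDS})

log_ai_use(
    "ai_audit_trail.csv",
    tool="ExampleLLM",  # placeholder tool name
    version="1.0",
    date="2025-11-03",
    section="Background",
    task="grammar review",
    prompt_type="copy-edit request",
    output_used="accepted with edits",
    human_edits="PI revised final wording",
    disclosure="language support only",
)
```

Either format works; what matters is that the record exists before anyone asks for it.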

Step 7: Use Human Review Before Submission

Use human review as the final quality and compliance check.

Include:

  • PI review

  • Co-author review

  • Research office review

  • Budget or finance review

  • Ethics or data protection review, where relevant

  • Disciplinary peer review, where possible

The final review should confirm that the proposal is accurate, original, funder-compliant, and ready for expert assessment.

How thesify Supports Responsible Grant Proposal Revision

Funding agency AI policies make unrestricted AI text generation risky for grant applications. A responsible workflow keeps your research idea, funding strategy, evidence, and scientific judgment at the center of the proposal.

thesify’s Grant Assistant supports this workflow by helping you move from grant discovery to application planning. You can provide information about your research, funding preferences, and relevant documents, then review suggested grant opportunities that match your profile. This helps PIs, group leaders, and research managers assess fit before investing time in a full application.

For PIs and research managers, thesify makes the funding process more structured. It can help you:

  • Identify grant opportunities aligned with your research profile

  • Compare potential funding routes more efficiently

  • Organize application requirements and next steps

  • Clarify whether a call fits your project, career stage, and institutional context

  • Move from opportunity scanning to a more focused proposal workflow

For researchers working under evolving AI policies, the safest use of Grant Assistant is at the planning and discovery stage. It can help you identify relevant funding opportunities, assess fit, and organize next steps while the research idea, funder-fit decision, proposal argument, and final submission remain with the applicant team.

For practical next steps, see how to get grant proposal feedback with thesify and high-impact grant writing for 2026 review panels.

Pre-Submission Checklist for AI-Assisted Grant Applications

Before institutional routing or final research-office approval, Principal Investigators should use this checklist to confirm that the AI workflow is documented, funder-compliant, and researcher-led.

  1. Confirm the current funder policy. 

Review the latest AI guidance from the relevant agency, including NIH, NSF, UKRI, ERC, or Wellcome.

  2. Check programme-specific instructions.

Review the exact call text, application form, applicant guidance, and funder FAQs. Programme-level rules can be more specific than agency-wide policy.

  3. Check institutional policy and approved tools.

Confirm whether your university, hospital, research institute, or grants office has an AI policy, approved tool list, or data protection requirement.

  4. Define what AI was used for.

Record whether AI supported grammar review, translation, formatting, checklist creation, structural feedback, or another defined task.

  5. Confirm researcher authorship of the proposal’s intellectual substance.

Specific aims, hypotheses, methods, novelty claims, significance claims, and conclusions should remain authored by the research team.

  6. Verify every AI-assisted claim.

Check technical statements, policy claims, eligibility details, budget language, and descriptions of preliminary data against authoritative sources.

  7. Check every citation and factual statement.

Confirm that each reference exists, is current, and supports the claim attached to it.

  8. Protect confidential and sensitive material.

Confirm that unpublished data, personal information, clinical details, budget information, consortium strategy, and patentable ideas stayed out of unapproved AI tools. RVO’s Horizon Europe and Eureka guidance is especially useful for checking privacy, data access, consultant use, and patent risk.

  9. Prepare an AI-use disclosure where required or appropriate.

Use the funder’s preferred format. State the tool used, the task performed, the verification process, and the applicant’s responsibility for the final submission.

  10. Save an audit trail.

Keep a record of tools, dates, prompt types, outputs used or rejected, human edits, and final disclosure language.

  11. Get human expert review before submission.

Include PI review, co-author review, research office review, budget review, ethics or data protection review where relevant, and disciplinary peer review where possible.

Final Takeaway: AI Policy Is Now Part of Research Governance

The debate over artificial intelligence in grant writing is no longer about writing mechanics; it is now part of research governance. For Principal Investigators and research administrators, delegating work to AI carries direct risks to authentic authorship, data confidentiality, institutional liability, and the credibility of the peer-review process.

For PIs, research managers, and grant writers, the safest approach is documented, limited, human-led AI use. That means:

  • Keeping the research idea, aims, methods, novelty claims, and conclusions researcher-authored

  • Using AI only for defined support tasks, such as clarity checks, formatting, translation, or structural feedback

  • Protecting confidential, unpublished, personal, and patentable material

  • Verifying every citation, claim, policy detail, and budget statement

  • Disclosing AI use where required or expected

  • Keeping an audit trail of tools, prompts, outputs, and human revisions

  • Using expert human review before submission

Funding agency AI policies will continue to develop. Researchers who build transparent workflows now will be better positioned as disclosure rules, institutional compliance checks, and funder expectations become more formalized.

FAQs on AI in Grant Applications

Can Researchers Use AI in Grant Applications?

Yes, depending on the funder and the task. Limited AI assistance for editing, formatting, translation, or clarity checks may be acceptable. You remain responsible for the proposal’s originality, technical accuracy, required disclosures, and protection of confidential data.

Do Funders Require Researchers to Disclose AI Use?

Disclosure mandates vary. The NSF strongly encourages proposers to indicate how generative AI was used, while the Wellcome Trust requires applicants to declare generative AI use except where it was used to help with language. When guidance is unclear, keep an internal record, check programme-specific instructions, and follow the funder’s required format.

Can AI Write My Specific Aims or Research Strategy?

This is high risk. Specific aims, hypotheses, methodologies, and novelty claims should originate from the research team. NIH states that applications substantially developed by AI will not be considered original ideas of the applicant.

Can Peer Reviewers Use AI to Summarize Grant Applications?

Generally, no. Reviewers handle confidential proposal material, including unpublished ideas, methods, and preliminary findings. ERC, NIH, and UKRI restrict reviewers from using AI to summarize proposals, assess scientific merit, or generate draft evaluations.

What Is the Biggest Risk of Using AI in Grant Writing?

The main risks are loss of originality, fabricated citations, inaccurate claims, confidentiality breaches, and failure to follow funder-specific disclosure rules. Fabricated references and false claims are especially damaging because they can be checked directly and undermine the credibility of the application.

Should Research Teams Keep an AI Audit Trail?

Yes. Keep an internal audit trail that records which tools were used, what tasks they supported, what outputs were used or rejected, and how the research team verified the final proposal. For PIs and research managers, this record helps align co-authors, consultants, and institutional review processes.

Can External Grant Consultants Use AI?

Potentially, but the PI and host institution remain responsible for the final application. Consultants should disclose which tools they use, avoid entering confidential content into public models, and follow the target funder’s AI policy.

Try thesify’s Grant Assistant for Free

Before your next funding deadline, use thesify’s Grant Assistant to find relevant grant opportunities, assess fit, and organize your next application steps. You keep control of the research idea, funding strategy, and final proposal while thesify helps make the grant planning process clearer.

Sign up for thesify for free and try the Grant Assistant before your next funding deadline.

Related Articles

  • AI Policies in Academic Publishing 2025 – What Authors Need to Know: Explore the latest AI policies from major publishers, compare their rules and follow our step‑by‑step checklist to ensure ethical, transparent use of AI in your research. Get a granular, evidence-based breakdown of each major publisher's specific policies, serving as a detailed reference for authors navigating the submission process.

  • How to Get Grant Proposal Feedback with thesify: Discover a practical workflow for obtaining grant proposal feedback through thesify. Upload your draft, analyse missing sections and iterate using Refresh and Export. If you have specific questions about the feedback, type them into the Ask follow‑up field in the Feedback tab. Theo can clarify why a particular section was flagged or suggest examples of strong impact statements.

  • Grant Writing Strategies: The 2026 Logic Audit Framework | thesify: In the current hyper-competitive funding climate, a "good" research proposal is no longer enough. Reviewers in 2026 are increasingly scanning for something beyond feasibility: they are looking for scientific rigor and absolute transparency.  Stop writing generic grant proposals. Learn how to survive the "triage" phase using a reviewer-first logic audit. Discover how thesify helps researchers stress-test their methodology for higher funding success rates.

Thesify enhances academic writing with detailed, constructive feedback, helping students and academics refine skills and improve their work.
Subscribe to our newsletter

Ⓒ Copyright 2025. All rights reserved.

Follow Us: