Webinar
Upcoming

May 5, 2026, 11:00 am

The Grants Operations Masterclass

A free three-part series on how grant programs actually run

For grantmakers who want to stop fixing problems downstream that should have been solved upstream — and stop running their programs in spreadsheets, email threads, and best guesses.

Why this series exists

Most grantmaking advice you'll find online is strategic. It's about vision, equity, trust-based philanthropy, the future of giving. Important conversations — but they don't help you on Monday morning when proposals are coming in, your panel is overwhelmed, and your board is asking what last year's grant cycle actually accomplished.

This series is about Monday morning.

Three sessions. Each one focused on a specific stage of the grant lifecycle. Each one designed to leave you with concrete operational moves you can apply to your next cycle. No theory. No vision decks. Just the operational playbook from grantmakers who have figured out how to run cycles that respect grantees, protect program officers, and produce data their boards and donors actually want to see.

Session 1 — Setting Up the Front End

Eligibility, LOIs, and Proposal Design

A great grant program does two things at intake: it makes applying possible for the right organizations, and it makes sure those organizations are aligned with the right funding opportunity before they invest weeks of capacity in a full proposal. Most programs are unintentionally working against both.

A typical full grant proposal takes 30 to 60 hours of nonprofit staff time to prepare. When that proposal goes to a funder whose priorities the organization doesn't actually fit, that's lost capacity an under-resourced nonprofit can't get back. Smaller, leaner organizations — often the ones doing the most direct community work — opt out of grant cycles entirely because they can't afford to gamble that kind of time on a long shot.

On the other side of the form, your program officers are paying for it too. Proposals from organizations outside your geography. Proposals from organizations whose mission doesn't actually match your funding strategy. Proposals missing key documents that should have been required at submission. Hours of program officer time, every cycle, spent disqualifying or chasing down what the application should have handled automatically.

This session covers both sides — and shows you how to fix them at the same time:

  • Letter of intent and concept paper workflows — how a lightweight LOI step lets organizations test alignment before committing to a full proposal, and lets you invite the right grantees forward without crushing the rest with rejection
  • Eligibility logic that does its job upstream — 501(c)(3) verification, geography, mission alignment, organizational size, prior grant history, financial standing checks that disqualify automatically before review
  • Auto-matching across funding programs — how one application can route organizations to the right funding opportunity, the right cycle, or the right initiative, eliminating manual triage by program officers
  • Proposal design that respects grantee capacity — question ordering, conditional logic, document upload workflows, autosave, and the principle that every question on your form has to earn its place against the cost of the grantee's time
  • What to ask now versus what to defer to post-award — the question-timing decisions that lift completion rates and protect grantee capacity without sacrificing decision quality
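The upstream screening the session describes can be pictured as a simple rules pass that runs at submission. The sketch below is illustrative only: the field names, state list, mission tags, and budget ceiling are hypothetical placeholders, not Reviewr's actual data model.

```python
from dataclasses import dataclass

# Hypothetical applicant record; every field and threshold here is
# an illustrative assumption, not any platform's real schema.
@dataclass
class Applicant:
    name: str
    is_501c3: bool
    state: str
    annual_budget: float
    mission_tags: set

ELIGIBLE_STATES = {"OH", "MI", "IN"}       # example geography filter
REQUIRED_TAGS = {"youth", "education"}     # example mission alignment
MAX_BUDGET = 5_000_000                     # example org-size ceiling

def screen(applicant: Applicant) -> list:
    """Return upstream disqualification reasons (empty list = eligible)."""
    reasons = []
    if not applicant.is_501c3:
        reasons.append("not a verified 501(c)(3)")
    if applicant.state not in ELIGIBLE_STATES:
        reasons.append("outside funded geography")
    if not applicant.mission_tags & REQUIRED_TAGS:
        reasons.append("mission does not match funding strategy")
    if applicant.annual_budget > MAX_BUDGET:
        reasons.append("organization exceeds size ceiling")
    return reasons
```

The point of the sketch is where it runs: at submission, before a program officer ever opens the file, so every hour of manual disqualification work moves upstream into the form itself.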

Walk away with: a checklist of every eligibility filter you should be running upstream, a framework for designing LOI and full proposal workflows that respect grantee capacity, and a live look at how Reviewr handles auto-matching across multiple funding programs and cycles from a single application.

Session 2 — The Review Engine

Assignment, Panel Review, and Defensible Scoring

Most grant programs spend a lot of energy thinking about who sits on the review panel. Almost none spend the same energy on how proposals get assigned to those panelists — and that's where funding decision integrity actually lives.

Two reviewers, same scorecard, same proposal pool. One scores everything between 7 and 9. The other scores everything between 4 and 6. Whose proposals get funded? In most programs, the answer is "the ones assigned to the first reviewer" — not because their proposals were stronger, but because no one corrected for the difference.
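The standard correction for that gap is score normalization, converting each reviewer's raw scores into a measure of where a proposal sits within that reviewer's own range. A minimal sketch, using z-scores and hypothetical numbers matching the two-reviewer example above:

```python
import statistics

# Illustrative raw scores: reviewer A scores everything 7-9,
# reviewer B scores everything 4-6 (numbers are hypothetical).
raw_scores = {
    "A": {"p1": 7, "p2": 8, "p3": 9},
    "B": {"p4": 4, "p5": 5, "p6": 6},
}

def normalize(scores_by_reviewer):
    """Convert each reviewer's scores to z-scores, so rankings reflect
    a proposal's standing within that reviewer's own distribution."""
    out = {}
    for reviewer, scores in scores_by_reviewer.items():
        mean = statistics.mean(scores.values())
        stdev = statistics.pstdev(scores.values()) or 1.0  # guard zero spread
        for proposal, score in scores.items():
            out[proposal] = (score - mean) / stdev
    return out
```

With these numbers, each reviewer's top proposal normalizes to the same z-score, and reviewer B's best proposal now outranks reviewer A's weakest, an ordering the raw scores would have reversed.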

That's just one of the silent integrity problems most panels have. Sequential review queues that compound reviewer fatigue. Identical proposal order across panelists that compounds anchoring bias. Panel discussions where the loudest voice anchors consensus. External subject-matter experts mismatched to the proposals they're scoring. Scoring rubrics that look rigorous on paper but produce wildly different results across the panel.

This is the session that takes grant review from "however the panel feels today" to a defensible, consistent, auditable funding decision:

  • Assignment and randomization — how to pair proposals with panelists in a way that protects against bias and fatigue compounding across the review group
  • Shuffled review order — why no two panelists should see the same proposals in the same sequence, and what that protects against
  • Score normalization — how to correct for panelists who score harder or softer than their peers, so your final rankings reflect the proposals themselves rather than the luck of which panelist they were assigned to
  • Panel review and discussion workflows — how to structure consensus discussion so the loudest voice doesn't anchor the outcome, with audit trails that satisfy compliance and board scrutiny
  • Side-by-side scoring — the comparison view that ends the slow, lonely, one-proposal-at-a-time review queue and cuts review time meaningfully
  • AI pre-scoring for proposals — what it does, what it doesn't do, and how to use it as a calibration tool that accelerates panelists without overriding human judgment. We address bias, fairness, data privacy, and human oversight directly.
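The first two bullets, randomized assignment and shuffled review order, can be illustrated with a toy model. This is a sketch of the general technique, not Reviewr's implementation; panel sizes and the load-balancing rule are assumptions.

```python
import random

def assign(proposals, panelists, reviews_per_proposal=3, seed=None):
    """Randomly assign each proposal to `reviews_per_proposal` panelists,
    balancing workload, then give each panelist a shuffled reading order."""
    rng = random.Random(seed)
    queues = {p: [] for p in panelists}
    for proposal in proposals:
        # Spread load: pick the least-loaded panelists, breaking ties randomly.
        ranked = sorted(panelists, key=lambda p: (len(queues[p]), rng.random()))
        for panelist in ranked[:reviews_per_proposal]:
            queues[panelist].append(proposal)
    for queue in queues.values():
        rng.shuffle(queue)  # no two panelists read in the same sequence
    return queues
```

The shuffle at the end is what guards against position effects: a proposal that one panelist reads last, fatigued, is read mid-queue or first by its other panelists, so ordering bias does not compound across the panel.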

Walk away with: a clear picture of what randomization, shuffling, and normalization actually do — and why they matter for grant decisions, a framework for cutting panelist time per proposal without losing decision quality, a live look at side-by-side scoring and AI pre-scoring inside Reviewr, and direct answers on the questions your panel and board will ask before adopting any of it.

Session 3 — After the Award

Disbursement, Reporting, and Portfolio Impact

Most grant platforms — and most grant teams — go quiet the moment awards are announced. That's the exact moment your grant's most important work begins, and where grantmaking diverges most sharply from any other application-based program.

A grant isn't a one-time event. It's a multi-year relationship with a disbursement schedule, milestone obligations, interim and final reporting requirements, compliance checkpoints, and impact measurement at the end of it all. Every stage you don't have a workflow for becomes a manual lift — chasing down financial reports over email, tracking tranched disbursements in spreadsheets, scheduling site visits in a calendar that nobody else can see, collecting narrative reports as PDFs that never get aggregated into anything.
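To make the tranched-disbursement idea concrete, here is a minimal sketch of milestone-gated release. The structure and milestone names are hypothetical, not any platform's actual schema; the one real rule it encodes is that tranches release in order, and each one waits on a verified milestone.

```python
from dataclasses import dataclass

@dataclass
class Tranche:
    amount: float
    milestone: str        # what must be verified before release
    released: bool = False

def release_next(tranches, verified_milestones):
    """Release the earliest unreleased tranche whose milestone is verified;
    return the amount released, or 0.0 if the next tranche is still gated."""
    for tranche in tranches:
        if not tranche.released:
            if tranche.milestone in verified_milestones:
                tranche.released = True
                return tranche.amount
            return 0.0  # tranches release in order; stop at the first gate
    return 0.0
```

Even this toy version produces what the spreadsheet never does: a self-documenting record of which dollars went out, against which verified milestone, in which order.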

When the board asks what last year's portfolio accomplished, the answer becomes vague. "We awarded $4.2 million across 38 grants." That's an output, not an outcome. The story your grant program could tell — the projects completed, the people served, the community outcomes, the grantee voices, the portfolio-level impact through-line — sits in fragments across email threads and one-off folders, never quite assembled into the report your funders, board, and stakeholders are actually asking for.

The other quiet failure: reporting burden on grantees. Funders ask for the same data in different formats across overlapping reports, often pulling exhausted nonprofit staff away from the work they were funded to do. Done well, post-award workflows reduce that burden. Done badly, they add to it.

This is the session that closes the loop — and the one most platforms can't help you with:

  • Grant agreements and award acceptance — what to capture, when, and how to make execution feel like a continuation of the relationship rather than a new bureaucratic process
  • Tranched and milestone-based disbursement — closing the gap between award and dollars-out-the-door, with audit trails that satisfy compliance, boards, and donor stewardship
  • Interim and final reporting workflows — narrative reports, financial reports, milestone check-ins, structured so grantees aren't asked to repeat themselves and your team can aggregate without manual effort
  • Compliance, site visits, and program officer check-ins — scheduling, documentation, and the continuity that turns transactional grantmaking into a real grantee relationship
  • Multi-year grants and renewal cycles — milestone verification, continued eligibility, and the workflow that turns annual renewal from a fire drill into a clean, structured process
  • Portfolio-level impact data — the grantee data you should be capturing now that becomes your annual report, your donor stewardship story, your board update, and your case for the next round of funding
  • The annual portfolio report your board and donors actually want to see — how to structure post-award data capture so the report writes itself

Walk away with: a post-award workflow checklist covering every stage from grant agreement through final impact reporting, a clear framework for what data to capture when (and why timing matters), a live look at how Reviewr handles disbursement tracking, milestone reporting, and portfolio-level impact measurement, and a reusable structure for the annual portfolio report your board, donors, and funders actually want to read.