A free three-part series on how grant programs actually run
For grantmakers who want to stop fixing problems downstream that should have been solved upstream — and stop running their programs in spreadsheets, email threads, and best guesses.
Most grantmaking advice you'll find online is strategic. It's about vision, equity, trust-based philanthropy, the future of giving. Important conversations — but they don't help you on Monday morning when proposals are coming in, your panel is overwhelmed, and your board is asking what last year's grant cycle actually accomplished.
This series is about Monday morning.
Three sessions. Each one focused on a specific stage of the grant lifecycle. Each one designed to leave you with concrete operational moves you can apply to your next cycle. No theory. No vision decks. Just the operational playbook from grantmakers who have figured out how to run cycles that respect grantees, protect program officers, and produce data their boards and donors actually want to see.
Eligibility, LOIs, and Proposal Design
A great grant program does two things at intake: it makes applying possible for the right organizations, and it makes sure those organizations are aligned with the right funding opportunity before they invest weeks of capacity in a full proposal. Most programs are unintentionally working against both.
A typical full grant proposal takes 30 to 60 hours of nonprofit staff time to prepare. When that proposal goes to a funder whose priorities the organization doesn't actually fit, that's lost capacity an under-resourced nonprofit can't get back. Smaller, leaner organizations — often the ones doing the most direct community work — opt out of grant cycles entirely because they can't afford to gamble that kind of time on a long shot.
On the other side of the form, your program officers are paying for it too. Proposals from organizations outside your geography. Proposals from organizations whose mission doesn't actually match your funding strategy. Proposals missing key documents that should have been required at submission. Hours of program officer time, every cycle, spent disqualifying or chasing down what the application should have handled automatically.
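The upstream checks described above can be expressed as simple rules run at submission time, before a program officer ever opens the file. A minimal sketch in Python; the specific criteria (geography, mission tags, required documents) are illustrative assumptions, not Reviewr's actual rules:

```python
from dataclasses import dataclass

# Illustrative eligibility rules -- real criteria vary by program.
FUNDED_STATES = {"OH", "MI", "IN"}           # geography filter
MISSION_TAGS = {"youth", "education"}        # strategy-fit filter
REQUIRED_DOCS = {"501c3_letter", "budget"}   # completeness filter

@dataclass
class Application:
    org_name: str
    state: str
    mission_tags: set
    documents: set

def screen(app: Application) -> list[str]:
    """Return every reason an application fails upstream screening.
    An empty list means the application is eligible to proceed."""
    problems = []
    if app.state not in FUNDED_STATES:
        problems.append(f"outside funded geography ({app.state})")
    if not app.mission_tags & MISSION_TAGS:
        problems.append("mission does not match funding strategy")
    missing = REQUIRED_DOCS - app.documents
    if missing:
        problems.append(f"missing documents: {sorted(missing)}")
    return problems
```

Run at the point of submission, checks like these stop a 30-to-60-hour proposal before it is written, not after a program officer has read it.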
This session covers both sides — and shows you how to fix them at the same time:
Walk away with: a checklist of every eligibility filter you should be running upstream; a framework for designing LOI and full proposal workflows that respect grantee capacity; and a live look at how Reviewr handles auto-matching across multiple funding programs and cycles under one application platform.
Assignment, Panel Review, and Defensible Scoring
Most grant programs spend a lot of energy thinking about who sits on the review panel. Almost none spend the same energy on how proposals get assigned to those panelists — and that's where funding decision integrity actually lives.
Two reviewers, same scorecard, same proposal pool. One scores everything between 7 and 9. The other scores everything between 4 and 6. Whose proposals get funded? In most programs, the answer is "the ones assigned to the first reviewer" — not because their proposals were stronger, but because no one corrected for the difference.
That's just one of the silent integrity problems most panels have. Sequential review queues that compound reviewer fatigue. Identical proposal order across panelists that reinforces anchoring bias. Panel discussions where the loudest voice anchors consensus. External subject-matter experts mismatched to the proposals they're scoring. Scoring rubrics that look rigorous on paper but produce wildly different results across the panel.
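The two-reviewer problem above is correctable with score normalization: convert each reviewer's raw scores into how far they sit above or below that reviewer's own average, so a hard grader and an easy grader become comparable. A minimal sketch using standard z-scores; the proposal IDs and numbers are illustrative:

```python
from statistics import mean, stdev

def normalize(scores: dict[str, float]) -> dict[str, float]:
    """Convert one reviewer's raw scores to z-scores: how many
    standard deviations each proposal sits above or below that
    reviewer's own average."""
    mu, sigma = mean(scores.values()), stdev(scores.values())
    return {pid: (s - mu) / sigma for pid, s in scores.items()}

# The generous reviewer (scores 7-9) and the tough reviewer (scores 4-6)
# from the example above, each assigned a different pool of proposals.
generous = {"A": 7, "B": 8, "C": 9}
tough    = {"D": 4, "E": 5, "F": 6}

# After normalization, B and E both land at 0.0 -- each was exactly
# average for its own reviewer -- so neither pool gets an advantage
# just because of who happened to score it.
ranked = {**normalize(generous), **normalize(tough)}
```

On the raw scale, every proposal in the generous pool outranks every proposal in the tough pool; after normalization, the top proposal from each pool ranks equally, which is the correction the example calls for.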
This is the session that takes grant review from "however the panel feels today" to a defensible, consistent, auditable funding decision:
Walk away with: a clear picture of what randomization, shuffling, and normalization actually do — and why they matter for grant decisions; a framework for cutting panelist time per proposal without losing decision quality; a live look at side-by-side scoring and AI pre-scoring inside Reviewr; and direct answers on the questions your panel and board will ask before adopting any of it.
Disbursement, Reporting, and Portfolio Impact
Most grant platforms — and most grant teams — go quiet the moment awards are announced. That's the exact moment your grant program's most important work begins, and where grantmaking diverges most sharply from any other application-based program.
A grant isn't a one-time event. It's a multi-year relationship with a disbursement schedule, milestone obligations, interim and final reporting requirements, compliance checkpoints, and impact measurement at the end of it all. Every stage you don't have a workflow for becomes a manual lift — chasing down financial reports over email, tracking tranched disbursements in spreadsheets, scheduling site visits in a calendar that nobody else can see, collecting narrative reports as PDFs that never get aggregated into anything.
When the board asks what last year's portfolio accomplished, the answer becomes vague. "We awarded $4.2 million across 38 grants." That's an output, not an outcome. The story your grant program could tell — the projects completed, the people served, the community outcomes, the grantee voices, the portfolio-level impact through-line — sits in fragments across email threads and one-off folders, never quite assembled into the report your funders, board, and stakeholders are actually asking for.
The other quiet failure: reporting burden on grantees. Funders ask for the same data in different formats across overlapping reports, often pulling exhausted nonprofit staff away from the work they were funded to do. Done well, post-award workflows reduce that burden. Done badly, they add to it.
This is the session that closes the loop — and the one most platforms can't help you with:
Walk away with: a post-award workflow checklist covering every stage from grant agreement through final impact reporting; a clear framework for what data to capture when (and why timing matters); a live look at how Reviewr handles disbursement tracking, milestone reporting, and portfolio-level impact measurement; and a reusable structure for the annual portfolio report your board, donors, and funders actually want to read.