Alumni awards programs exist at nearly every institution, yet many operate the same way they have for decades. Applications are collected in spreadsheets, reviewed over email chains, and announced without ever measuring whether the effort moved the needle on alumni engagement. The result is a program that consumes significant staff and volunteer time while leaving its strategic value on the table.
The University of Tennessee Knoxville took a different approach. Over the past several years, Jordan Brantley-Prewitt and the UT Alumni Relations team have reimagined their awards programs as strategic engagement engines—designed not just to celebrate alumni accomplishment, but to enrich alumni data, activate younger demographics, and measurably increase engagement scores across the alumni lifecycle.
This article breaks down the full picture: the strategic design behind UT’s award categories, the workflows that power them, the transformation from manual processes to a centralized platform, and the measurable results they’ve achieved—with practical takeaways for alumni professionals looking to build or elevate their own recognition programs.
One of the most common questions across the alumni relations space is: how do we use our awards programs to drive and measure alumni engagement? UT has become the benchmark.
UT runs two distinct award programs, each serving a different strategic purpose within the broader alumni engagement framework.
The first is the UT Knoxville Alumni Awards, which includes four categories: Distinguished Alumni, Professional Achievement, Alumni Service, and Alumni Promise. These categories have been well established within the university system for years. More recently, they’ve been enhanced to be more comprehensive and strategically aligned with engagement goals.
The four categories are designed to ensure that alumni at every stage of their career and relationship with the university can be recognized—from emerging leaders (Promise) to those who’ve achieved exceptional career distinction (Distinguished Alumni). The Alumni Awards use a nominator-completed process, where a peer or colleague fills out the full nomination on behalf of the nominee.
The second is the Volunteer 40 Under 40, a standalone program that recognizes rising alumni leaders under 40 who are giving back through volunteer service and community engagement. This program was created after UT benchmarked aspirational institutions including UCLA, LSU, the University of Florida, and Penn State.
The population targeted by the Alumni Awards is fundamentally different from the 40 Under 40: well-known, established alumni, many with significant gift capacity that hasn’t been fully realized.
The separation from the main Alumni Awards is intentional. The 40 Under 40 was born from a clear data insight: UT’s youngest and most recent alumni were the least engaged demographic. The program was designed specifically to re-engage that population.
“For 40 Under 40, we found that our youngest, our new and recent alums were the least engaged of our alumni demographic. Everybody likes to be recognized. Everybody loves an award and everybody likes to get a slap on the back for the work that they’ve done. A lot of those folks are people who were not really on our radar.”
— Jordan Brantley-Prewitt, University of Tennessee
Unlike the Alumni Awards, the 40 Under 40 is a self-nomination program. Applicants complete their entire application themselves. There’s a key psychological insight behind this design: people are much more willing—and more thorough—when talking about themselves than when filling out information on behalf of someone else.
Perhaps the most strategic element of UT’s program design is how the two awards connect. The 40 Under 40 functions as a feeder into the broader Alumni Awards program, creating a multi-year recognition trajectory that keeps alumni engaged over time. Alumni recognized in their early 30s through the 40 Under 40 often go on to receive Alumni Awards several years later.
“The Volunteer 40 Under 40 program, we utilize that as a launch and a catalyst into our Alumni Awards program. You see a lot of repeat award winners that will be recognized maybe in their early 30s, mid-30s in that Volunteer 40 Under 40 program, and then a couple of years later, they’re getting recognized as an Alumni Award recipient. So we use that as keeping them in the pipeline and trying to move them forward in recognition.”
— Jordan Brantley-Prewitt, University of Tennessee
UT learned an important lesson in their first year. The inaugural 40 Under 40 cohort was asked to commit to a scholarship endowment as part of their award. The response was overwhelmingly negative.
“The first year that we launched 40 Under 40, we started with a scholarship endowment that we asked each award winner to commit to, and it just wasn’t well received. They didn’t like the fact that they were being acknowledged for an award and then being turned around almost immediately and asked for a gift.”
— Jordan Brantley-Prewitt, University of Tennessee
UT pivoted. They learned that young alumni invest differently, giving their time and talent rather than their treasure. The team refocused on connecting winners back to their individual colleges, letting development officers and college contacts maintain the relationship. The end goal of the 40 Under 40 became bringing young alumni back into the pipeline and then handing them off to college units for ongoing relationship management.
This is a critical lesson for any institution designing a young alumni award: don’t conflate recognition with solicitation. The giving often follows naturally, but it comes through time, board service, and deepening engagement—not through an immediate ask.
UT also runs a third program called Rocky Top Business Awards, which recognizes the fastest-growing alumni businesses. All university-wide awards are managed through the central Alumni Relations office, but several individual colleges—including the Haslam College of Business and the College of Arts and Sciences—also run their own college-specific recognition programs.
The interplay between centralized and college-level programs creates a multiplier effect. Award winners identified through the central office are shared with colleges, who then incorporate them into their own pipelines.
“A lot of those people are identified from the colleges after they win something from our office. They will come in and say, oh, we didn’t know that Kyle won 40 Under 40 from the alumni office. He wasn’t even on our radar for this Haslam College of Business Award. So then they’ll take that person and put them in their pipeline.”
— Jordan Brantley-Prewitt, University of Tennessee
Behind UT’s award programs is a set of deliberate workflow choices that maximize both participation and data quality. These aren’t accidental—each element was refined through years of iteration, informed by applicant feedback, volunteer experience, and measurable outcomes.
For the 40 Under 40, UT uses a dual workflow where a nominator submits a brief nomination, and the nominee is then automatically notified to complete their own full application. This is one of the most impactful workflow decisions UT has made, and it serves multiple purposes simultaneously.
First, it lowers the barrier for nominators. They no longer need to know every detail about the person they’re nominating; they simply start the process, and the nominee owns the rest. In traditional peer nomination models, a submission is only as strong as what the nominator happens to know, so self-completed applications will always be stronger, because applicants know the most about themselves. The dual workflow eliminates that disparity.
Second, it creates a powerful engagement touchpoint. The moment a nomination is submitted, the nominee receives an automated notification. Even before they complete a single field, they’ve been touched by the alumni office in a meaningful way.
“The way our nominee portion is triggered is that if I go in and I’m nominating Kyle to apply for 40 Under 40, he’s going to get that email notification that says, hey Kyle, you were nominated for this award—which then obviously eyebrows go up a little bit. Oh my gosh, someone thinks enough of me to nominate me for this.”
— Jordan Brantley-Prewitt, University of Tennessee
Third—and this is where the strategic value becomes enormous—it transforms the awards process into a data enrichment engine. Nominees who begin their application immediately provide updated personal and professional information that flows directly back to UT’s CRM. They don’t even have to complete the full application for UT to benefit.
“They don’t have to submit a nomination for us to get the data. Anytime someone goes and starts an application, they are now triggered in our system as they have engaged with an email. And then we are able to pull their engagement from the application and update all of their alumni information in our system.”
— Jordan Brantley-Prewitt, University of Tennessee
Reviewr’s data across 50+ alumni associations shows a consistent pattern: even the act of being nominated—regardless of whether the nominee wins—has a measurable positive impact on long-term engagement and on participation in other alumni initiatives.
The awards process also dramatically improves data quality compared to traditional outreach. Getting alumni to update their information through standard email campaigns is notoriously difficult—low open rates, low click rates, and minimal response. The awards process flips that dynamic entirely.
“Getting alumni to update their information is very difficult. People opening just standard emails from your alumni office may not be as great of a click rate, much less a reading rate of actually getting to the end of the email. But people are much more willing to share about themselves in this type of environment. They’re going to maximize the amount of information they’re putting in there because they want to win this award.”
— Jordan Brantley-Prewitt, University of Tennessee
The data being updated is meaningful: job titles, current employers, addresses, and social media handles. For young alumni especially, this is often the first time their records have been updated since graduation.
“We see a lot of updated information, especially in relation to job titles. A lot of them haven’t updated their information since they told us what that first job was that they landed post-commencement, but haven’t gone back in to share any updates since then. This is another great time where we’re able to maybe identify some people that we need to make a more concerted effort to communicate with based on the industry that they work in or whatever their job title is.”
— Jordan Brantley-Prewitt, University of Tennessee
UT is also collecting social media handles (Twitter/X, Instagram) through the application. Winners are tagged in recognition posts from the university-wide and college-specific social accounts, creating reshare opportunities and further extending the engagement touchpoint.
Rather than wasting applicant and staff time on ineligible submissions, UT’s workflow includes automated eligibility checks at the point of entry. For the 40 Under 40, the criteria are straightforward (age requirement), but the concept extends to more complex rules. Reviewr can cross-reference eligibility requirements in real time and redirect ineligible applicants—even suggesting alternative programs they may qualify for. This keeps the applicant pool clean and protects the review committee’s time.
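As a sketch of what such a gate might look like (field names and the routing target here are hypothetical; in practice Reviewr handles eligibility through point-and-click configuration, not custom code):

```python
from datetime import date

AWARD_AGE_LIMIT = 40  # 40 Under 40 requirement


def is_eligible(birth_date: date, ceremony_date: date) -> bool:
    """Return True if the nominee is still under 40 as of the ceremony date."""
    age = ceremony_date.year - birth_date.year - (
        (ceremony_date.month, ceremony_date.day) < (birth_date.month, birth_date.day)
    )
    return age < AWARD_AGE_LIMIT


def route_applicant(birth_date: date, ceremony_date: date) -> str:
    """Redirect ineligible applicants to an alternative program instead of
    simply rejecting them, keeping the engagement opportunity alive."""
    if is_eligible(birth_date, ceremony_date):
        return "40 Under 40"
    return "Alumni Awards"  # hypothetical fallback: a program with no age cap
```

The key design choice is the redirect: an ineligible applicant is still an engaged alum, so the workflow points them somewhere useful rather than dead-ending.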
For the Alumni Awards program, which has four distinct categories, incoming nominations are automatically sorted into the appropriate award category based on the nominator’s selection. This eliminates manual categorization work for staff and creates a natural framework for assigning entries to dedicated review teams—each committee focuses on a single category rather than evaluating the entire pool.
UT takes the integrity of their review process seriously. All volunteer reviewers are required to complete a bias training before they begin evaluating candidates.
“I created a bias training that all of our volunteers are required to go through before they can evaluate candidates. Just kind of going over implicit and explicit bias and understanding what biases may come into play when reviewing applicants. They also are required to sit down with our directors of the programs and they will go through and share with them what the highest-level impact things they are looking for from candidates.”
— Jordan Brantley-Prewitt, University of Tennessee
Demographic information such as gender, age, race, and ethnicity is often redacted from the materials reviewers see, ensuring that scoring focuses on merit and accomplishment. Different levels of information access are given to different reviewer groups—UT’s Alumni Board of Directors sees different data than the Young Alumni Council members or Special Interest and Diversity Council members who review the 40 Under 40.
The 40 Under 40 uses a two-round review process. An initial review committee cuts the applicant pool roughly in half, and a second group narrows the field to the final 40. This structure reduces individual reviewer burden while creating a more rigorous selection.
UT also applies an internal equity lens after scoring is complete. If the top 40 are heavily concentrated in one or two colleges (Business and Engineering tend to dominate), the team identifies the highest-scoring applicant from underrepresented colleges—like Nursing or Social Work—to ensure broad representation. This isn’t visible to reviewers; it’s handled internally as a final calibration step.
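A minimal sketch of what that final calibration step could look like (hypothetical; UT handles this internally and has not published the exact mechanics): promote the highest-scoring applicant from any unrepresented college, displacing the lowest scorer from a college that already has multiple winners.

```python
def calibrate_top_n(leaderboard, n=40, must_include=()):
    """Adjust the top-n list so each college in must_include has at least one winner.

    leaderboard: list of (name, college, score) tuples, sorted descending by score.
    """
    winners = list(leaderboard[:n])
    rest = list(leaderboard[n:])
    for college in must_include:
        if any(w[1] == college for w in winners):
            continue  # already represented
        promoted = next((r for r in rest if r[1] == college), None)
        if promoted is None:
            continue  # no applicants from that college this cycle
        # Count winners per college, then drop the lowest-scoring winner
        # from a college that already has more than one.
        counts = {}
        for w in winners:
            counts[w[1]] = counts.get(w[1], 0) + 1
        for i in range(len(winners) - 1, -1, -1):
            if counts[winners[i][1]] > 1:
                winners.pop(i)
                break
        winners.append(promoted)
    return winners
```

Because this runs after scoring closes, reviewers never see it; it only nudges the margins of an otherwise merit-ranked list.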
Rather than subjective yes/no decisions, UT uses rubric-based scoring aligned with their program goals. One effective approach is emotional response-style scoring—where reviewers react to questions on a simple scale rather than trying to justify the difference between a 16 and a 17 on a 20-point scale. The system aggregates scores into leaderboards that the team can use for deliberation meetings or to advance candidates to the next phase.
UT also changed the format of how applicants submit their information based directly on reviewer feedback. Long paragraph-form responses were slowing reviewers down, so the team shifted to a bullet-point format.
“We actually changed the structure of how the applicant is submitting their information because when they write in long paragraph form, it gets very hard to just see the quick, give me the quick and dirty of what they’re trying to share. That was feedback that we got from our reviewers who just said, it’s taking way too long for us to review these paragraphs. Can you have them break it down? And that was a good shift for us as well.”
— Jordan Brantley-Prewitt, University of Tennessee
Beyond these core workflows, there are several advanced capabilities that mature awards programs can leverage:
Score Normalization:
When reviewers only evaluate a subset of applicants (rather than the full pool), individual scoring tendencies can skew results. A consistently tough grader penalizes everyone assigned to them. Reviewr’s normalization algorithms benchmark each reviewer against their own personal average, ensuring that a “10” from a tough grader and a “40” from a lenient grader are compared on equal footing.
Judge Queuing & Workload Balancing:
Reviewr data shows that reviewer energy and scoring consistency decline dramatically after reviewing 30–50 applications. Distributing entries evenly and keeping workloads manageable is essential for fair outcomes.
Randomized Review Order:
Randomizing the order in which reviewers see applicants helps eliminate position bias—where the first and last entries reviewed are scored on different internal scales.
Application Summaries:
A brand-new Reviewr capability takes the nominee’s submission, supporting documents, resumes, and references and generates concise summaries aligned with the program’s scorecard criteria. This can reduce a 15-minute review to five minutes per applicant and helps with early-stage vetting of large applicant pools.
Structured References:
Rather than uploading traditional reference letters, references are invited to complete structured templates directly in a dedicated portal. This lowers the barrier, improves consistency, and gives the administering team visibility into reference progress.
Data Redaction:
Removing identifying information prevents bias and ensures that evaluation is based purely on merit and content quality.
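The normalization capability above can be sketched as re-centering each reviewer’s raw scores on a shared mean (a simplified illustration; Reviewr’s actual algorithm is not published and may be more sophisticated):

```python
from statistics import mean


def normalize_scores(scores_by_reviewer, target_mean=50):
    """Re-center each reviewer's raw scores on a shared mean so tough and
    lenient graders become comparable.

    scores_by_reviewer: {reviewer: {applicant: raw_score}}
    Returns {applicant: [normalized scores from each reviewer who saw them]}.
    """
    normalized = {}
    for reviewer, scores in scores_by_reviewer.items():
        # Offset every score by the gap between the shared target
        # and this reviewer's personal average.
        offset = target_mean - mean(scores.values())
        for applicant, raw in scores.items():
            normalized.setdefault(applicant, []).append(raw + offset)
    return normalized
```

With this, a tough grader averaging 15 and a lenient grader averaging 50 both contribute scores on the same footing, matching the “10 vs. 40” example above.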
One of the most striking aspects of UT’s approach is their commitment to ongoing refinement. They’ve adjusted reviewer assignments (after a disastrous first year where every reviewer evaluated all 500 applicants), changed application formats, refined communication strategies, and restructured timelines—all based on data and stakeholder feedback.
“I don’t think there’s a single year where we’ve come back and said, copy and paste it, don’t change a thing. We’ve always come in and said, oh, we got this feedback from applicants, let’s tweak this. It’s very customizable to what the user needs and to what the alumni team needs, and that’s what’s been most beneficial for us.”
— Jordan Brantley-Prewitt, University of Tennessee
The dual nominator → nominee workflow simultaneously increases participation, improves data quality, and creates an engagement touchpoint.
Every nominee interaction—even an incomplete application—generates valuable data for your CRM.
Invest in your review process: bias training, data redaction, structured scoring, and manageable workloads protect program integrity.
Collect social media handles and use them for recognition posts that extend engagement beyond the ceremony.
Adjust application formats based on reviewer feedback—bullet-point responses over long paragraphs.
Iterate every year. The best programs are never finished.
UT ran their first year of the Volunteer 40 Under 40 without Reviewr. The experience was eye-opening enough to convince them they needed a fundamentally different approach.
Before Reviewr, UT’s awards process was managed through a combination of Google Drive, Excel spreadsheets, and manual email communication. Application data was siloed, difficult to access, and impossible to report on in real time. The review process required every single reviewer to evaluate every single applicant—with nearly 500 submissions in the first year, that was an unsustainable burden.
Reference letters and supporting documents had to be manually compiled into packets for reviewers. Score aggregation happened in spreadsheets. Communication with applicants—reminders, status updates, results—was all manual. The administrative burden consumed time that should have been spent on relationship building and strategic program management.
After discovering Reviewr following their first year, UT implemented the platform and has never looked back. The transformation touched every aspect of the program.
"This has saved us so much time. It is wild. Going from how we used to do this in a Google Drive to collecting information and putting everything in Excel spreadsheets and having that data that’s just so easily accessible—it has hands-down saved us infinite amounts of hours to work in this way."
— Jordan Brantley-Prewitt, University of Tennessee
With Reviewr, the entire awards operation is centralized. Nominations, nominee applications, reference materials, reviewer assignments, scoring, communication, and reporting all happen in a single platform. Real-time dashboards give the team instant visibility into application status, completion rates, reviewer progress, and deadline tracking without manually monitoring anything.
Tasks that previously consumed days of manual work—compiling application packets, sending individual reminders, distributing materials to reviewers, aggregating scores—now happen automatically. The team can see at a glance who has started but not completed an application, which references are outstanding, and how reviewers are progressing through their assignments.
Reviewr’s profile system aggregates each nominee’s full history—submissions, file uploads, communication records, nominator information—into a single view. This is especially valuable for programs like the 40 Under 40, where an applicant who wasn’t selected in their first year might reapply and reviewers can see their growth trajectory over time.
The streamlined digital process significantly improved the experience for every participant. Automated notifications keep nominators and nominees informed throughout the process. The lower barrier in the dual workflow model has increased participation. And critically, every interaction is tracked and communicated.
UT communicates with every participant regardless of outcome. Winners receive celebration communications. Non-winners receive outreach about other ways to engage with the university—and are encouraged to reapply, with the knowledge that multi-year applicants often receive extra attention from reviewers.
“Even people who are not winning—when they will reach out sometimes and say, I’m curious why I wasn’t selected—often it’s because they have not remained engaged with the university post-commencement. And so it’s a great opportunity for us to say, this was an area of your score that was just maybe not as high as some of our other applicants. Here are some ways that you can change that. And we see a lot of people take that second option.”
— Jordan Brantley-Prewitt, University of Tennessee
Repeat applicants also benefit from historical visibility. Reviewers can see that someone has applied multiple times and track their growth, which often works in the applicant’s favor.
“We reiterate to them that you have a better chance of winning the more times that you apply. Our reviewers can see that, and maybe they’re giving a little bit of extra attention to someone who’s applied three times. They can go back and say, where were they at in 2023, 2024, 2025? Wow, they’ve had a lot of growth. They’re probably ready to receive this award.”
— Jordan Brantley-Prewitt, University of Tennessee
The reviewer experience saw one of the most dramatic improvements. By structuring committee assignments so reviewers evaluate a manageable subset of applicants rather than the full pool, and by embedding all application materials directly in the review interface, UT eliminated the download-spreadsheet-review-upload cycle that previously consumed hours of volunteer time.
Reviewers can score applicants side-by-side with their full application profile—submission form, nominations, supporting documents, and references—all viewable without leaving the platform. Files are embedded directly rather than requiring downloads. The team can monitor scoring patterns in real time to identify outlier scores or inconsistencies.
There’s a data point that often goes unappreciated: the average hourly value of volunteer time is approximately $34. When volunteers are asked to review large numbers of submissions through inefficient processes, institutions are spending $600–$800 per volunteer (roughly 18 to 24 hours of donated time at that rate) just on the judging process. Streamlining the review experience isn’t just about satisfaction—it’s about stewarding volunteer resources responsibly.
UT developed a sophisticated follow-up strategy to maximize application completion rates. Starting two to three months before the deadline, applicants who have started but not completed their applications receive personalized emails every two weeks. In the final month, that cadence increases to weekly.
“We will often pull those email addresses, transition them into our formal email account from the Office of Alumni Relations and send them that. We see a lot of people go back into Reviewr. When it comes from our office, if they are authenticated through our CRM, it’s going to hit their inbox, and we know that because we can see their click rates and their open rates.”
— Jordan Brantley-Prewitt, University of Tennessee
The team uses both Reviewr’s built-in automated reminders and their own office-branded email communications—finding that emails from the official Office of Alumni Relations have higher delivery and open rates due to CRM authentication. For the Alumni Awards, it’s specifically the supporting documents and letters of recommendation that lag behind, not the core application information.
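That cadence (biweekly reminders until the final month, then weekly) can be sketched as a simple schedule generator. The ten-week lead time below is an assumption within UT’s stated two-to-three-month window; both Reviewr’s built-in reminders and UT’s office emails would use their own tooling in practice:

```python
from datetime import date, timedelta


def reminder_schedule(deadline: date, start_weeks_out: int = 10):
    """Generate reminder dates for incomplete applications: every two weeks
    until the final month before the deadline, then weekly."""
    dates = []
    current = deadline - timedelta(weeks=start_weeks_out)
    final_month = deadline - timedelta(weeks=4)
    while current < deadline:
        dates.append(current)
        # Tighten the cadence to weekly once inside the final month.
        step = 7 if current >= final_month else 14
        current += timedelta(days=step)
    return dates
```

Each generated date would drive a personalized email to applicants who have started but not submitted.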
“When we found Reviewr, we implemented that into our second year. And so from there forward, we’ve just never gone back. We’ve never changed from there.”
— Jordan Brantley-Prewitt, University of Tennessee
Centralizing the entire awards process in one platform eliminates fragmented tools and manual coordination.
Real-time dashboards replace manual tracking of application status, reviewer progress, and reference completion.
Embedded document viewing and side-by-side scoring dramatically reduce volunteer time on task.
Automated and manual reminders working together significantly boost application completion rates.
Historical applicant profiles enable multi-year tracking and give returning applicants a better chance.
Communicating with non-winners about alternative engagement opportunities turns rejection into re-engagement.
The strategic core of UT’s approach—and the reason this program has drawn attention across the alumni relations community—is its measurable impact on alumni engagement scores. This isn’t a program that exists in isolation. Every element feeds back into UT’s broader engagement framework.
UT’s alumni engagement scoring model factors in multiple dimensions, including email engagement and direct engagement with the alumni office. The awards process contributes to both—and it does so across the entire participant population, not just winners.
When a nominee interacts with an awards application—opening the notification email, starting their application, completing fields—each touchpoint registers as engagement in the CRM, boosting their email engagement score. When they participate further—attending events, communicating with the office, winning an award—that direct engagement elevates their score even more.
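As a toy illustration of how discrete touchpoints might roll up into a score (the point values here are invented; UT’s real model is dictated by many factors across their foundation):

```python
# Hypothetical point values; a real engagement model weighs many more factors.
TOUCHPOINT_POINTS = {
    "email_opened": 1,
    "application_started": 3,
    "application_submitted": 5,
    "event_attended": 8,
    "award_won": 15,
}


def engagement_score(touchpoints):
    """Sum points for each recorded interaction with the alumni office;
    unrecognized touchpoint types contribute nothing."""
    return sum(TOUCHPOINT_POINTS.get(t, 0) for t in touchpoints)
```

Note that even a nominee who only opens the notification email and starts an application moves up the scale—which is exactly the dynamic UT describes.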
“It has definitely improved our alumni engagement, especially from a data portion. We have seen a greater impact on having accurate and up-to-date data, but also moving our alumni through their engagement scoring process. Our engagement scores for alumni are dictated on many different factors within our foundation, but this moves a lot of people forward in that engagement process.”
— Jordan Brantley-Prewitt, University of Tennessee
One of the clearest downstream metrics is email engagement. Award participants—winners especially, but also nominees—become significantly more likely to open and engage with subsequent emails from the alumni office.
“They’re more inclined to open emails that come from our office after receiving an award, because they’ve been re-engaged to the university, but because they don’t want to miss another opportunity, or maybe they’re looking for an opportunity to nominate someone the year after that they won. So we see a lot of engagement on that side for sure.”
— Jordan Brantley-Prewitt, University of Tennessee
UT doesn’t stop at the engagement score. The team runs longitudinal data reports to track the post-award trajectory of recipients—comparing gift capacity and giving amounts before and after winning an award, measuring continued engagement one, two, and three years post-recognition, and identifying patterns that inform future program design.
“Our CRM tracks awards that people are receiving from our office, and so we can say, let’s pull this report on Kyle. We saw that he won 40 Under 40 in 2023. Prior to that, his gift capacity and his amount was this, and now 2025, it’s this. And we’re able to take that data, break it down, and share that story of how much more engagement we’re seeing from these award winners one year post-winning, two years post-winning, three years post-winning.”
— Jordan Brantley-Prewitt, University of Tennessee
This longitudinal data is what transforms awards from a “feel-good” initiative into a demonstrable ROI program. When a team can show leadership that award recipients give more, volunteer more, and engage more in the years following recognition, it justifies continued—and expanded—investment in the programs.
UT also uses award data to proactively grow their pipeline. When the Alumni Awards program needs more nominations, they pull the historical list of 40 Under 40 winners and target that group specifically.
“When we’re struggling to maybe see some applicants for our Alumni Awards program, we’ll go pull all the winners from 40 Under 40 from the time we created the program and say, let’s target that group specifically and say, are you interested in applying for our Alumni Awards program? It has allowed us to open up a lot of different doors for the alumni engagement piece.”
— Jordan Brantley-Prewitt, University of Tennessee
Award winners don’t just receive recognition and disappear. UT has built a deliberate system for extending the impact of each award.
College and unit distribution:
Winner information is shared with individual colleges and development officers, who incorporate those alumni into their own engagement pipelines. This often surfaces alumni who weren’t previously on a college’s radar.
Social media amplification:
Winners receive a complete social media toolkit—headshots, branded frames, pre-written copy—to share their recognition on personal platforms. They’re tagged in posts from the university and college accounts for reshare opportunities.
Event integration:
Award recognition is tied to major campus events—Alumni Award recipients are recognized during football season, 40 Under 40 recipients during basketball and baseball seasons—maximizing visibility and creating on-campus experiences for winners.
Cross-pollination:
Winners are often subsequently identified for college-specific awards, board positions, and advisory roles—creating a cascading cycle of recognition and engagement.
One emerging best practice gaining traction across alumni associations is keeping nominations open year-round rather than limiting them to a fixed application window. UT opens their application and nomination process shortly following each award ceremony—when momentum and visibility are highest.
An even more aggressive approach is a fully always-on model: a 24/7 open call for nominations with dedicated marketing pushes and review cycles at set points throughout the year. This prevents losing potential nominees who are inspired to participate at moments when applications are technically closed, and it keeps the data enrichment engine running continuously.
Awards participation—nominating, being nominated, applying, winning—all contribute to alumni engagement scores.
Track post-award giving, volunteering, and engagement longitudinally to demonstrate program ROI.
Use past award winners as a targeted pipeline for future programs and higher-tier recognition.
Distribute winner information to colleges and development officers to multiply the engagement impact.
Provide social media toolkits to winners for organic amplification of recognition.
Consider keeping nominations open year-round to capture engagement momentum at any time.
Award recipients become more responsive to future email communications—plan your outreach accordingly.
If you’re building an alumni awards program from scratch, there are three non-negotiable foundations to get right:
College and unit buy-in:
“College and unit buy-in is crucial. We rely heavily on our development team and our units and colleges to nominate or share information with us. It is very much a you scratch our back, we’ll scratch yours scenario. And that takes a lot of collaboration and communication with those colleges and units to get their buy-in.”
— Jordan Brantley-Prewitt, University of Tennessee
Sustainable timelines:
Back-to-back program timelines that don’t give staff adequate time for each cycle are a recipe for burnout. UT strategically separates their programs by season—Alumni Awards in fall (coinciding with football), 40 Under 40 in spring (coinciding with basketball and baseball)—to manage workload and maximize visibility.
Invested volunteers:
“It has to be people who have a high investment of care and investment in your organization. When we identified people that were just kind of ho-hum volunteers, one foot in, one foot out, it slowed our entire process down. We train all of our volunteers, we log them in with Reviewr and share our screens and walk them through step by step. This is how you score an applicant. This is what you should be looking at.”
— Jordan Brantley-Prewitt, University of Tennessee
Being upfront about the time commitment is essential. Encouraging reviewers to evaluate applicants as they come in—rather than waiting for the full pool to build up—distributes the workload and prevents last-minute crunches.
“I promise it can be done, and I promise it's efficient. Once you get there, it moves and it benefits the organization greatly.”
— Jordan Brantley-Prewitt, University of Tennessee