Using Reviewr to set up an efficient and engaging online application review workflow
In this webinar we discuss the key elements within Reviewr used to set up the most efficient and engaging online application review workflows for scholarships, grants, recognition awards, and more.
In this webinar we discuss:
Common methods for assigning applicants to review committees
Blinding of data
Giving the review teams access to submissions
Collecting online evaluations and note taking
Team deliberations
Assigning applicants to review teams
You’ve launched your website, collected online applications, and now it’s time to review them – easy, right? Not so fast. How do you create a review workflow that is efficient for your review committee while also being fair, equitable, compliant, and engaging? How do you ensure that the review committee has all the content they need to review without overwhelming them? Let’s dive into the most common workflows for assigning applications to review teams.
Option 1 – Assigning all applications to all review members
Simple approach
Can assign all applicants to all reviewers in bulk
Can manually adjust if needed
Potentially overwhelming to committee if there is a large number of applications
Option 2 – “Bucket method”
This approach is based on the concept of review committee groups. Examples include:
Categories
Phases
Workflows
Locations
Review teams
This option allows you to segment applications into more manageable groupings.
Allows each group to have its own review team
Each group can also have a “manager”.
Option 3 – Hybrid approach
Leverage buckets with advanced pairing options
Randomization
Manual Pairings within buckets
Also works for progression-based reviews.
Each bucket is a “phase”
Each phase can use its own review team and evaluation criteria
Top ranked applicants in a bucket can advance to other buckets
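The hybrid approach above can be sketched in a few lines of Python. This is an illustrative sketch only, not Reviewr's implementation: the function name, bucket structure, and `reviews_per_app` parameter are all assumptions made for the example.

```python
import random

def assign_reviews(buckets, reviewers_by_bucket, reviews_per_app=3, seed=None):
    """Randomly pair each application with distinct reviewers
    drawn from its own bucket's review team (hypothetical sketch)."""
    rng = random.Random(seed)
    assignments = {}  # application -> list of assigned reviewers
    for bucket, applications in buckets.items():
        reviewers = reviewers_by_bucket[bucket]
        for app in applications:
            # Never ask for more reviewers than the bucket's team has.
            k = min(reviews_per_app, len(reviewers))
            assignments[app] = rng.sample(reviewers, k)
    return assignments

# Example: one "phase" bucket with its own review team.
buckets = {"Phase 1": ["app-101", "app-102", "app-103"]}
teams = {"Phase 1": ["reviewer-a", "reviewer-b", "reviewer-c", "reviewer-d"]}
pairings = assign_reviews(buckets, teams, reviews_per_app=2, seed=42)
```

Manual pairings would simply override entries in `assignments` after the random pass, which keeps the two mechanisms independent.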
Blinding Data
Blinding of applicant data from review teams has become increasingly popular with Reviewr customers. Oftentimes this is done for two reasons:
To ensure a fair, equitable, review process
Hiding info from review teams that does not impact their review process (not overwhelming them with irrelevant info).
What Data can be blinded?
Form data
Can hide specific form fields from review teams. Examples include:
Name
Address
Email
Demographic info
PII
Other form field data that can be hidden includes open-ended text fields. However, this type of blinding hides the entire text box, not just the PII within the text. If a customer wants to redact content written within a text field, they have the ability to edit the submission themselves.
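Conceptually, field-level blinding works like the sketch below: blinded fields are dropped entirely from the reviewer-facing copy of a submission. The field names and function are hypothetical, chosen to mirror the examples listed above.

```python
# Fields hidden from review teams (illustrative set, not Reviewr's schema).
BLINDED_FIELDS = {"name", "address", "email", "demographics"}

def blind_submission(submission, blinded_fields=BLINDED_FIELDS):
    """Return a reviewer-facing copy with blinded fields removed.
    Note: whole fields are hidden; PII *inside* a kept text field
    is not redacted, matching the limitation described above."""
    return {k: v for k, v in submission.items() if k not in blinded_fields}

submission = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "essay": "My project focuses on community health...",
    "budget": 5000,
}
reviewer_view = blind_submission(submission)
# reviewer_view keeps only the "essay" and "budget" fields
```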
Uploads
While content WITHIN an upload cannot be blinded (without downloading, blinding, and re-uploading), the actual files themselves can be hidden.
Files submitted externally, such as a guest attachment for a reference letter, are often hidden from the applicants as well.
User interface
Within Reviewr there are user interfaces and columns for things such as name, company, etc. These columns also need to be hidden from the review team’s view to ensure compliance in blinding.
Data within the evaluation
It is common practice for the actual review evaluation and/or notes to be shared back to applicants as a feedback sharing mechanism.
If this is desired, the name of the person that left the evaluation can be blinded.
The evaluation itself can have certain elements blinded, but not others. For example, share the notes but not the scores, or vice versa
Granting access and conducting reviews
Submissions have been assigned to review teams. Check. Data has been blinded. Check. Now it’s time to begin reviewing.
First, let’s ensure that the review team has been added, and assigned.
Review teams can be added individually or uploaded
Once added, the review team then gets assigned to a review bucket (see item #1 in this blog).
Once the review team is added, and assigned, it’s time to trigger credentials.
Invite them to register
Assign them a temp password
Once the review team logs in, they are presented with a “menu” of submissions to review.
Having an online review process is critical to ensuring an efficient and compliant workflow. Tools like Reviewr not only streamline operations but also eliminate a large percentage of human error when it comes to tabulating results and collecting evaluations.
Scorecards are a powerful tool to leverage math-based scoring for fairness.
Simple scorecards
Weighted scoring
Averages
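The scorecard math above can be sketched briefly: per-criterion scores are combined with weights into a reviewer total, and reviewer totals are averaged into a final score. The criteria names, weights, and 0–10 scale are illustrative assumptions, not Reviewr's actual configuration.

```python
# Hypothetical weighted criteria (weights sum to 1.0).
WEIGHTS = {"impact": 0.5, "feasibility": 0.3, "clarity": 0.2}

def weighted_score(scores, weights=WEIGHTS):
    """Weighted total for one reviewer's scorecard (0-10 per criterion)."""
    return sum(scores[c] * w for c, w in weights.items())

def average_score(scorecards, weights=WEIGHTS):
    """Average the weighted totals across all reviewers of one application."""
    totals = [weighted_score(s, weights) for s in scorecards]
    return sum(totals) / len(totals)

scorecards = [
    {"impact": 8, "feasibility": 6, "clarity": 9},  # reviewer 1 -> 7.6
    {"impact": 7, "feasibility": 8, "clarity": 8},  # reviewer 2 -> 7.5
]
final = average_score(scorecards)  # average of the two weighted totals
```

A simple (unweighted) scorecard is the special case where every criterion carries equal weight.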
Note taking
Log internal notes to other users as a way to “virtually deliberate”.
Leave external questions and comments directly to applicants if desired.
Deliberation
Sometimes scores do not tell the full story.
Leverage Reviewr’s advancement user interface to discuss total scores, average scores, and view the data as a team.
Let the scores and notes power internal discussions and team deliberations.