As I’ve talked about in a previous blog post, a primary goal that drives our work at SDS is to achieve a 90%+ selection process accuracy rate. In other words, we want more than 90% of the people who are hired through our tools to be considered “good” hires.

In order to determine whether we have met that goal, we need information about how well these people are performing on the job. However, collecting information about people’s on-the-job performance can get tricky. It’s not always easy to determine with confidence who is a “good” hire and who is not. There are several questions we ask ourselves that help us trust our performance evaluation data:

  • What types of information do we use to determine whether or not someone was a “good” hire?
  • Whose input do we consider to be the most useful?
  • How is this information collected?

The answers to these questions are critical when determining whether our selection process is resulting in “good” hires. In this post, I’m going to focus on some of the most important factors we consider when collecting performance ratings.


When supervisors are asked to evaluate their people, the context in which the ratings are gathered can dramatically influence their accuracy. For example, a supervisor who knows the evaluations will be used to determine pay and promotion may rate someone differently than if the same evaluations were being used to identify future training needs. In either case, motivations that have little to do with making precise, accurate evaluations can affect how a supervisor rates their subordinates. These motivations may include maintaining the morale of the supervisor’s team, motivating people to improve in the future, or ensuring subordinates receive pay increases.

Because of these motivations, we prefer not to use ratings obtained from an organization’s performance appraisal system. Instead, we collect our own ratings with a survey that is ONLY used for evaluating the selection process. We let evaluators know that their ratings of each individual will not be seen within the organization and will be used for research purposes only, which ultimately leads to continuous improvement in their selection process. This allows supervisors to focus on making accurate evaluations of their people without worrying about the implications of those ratings.


For many jobs, people are evaluated, at least in part, by objective criteria. For example, a production worker in an assembly plant may be evaluated by the number of products assembled; a police officer by the number of tickets written; or a car salesman by the number of cars sold. On the surface, these criteria seem very reasonable. However, as Borman points out, there are several reasons why objective criteria often do not accurately reflect someone’s true performance on the job.

First, objective criteria may reflect only a small part of one’s job (e.g., the number of tickets written by a police officer). Second, when using objective criteria, someone’s performance often depends on factors outside his or her control (e.g., a production worker depends on many others when assembling products). Finally, the numbers obtained from objective criteria can be difficult to interpret. For example, a car salesman at one store might sell the same number of cars as a salesman at another store, but the markets for cars at those stores could be very different.

Ultimately, when using objective criteria as a performance measure, you need to diligently research how these measures are obtained to ensure the information truly reflects the person’s performance.
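To illustrate why raw counts can mislead, here is a minimal sketch with hypothetical numbers (the stores, counts, and the idea of normalizing by market opportunity are illustrative assumptions, not a method from this post). Two salespeople sell the same number of cars, but once local market size is accounted for, their adjusted performance differs considerably:

```python
# Hypothetical illustration: identical raw sales counts can hide very
# different levels of performance once market context is considered.

salespeople = {
    "Store A": {"cars_sold": 20, "market_opportunities": 200},  # large market
    "Store B": {"cars_sold": 20, "market_opportunities": 50},   # small market
}

# Normalize raw counts by local opportunity to make them comparable.
conversion_rates = {
    store: stats["cars_sold"] / stats["market_opportunities"]
    for store, stats in salespeople.items()
}

for store, rate in conversion_rates.items():
    print(f"{store}: {salespeople[store]['cars_sold']} cars sold, "
          f"{rate:.0%} of market opportunities converted")
```

The same raw count of 20 cars translates to a 10% conversion rate in the large market and 40% in the small one, which is the kind of context a diligent review of the measure would surface.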


When collecting performance ratings, the supervisor is often considered the person best positioned to make these evaluations. However, there is often value in collecting ratings from sources beyond the supervisor. Because many jobs are complex and involve working with people at all levels of an organization, ratings from peers, subordinates, customers, and other sources may capture information about someone’s performance that supervisor-only evaluations miss. Collecting ratings from multiple perspectives yields a more comprehensive portrayal of employee performance.
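One common way to combine multiple perspectives is a weighted composite. The sketch below is purely illustrative: the rating values and weights are assumptions, and nothing in this post prescribes particular weights.

```python
# Hypothetical sketch: combining performance ratings from multiple
# sources into a single composite score. All numbers are illustrative.

ratings = {
    "supervisor":   4.2,
    "peers":        3.8,
    "subordinates": 4.0,
    "customers":    4.5,
}

# Illustrative weights (must sum to 1.0); a supervisor's view is often
# weighted most heavily, but the right weights depend on the job.
weights = {
    "supervisor":   0.40,
    "peers":        0.25,
    "subordinates": 0.20,
    "customers":    0.15,
}

composite = sum(ratings[source] * weights[source] for source in ratings)
print(f"Composite performance rating: {composite:.2f}")
```

Whatever the weighting scheme, the point stands: each source sees a different slice of the job, and a composite reflects performance more completely than any single rater.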

Clearly, there are many issues to consider when collecting performance information. While the purpose of this article was to introduce some of the most common issues we wrestle with, we realize that each organization and each job has its own factors to consider. There is no perfect measure of how someone is performing, which in turn can make it difficult to determine who is a “good” hire. What we do know, through experience, is that following best practices for collecting performance ratings can greatly enhance the accuracy of this information.