Scott Birkeland

About Scott Birkeland

Scott specializes in developing and implementing assessment programs to help organizations select and develop their workforce. With over 15 years of experience as an I/O Psychologist, Scott partners with clients to identify talent-related problems and design solutions to enhance the skill level in their organizations.

Sports Analytics Field Trip

By Scott Birkeland, Ph.D., Vice President of Stang Decision Systems

I recently attended the Midwest Sports Analytics Meeting in Pella, Iowa. Thanks to Russ Goodman and Central College for putting together fun and informative sessions. This conference not only gave me an excuse to hang out with my former college roommate (St. Thomas math professor Eric Rawdon), but it also allowed me to see, first-hand, some of the cutting-edge research that is taking place in the field of sports analytics.

Eric Rawdon and me at the Central College entrance (I’m on the left).

During the conference, I attended a variety of presentations. Some of the topics included measuring how teams deliver value to fans, an analysis of strike zone errors in MLB, ways to differentiate offensive explosiveness vs. offensive efficiency in college football, software that helps coaches create data-driven practice plans, and several talks that used NFL play-by-play information to analyze tactical decisions (e.g. win probability at various points in a game, fourth down decisions, field goal accuracy, etc.).

All very interesting and thought provoking topics (at least to me!).

As I reflect on the conference, one of my big takeaways relates to decision-making: why is it that those involved with sports teams often don't use the information available to them to make optimal decisions? An example that was frequently discussed during the conference is the debate in football about "going for it on fourth down." Historically, coaches have preferred to punt on fourth down, even though, in many situations, the data suggest that they shouldn't.

With all the analytics now available, it is surprising how often teams go against what the data say they should do. This is true for more than just "going for it on fourth down." It is true for drafting strategy, negotiating player contracts, preventing injuries, and developing optimal practice plans, to name a few.

Throughout the conference, I had several discussions with folks about why teams do not consistently use analytics to their advantage. A couple of reasons were repeatedly mentioned.

First, people who are coaching and/or managing teams often have a sports background, but not a math/statistics/analytical background. Because of this, they do not necessarily understand the methodology behind the numbers. And, given their role as a leader, they must be able to convince their team why they are taking the actions they do. If the leader of the team doesn’t understand the data or how it was generated, it becomes more difficult to inspire the team to act based on that information. Coaches tend to rely on what they know best, which often means doing what they’ve always done and not using analytics.

Second, many pointed out that because coaches' decisions are so closely scrutinized (especially in major professional sports), a non-traditional decision leaves them open to significant criticism from fans and the media, even if, from a data analytics perspective, it is the correct one. Therefore, it is often easier and more comforting to make decisions the way they have always been made rather than do something different. As one person put it, "it is hard to get overly criticized for doing something that everyone has been doing for the last fifty years."

From a coach's perspective, I can appreciate these reasons. At the same time, I also realize that it is important to utilize any advantage that you might have, even if it stretches your comfort zone. I believe that we all need to question the way things have been done (whether you are working for a sports team, a Fortune 100 company, or a mom-and-pop shop) to see if there is a different strategy that gives you a greater likelihood of succeeding.

At its core, that is what data analytics does. It allows end users to gain competitive insights. Oftentimes, these insights contradict conventional wisdom. In my view, this should be viewed as an opportunity rather than a threat!

December 1st, 2016 | News, Research | 1 Comment

How’s the fit? Is your selection process resulting in “good” hires?

As I’ve talked about in a previous blog post, a primary goal that drives our work at SDS is to achieve a 90%+ selection process accuracy rate. In other words, we want more than 90% of the people who are hired through our tools to be considered “good” hires.

To determine whether we have met that goal, we need information about how well these people are performing on the job. However, collecting information about people's on-the-job performance can get tricky. It's not always easy to determine with confidence who is a "good" hire and who is not. We ask ourselves several questions that help us trust our performance evaluation data:

  • What types of information do we use to determine whether or not someone was a “good” hire?
  • Whose input do we consider to be the most useful?
  • How is this information collected?

The answers to these questions are critical when determining whether our selection process is resulting in "good" hires. I'm going to focus this post on some of the most important factors we consider when collecting performance ratings.

CONTEXT MATTERS

When supervisors are asked to evaluate their people, the context in which the ratings are gathered can dramatically influence their accuracy. For example, a supervisor asked to make evaluations that will be used to determine pay and promotion may rate someone differently than if the evaluations were used to plan future training. In either case, motivations that have little to do with making precise, accurate evaluations can affect how a supervisor rates subordinates. These motivations may include maintaining the morale of the supervisor's team, motivating people to improve in the future, or ensuring that subordinates receive pay increases.

Because of these motivations, we prefer not to use ratings obtained within an organization's performance appraisal system. Instead, we collect our own ratings with a survey that is used ONLY to evaluate the selection process. We let evaluators know that their ratings of each individual will not be seen within the organization and will be used for research purposes only, which ultimately leads to continuous improvement in their selection process. This allows supervisors to focus on making accurate evaluations of their people without worrying about the implications of those ratings.

PROCEED WITH CAUTION WHEN USING OBJECTIVE CRITERIA

For many jobs, people are evaluated by objective criteria, at least in part. For example, a production worker in an assembly plant may be evaluated by number of products assembled; a police officer may be evaluated by number of tickets written; or a car salesman may be evaluated by number of cars sold. On the surface, these criteria seem very reasonable. However, as Borman points out, there are several reasons why objective criteria often do not accurately reflect someone’s true performance on the job.

First, objective criteria may only reflect a small part of one's job (e.g. the number of tickets written by a police officer). Second, when using objective criteria, someone's performance is often dependent on factors outside of his/her control (e.g. a production worker depends on many others when assembling products). Finally, the numbers obtained from objective criteria can be difficult to interpret. For example, a car salesman at one store might sell the same number of cars as a salesman at another store, but the market for cars at these stores could be very different.

Ultimately, when using objective criteria as a performance measure, you need to diligently research how these measures are obtained in order to ensure that the information truly reflects the person’s performance.

SUPERVISOR’S PERSPECTIVE DOESN’T ALWAYS TELL THE WHOLE STORY

When collecting performance ratings, the supervisor is often considered to be the person who is in the best position to make these evaluations. However, there is often value in collecting ratings from sources in addition to the supervisor. As many jobs are complex in nature and include working with people at all levels of an organization, ratings from sources such as peers, subordinates, customers, etc. might capture information about someone’s performance that isn’t reflected in supervisor-only evaluations. By collecting ratings from multiple perspectives, a more comprehensive portrayal of employee performance can be obtained.
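To make the multi-rater idea concrete, here is a minimal sketch of how ratings from several sources might be combined into one overall score. The sources, weights, and 1-to-5 rating scale are illustrative assumptions of mine, not SDS's actual methodology.

```python
# Hypothetical sketch: combining performance ratings from multiple sources.
# Each source's ratings are averaged first, then a weighted mean is taken
# across sources so one well-represented group doesn't dominate.

def combined_rating(ratings_by_source, weights):
    """Average each source's ratings, then take a weighted mean across sources."""
    total, weight_sum = 0.0, 0.0
    for source, ratings in ratings_by_source.items():
        if not ratings:
            continue  # skip sources with no ratings collected
        source_mean = sum(ratings) / len(ratings)
        w = weights.get(source, 1.0)
        total += w * source_mean
        weight_sum += w
    return total / weight_sum

employee = {
    "supervisor": [4.0],        # one supervisor rating
    "peers": [3.5, 4.5, 4.0],   # several peer ratings
    "customers": [5.0, 4.0],    # customer feedback ratings
}
weights = {"supervisor": 0.5, "peers": 0.3, "customers": 0.2}
print(round(combined_rating(employee, weights), 2))
```

Averaging within each source before weighting across sources keeps the single supervisor rating from being swamped by the larger number of peer and customer ratings.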

Clearly, there are many issues to consider when collecting performance information. While the purpose of this article was to introduce some of the most common issues that we often wrestle with, we realize that each organization and each job has its own factors that need to be considered. As such, we realize that there is no perfect measure of how someone is performing, which in turn can make it difficult to determine who is a “good” hire. What we do know, through experience, is that following best practices for collecting performance ratings can greatly enhance the accuracy of this information.

July 12th, 2016 | Research, Updates | 1 Comment

Is Experience Overrated?

Companies routinely use “prior work experience” as a significant factor when recruiting and hiring new employees. A look at any job posting board will no doubt show that most jobs require candidates to possess a minimum number of “years of experience” to even be considered for employment.

Recently, however, this practice has come under fire. Many are now suggesting that experience is overrated, and that companies place too much weight on experience when searching for job candidates.

These folks often point to the fact that many of today's most successful companies were started by individuals who had little to no experience at the time of start-up (e.g. Bill Gates, Steve Jobs, Mark Zuckerberg, to name a few). They also suggest that the advantages of not relying on "experience" when hiring include: (a) less-experienced hires bring diversity to teams, (b) experienced hires often need to unlearn bad habits they have acquired over the years, (c) less-experienced hires are less expensive, and (d) experience at one company doesn't always generalize to another.

At some level, I agree with these points. I would argue, however, that for most jobs, experience plays a role in who is likely to succeed. The problem is that by only looking at someone’s basic job history you are not likely to gain much insight into how that experience translates to on-the-job success. You need to dig a lot deeper than that.

JOB TENURE DOES NOT EQUAL JOB SKILLS

Just because someone has spent many years working in a job does not necessarily mean that that individual has acquired the skills and work habits you might expect. Consider two candidates applying for a maintenance mechanic job. Candidate A has seven years of experience working at a manufacturing plant. Candidate B has two years of experience working at a chemical processing plant. An initial review of these candidates' qualifications might well lead you to assume that Candidate A is more qualified than Candidate B, simply because Candidate A has more years of experience as a maintenance mechanic.

However, “years of experience” is often an inaccurate measure of relevant experience. Instead, you need to consider multiple factors when evaluating these candidates. Some of these factors might include:

Breadth of Experience

Number of years in a job does not always equate to breadth of experience. For example, let's assume that even though Candidate A has been a maintenance mechanic for seven years, he has spent nearly all of his time rebuilding valves that are specific to his manufacturing industry. Candidate B, on the other hand, has worked in all areas of his facility, gaining exposure to hundreds of pieces of equipment. Clearly, Candidate B has a wider breadth of experience than Candidate A.

Complexity Level of Experience

Certain tasks are much more difficult to learn than others. For example, as a maintenance mechanic, troubleshooting equipment is likely to be much more difficult than performing tasks that are the same or nearly the same every time (such as rebuilding a valve). A candidate who has experience performing complex tasks, or tasks performed in challenging situations (such as responding to an emergency), is likely to be more valuable to an organization.

Quality of Work

Finally, just because someone has substantial experience performing a job, does not mean that the person has performed that job at a level that your organization would expect. With our maintenance mechanic example, let’s assume that Candidate A, over the years, has learned “shortcuts” to doing his job that lead to poor quality of work. These shortcuts might allow him to get by on his current job, but would not be acceptable in a different job.

The bottom line is that when truly evaluating someone's experience, "years of experience" isn't what matters. What matters are the skills and work habits a person has acquired throughout his or her career, and assessing those requires a more thoughtful evaluation than simply asking for "years of experience."

April 11th, 2016 | Careers, Research | 0 Comments

Accuracy: Fundamental to Hiring Success, But Difficult to Achieve

Today I'd like to write about a conversation that we have time and time again with our clients, something that gets at the core of what we do but is often taken for granted: hiring process ACCURACY. As the foundation of our work, accuracy is fundamental, but not necessarily simple. Therefore, I'll address it in three separate blogs, covering:

1. HOW do we measure if an employee selection process is accurate?
2. WHAT constitutes a good accuracy rate?
3. WHAT can we do to help clients improve their selection process accuracy?

The main goal that drives our work is the need to ACCURATELY predict who will be successful on the job. If an employee selection process isn’t accurate, you might as well be flipping a coin to make these important decisions. Everyone knows that. However, what people often don’t think about is what I will address first: How do you measure employment process accuracy?

There are lots of ways to look at accuracy, and we often work with clients to conduct in-depth analyses. But, to start out, we like to keep it straightforward. We like to know, at a basic level, whether or not a new hire is considered to be a successful hire.

To do this, we ask the supervisors the following question for each of their new hires (after they’ve been trained up and on the job for a minimum of six months):

“If you had the final say, and your vote was anonymous, would you hire this person again?”

As you can see, this is a fairly simple question, but it gets at the heart of what we are trying to accomplish with our selection process. If a supervisor answers "yes" to that question, you can be relatively confident that the new hire has worked out and he or she can be considered a successful hire. If a supervisor answers "no," the hiring decision is considered to be a "miss." Obviously, we want to maximize the "yes" answers and minimize the "no" answers.

With this information in hand, we can calculate a hiring process "accuracy rate" by computing the percent of the time that supervisors answer this question with a "yes." While we realize that this isn't a perfect measurement of hiring process accuracy, we do know that it has been a simple, easy way for us to keep tabs on how well we are doing. Once you've answered that simple question, you can move on to a more in-depth, fine-grained look at selection process accuracy. More on that in future posts.
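The calculation itself is simple enough to sketch in a few lines of code. The survey answers below are made-up data for illustration; the function just computes the percent of "yes" responses, as described above.

```python
# Minimal sketch of the accuracy-rate calculation: the percent of new hires
# whose supervisor answers "yes" to "would you hire this person again?"

def accuracy_rate(answers):
    """Return the percent of 'yes' answers, or None if no answers were collected."""
    if not answers:
        return None
    yes = sum(1 for a in answers if a.strip().lower() == "yes")
    return 100.0 * yes / len(answers)

# Hypothetical supervisor responses for ten new hires:
answers = ["yes", "yes", "no", "yes", "yes", "yes", "yes", "no", "yes", "yes"]
print(f"{accuracy_rate(answers):.0f}% of hires rated a success")  # 8 of 10 -> 80%
```

A batch like this one, with 8 "yes" answers out of 10, would fall short of a 90%+ goal and flag the process for a closer look.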

At SDS, our goal for every client is to achieve the 90%+ accuracy rate we’ve been consistently achieving since we began serving clients in 2001.

Over time, we have learned that there are many tools to help you select candidates (e.g. applications, tests, interviews, work simulation tasks, etc.), but no tool is perfect. Each tool has its strengths. The most accurate selection process uses a combination of tools, rather than relying on any single method to predict who will be successful on the job, and then monitors each of these tools to identify what’s working and what is not.

SDS helps you customize the best combination of tools, and our continuous improvement feedback loop means we’re monitoring the process for you to ensure you’re getting the most accurate, pace-setting process to gain a competitive talent advantage.

You don’t want to be flipping coins when making these important organizational decisions!

January 6th, 2016 | Updates | 2 Comments