THE TOP TEN COMPLAINTS ABOUT ASSESSMENT CENTERS AND HOW TO OVERCOME THEM
Walter S. Booth, Ph.D.
The BOOTH RESEARCH GROUP
This is the second of two articles which discuss the problems associated with assessment centers and offer ideas on how to overcome them. The first article addressed the most common characteristics of a poorly designed and poorly managed assessment center. This article presents the most common complaints about assessment centers - even good assessment centers - and discusses ways to address them.
The assessment center method was originally developed to overcome weaknesses found with more traditional selection methods such as written tests and interviews. Research has shown that assessment centers are better predictors of supervisory and managerial success than any other process. Assessment centers are much better tools than written tests or interviews for the measurement of such critically important constructs as leadership, decision making, interpersonal skills, and common sense (see Bentz, 1985; Bray, 1982; and Howard & Bray, 1988). There are other advantages of assessment centers over interviews and written tests as well.
Written tests consistently demonstrate adverse impact against protected classes while assessment centers do not. Efforts to reduce or eliminate adverse impact in written tests have proven futile over the past fifty years, and many psychologists are now reluctantly concluding that the disparate performance of minorities on written tests cannot be solved by "improving" the written tests themselves. Alternative testing methods must be developed.
The traditional interview, by far the most widely used selection method, has also proven to be a less than satisfactory method for the selection of managers and supervisors (see Hogan et al., 1994). Most people selected with the interview method are chosen on the basis of "first impressions", and principally for their "likeability". Many people think that if they conduct several interviews, they can overcome these problems, but the fact remains that candidates tend to tell interviewers what they want to hear rather than what they truly believe.
Although there is overwhelming evidence in support of assessment centers, there are critics of the process. Criticisms of poorly designed and badly managed assessment centers are not at all uncommon. But even well-designed and well-run assessment centers can face criticisms and complaints. Listed below are the most common complaints I have heard about the assessment center method, and ideas on how to address these complaints.
THE TOP TEN COMPLAINTS ABOUT ASSESSMENT CENTERS:
Complaint #1: "Mary came out first in the assessment center. Everyone knows that she is a good talker. She knows what to say in an assessment center to impress people, but when you get her out on the job, she doesn't do it."
There are many possible reasons for this complaint. First, it is certainly possible that the person making the observation may simply be wrong in his or her opinion of Mary. That is to say, this individual's judgement may be inaccurate or, at least, biased. When confronted with the possibility that he or she may not hold an accurate view of Mary, this same person will often claim that he or she is not alone in this opinion. "You can ask anyone in the department and they will tell you that Mary is incompetent." In many cases, the person is expressing his or her own biased beliefs and not those of everyone in the department.
But, let us assume, for argument's sake, that there is some truth to the observation, and that the assessment center was generally a good one. My first reaction to this complaint is that the problem is not with the assessment center, but rather with the organization.
That is to say, an assessment center measures whether a person knows what needs to be done, when it needs to be done, and how to do it. If a person has demonstrated this in an assessment center but is not demonstrating it on the job, then it strikes me that someone is failing to hold this individual accountable on the job. This person has shown that he or she knows what, when, and how to do the right thing. If he or she is not doing this on the job, then the person's supervisor is not performing adequately.
I do recognize, however, that as effective as assessment centers are, they have their limitations. I have yet to find any exercise which can duplicate the actual pressures of a real-life, serious tactical situation. The unique tensions and pressures of a hostage situation, searching an unfamiliar building in which an armed criminal may be present, facing an armed person who is mentally unstable, fighting a huge conflagration, handling a mass casualty event, or similar life and death situations cannot be adequately simulated. People may act calmly during a simulation but fall apart during a tense, real-life situation. While assessment centers put candidates under tremendous pressure, and we can assess a candidate's knowledge of strategy and tactics, I believe that the evaluation of an individual's reaction to real-life emergency scenarios is best done on the job.
In addition, whereas assessment centers are good for measuring many important qualities in candidates, they do not accurately assess several important aspects of performance such as a candidate's work ethic, his or her contribution to the organization, and similar historical aspects. What people have done within an organization to improve themselves and the organization is important. And I am not very sympathetic to those who claim that they haven't been given the same opportunities as another to make contributions to an organization. While there are exceptions, in the vast majority of situations, people who want to make contributions can.
The assessment center method, particularly if it uses assessors from outside an organization, is not a good method for evaluating a person's work ethic or work history. It is also not very amenable to evaluating a person's dedication to the job, and similar factors. I believe that these aspects of a person's career should be measured prior to promotion, but they should be assessed separately from the assessment center process.
Complaint #2: "I can't understand why Harry didn't come out on top of this assessment center. Harry is the kind of leader that's respected by everyone in this department. There isn't a person in this department that wouldn't trust following Harry into a dangerous situation."
Harry probably is respected by many in the department, and most of his peers probably would trust him with their lives. I suspect that Harry is the kind of person who, when leading a group of officers into a dangerous situation, has the safety of each of those officers uppermost on his mind. Harry is the kind of supervisor who will ensure that his subordinates come out of the situation and are able to go home to their families at the end of the day. Harry is just that kind of a leader.
Unfortunately, what is unclear to the person who raises this complaint is that the organization may not have been looking for that kind of leadership. Instead, the organization was seeking a leader capable of bringing the organization through dramatic changes. The Chief of the department knows that as the organization goes through these gut-wrenching changes, many employees will put up resistance, and some will even leave the department because they cannot accept the change. Basically, some very good people will become lost in the wake of these organizational changes. The type of leader needed for this difficult challenge is often different than the one whose employees know will be constantly vigilant for their well-being, comfort, and safety.
I see the problem stemming from this complaint as employees who were not well-informed about what the organization wants. I think there is a shared responsibility for this. The organization has the responsibility to inform employees about what is needed in order to succeed. The organization should keep employees informed of the department's general direction and philosophy so that they can choose the paths that best suit them. On the other hand, it is also important that employees take responsibility to learn of the kinds of changes that are impacting the organization and the position they are seeking. If people want to succeed in an organization, they should be exploring what the organization needs. At times, an organization may need a leader who can safely bring people through a dangerous tactical environment. At other times, it may need a leader who can bring the organization through tumultuous times.
Complaint #3: "Assessment centers are too subjective."
I think that many people have fallen into the trap of believing that "subjectivity is bad, objectivity is good". The fact is that subjectivity exists whenever humans evaluate other humans. This occurs when your supervisor evaluates you on the job, and it occurs when the public evaluates you or your organization's performance. When people dismiss a criticism or evaluation of themselves because it is "subjective", they miss an opportunity to grow and learn, and they ignore the very important reality that much of who we are is defined by how others view us.
I sometimes hear candidates say, "I can't see why I was seen as an autocratic manager when you can ask anyone in the department and they will tell you that I am the most participative manager around. It is obvious that this was just one or two assessors' subjective opinions about me." This perception is certainly possible, but "asking anyone in the department" is equally subjective.
I have grown concerned that in response to candidate complaints that assessment centers are too subjective, many practitioners of this methodology have gone too far in attempting to make it completely objective. This is typically done in two ways. First, in an effort to make the exercises objective and less susceptible to "subjective interpretation", they are designed with obvious, well-practiced, and clear-cut solutions. Second, the assessors are given a behavioral checklist against which they evaluate the candidates. If the candidate does 5 of the 10 behaviors listed, the candidate's score is five. If the candidate performs 7 behaviors, the resulting score is seven, and so forth.
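The checklist arithmetic described above is simple enough to sketch in a few lines of code. This is only an illustration of the scoring rule, not an actual assessment instrument; the behavior names are hypothetical.

```python
# Toy sketch of behavioral-checklist scoring: a candidate's score is simply
# the count of listed behaviors the assessors observed him or her perform.
# The behaviors below are invented for illustration only.
CHECKLIST = [
    "greeted the role player",
    "asked clarifying questions",
    "summarized the problem",
    "proposed a solution",
    "set a follow-up date",
    "documented the meeting",
    "remained calm",
    "listened without interrupting",
    "delegated appropriately",
    "closed the meeting courteously",
]

def checklist_score(observed_behaviors):
    """Score equals the number of checklist behaviors observed."""
    return sum(1 for behavior in CHECKLIST if behavior in observed_behaviors)

# A candidate observed performing 5 of the 10 behaviors scores 5.
observed = set(CHECKLIST[:5])
print(checklist_score(observed))  # prints 5
```

Note what the sketch makes plain: the rule counts only *whether* each behavior occurred, with no room for judging how well it was done - which is precisely the limitation discussed next.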
My concern regarding the preference of both some practitioners and some candidates for the use of exercises that are simplistic with obvious solutions is that, as one promotes to higher levels in an organization, the problems become more complex and the solutions are rarely black and white. In an effort to make the exercises less subjective, test developers have made them more unrealistic as well as far too simplistic.
Such simplistic problems are more amenable to behavioral checklists. Assessors simply have to see whether or not a candidate performed a particular behavior. While this may be appropriate in assessment centers for an entry level position, it is hardly the way to evaluate supervisors or managers. Perhaps more importantly, it is often not that something is done, but rather how it is done that matters so very much in real life. To the extent that assessment centers reflect real life, "how" a person responds will matter as well.
Candidates will sometimes tell me that they have spoken with another candidate who did better in an assessment center. These candidates have difficulty understanding why they scored lower than the other candidate, because they report that they both did the same thing. When behavioral checklists are used, if two candidates take the same actions, they will receive the same scores. In my assessment centers, however, it is quite possible for two people to take the same actions, but be evaluated differently because of the qualitative differences in the actions they took.
Take, for example, a situation in which two candidates state that they would attempt to transfer a problem employee with whom they had worked for two years but with whom they had made little or no progress. One candidate justifies this by saying that after two years, he felt it should be someone else's turn to deal with this problem employee. The second candidate states that the reason he would transfer the employee is that after two years of no progress, perhaps it is not the employee so much as it is himself, the supervisor, or perhaps the environment that is the cause of the problems. By transferring the employee to another supervisor, he could eliminate himself as the cause. Furthermore, if the employee had to be terminated, it would be better to have two separate supervisors with documentation than just one.
In the above example, the first candidate might score low while the second candidate might well score high even though they both recommended the same action. While many of the possible options to the exercises can be discussed with the assessors prior to the assessment center, I have found that if realistic situations are used, it is not feasible to identify every potential response by candidates to every problem (i.e., a behavioral checklist). Instead, the assessors fully discuss the problems from various perspectives, but allow some flexibility in judgement.
The fact that not all possible behaviors can be anticipated and placed on a behavioral checklist for the assessors to use troubles some practitioners of assessment centers. As a result, these practitioners rely upon simplistic kinds of problems which tend to have clear-cut solutions. It is my bias to offer problems which are often not clear-cut, and which often do not have obvious or single solutions.
Now don't get me wrong. I think that there has to be some (indeed, I would say "considerable") objectivity in the assessment center process. I therefore limit the subjective evaluation of candidates, but do not completely eliminate subjectivity. Quite frankly, I am not convinced that anyone can eliminate subjectivity when humans evaluate one another in realistic settings.
Indeed, I think that one is more likely to simply hide subjectivity rather than eliminate it, and I would rather have any potential subjectivity known to everyone rather than to have it unknown or hidden. I also agree that it is important that the assessors be well-trained and in agreement on what they should be evaluating. Instead of masking subjective opinions, I have the assessors bring them into the open. I like to have assessors discuss their biases and to ensure that biases are not related to factors such as gender, race, age, or similar variables.
Subjective evaluations are a part of human life. Law enforcement and fire fighting personnel are constantly evaluated by the public. Keep in mind that these customers do not see you in a standardized setting, confronting the same problems in the same situation, as assessors do. Furthermore, these customers know little about how you do your job, or even what the job of a law enforcement or fire fighting supervisor or manager entails. Assessors, on the other hand, typically see candidates in standardized settings, making comparisons easier. They often do not have historical baggage to overcome, and they are aware of the job requirements and the organizational expectations of the job.
And finally, for those individuals who think that assessment centers are bad because they are too subjective while written tests are good because they lack subjectivity, consider the following. First, the selection of the source materials and, more importantly, the development of written test items from that source material is far from objective. There are inherent biases whenever anyone writes a particular test item. Sometimes those biases are in the source materials themselves. At other times, the test item developer may lean toward one area or type of item versus another.
Second, what are your chances of getting a written test item with four choices correct if you haven't a clue about the correct answer? One in four. Most people can eliminate at least one or two options, and their chances of getting the item correct thereby approach those of flipping a coin. Now that, in my opinion, is hardly the kind of objectivity one should seek in a selection process.
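The guessing arithmetic above is easy to verify. A quick sketch, using only the four-option item from the example:

```python
# Probability of guessing a multiple-choice item correctly, given how many
# of the options the candidate can rule out before guessing at random.
def p_correct_by_guessing(total_options=4, eliminated=0):
    remaining = total_options - eliminated
    return 1 / remaining

print(p_correct_by_guessing())               # no clue at all: 1 in 4 = 0.25
print(p_correct_by_guessing(eliminated=2))   # two options ruled out: 0.5, a coin flip
```

Eliminating one option raises the odds to one in three; eliminating two makes a blind guess as good as a coin flip, which is the point of the complaint about "objective" written tests.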
Complaint #4: "I didn't get confronted with the same questions or problems as another candidate, and the assessors treated me differently. There is obviously a lack of standardization."
Assessment center exercises should be standardized. That is to say, candidates should be viewed in a standardized format, facing the same kinds of problems and issues. To this we can all agree.
But the issue raised in this complaint has to do with the concept of "fairness", and it has been my experience that there are many ways that people interpret fairness. Consider the very simple problem of an exercise in which candidates are asked a series of interview questions. One way of conceiving test fairness is to say that all candidates should be given the same amount of time in this exercise and all should have to answer all of the same questions. If the exercise is designed for a total of 30 minutes and there are 10 interview questions, then the only way to ensure that all candidates respond to every question and be kept at 30 minutes is to allow the candidates no more or less than 3 minutes per question.
Other people feel that, to be most fair, all candidates should be given a maximum time limit, but that they should also be allowed the flexibility to use as much time as they feel is necessary for each question. Their response to one question might last 5 minutes, whereas their response to another question might only last 30 seconds. Given this design, it is possible that one candidate might make it through all 10 questions while another might only make it through seven or eight.
Similarly, if the assessors are allowed to ask follow-up questions, there is the possibility that they might ask one candidate a question that they do not ask of another candidate. For example, one candidate might say that he or she would delegate an in basket item while another candidate might handle the item himself or herself. In a situation allowing for follow-up questions, an assessor interviewing the first candidate might ask questions probing the value of delegating this important item or, perhaps, why the candidate chose to delegate to the particular individual noted by the candidate. Obviously different questions would be asked of the candidate who did not delegate the item.
It is my preference to allow the candidates and assessors some flexibility. If an interview question happens to focus on a particular candidate's strength, then let that candidate use more time in answering it. Or, if an assessor wishes to go into greater depth concerning a candidate's response, I encourage it. While this is my preference, I believe it is critically important to seek the candidates' input on these issues prior to the assessment center. In this way, the candidates contribute to the design of the process.
I believe it is important to allow assessors an opportunity to ask probing, follow-up questions during certain exercises such as the In Basket. The standardization of the process can be increased by giving the assessors example follow-up questions and guidelines on areas to probe. Even with these guidelines, however, there is the potential that one candidate may feel as though he or she was confronted with "a lot harder questions" than some other candidate. To that extent, I can only reply that fairness is in the eye of the beholder.
Complaint #5: "You can take a class or practice in order to do well in assessment centers."
There is an entire industry devoted to teaching courses on how to perform well in assessment centers. I know of one firm that has attempted to follow me around the country offering courses to candidates who are preparing for my assessment centers.
What I tell candidates is that, to the extent such a course makes them a better speaker, a better listener, a better writer, or a better decision maker, it may very well benefit them. To the extent that it teaches them "tricks" to gain an advantage, it will not help and may even interfere with their ability to perform well.
I strongly believe that a candidate should not prepare for an assessment center. What candidates should do is prepare for the job they are seeking. To the extent that they learn about the demands of the job, that they practice sound decision making, that they practice good supervisory or managerial skills, they should do well regardless of the individual characteristics of the assessment center process.
Certainly a person can "practice" for an assessment center. I encourage it. This practice should focus on the kinds of problems they are likely to face if they were to get hired or promoted. If they are likely to have to make budget presentations in the prospective position, then I encourage candidates for that position to learn about budgets and to practice making budget presentations. If they are likely to encounter supervisory problems, then I encourage candidates to give thought to potential supervisory problems which they might encounter, and try to resolve them. From my perspective this is not so much practicing for the assessment center as it is practicing for the job.
Complaint #6: "I saw a lot of game playing taking place in this assessment center. Also, I heard that what Sally told the assessors was just not true."
The problem of game playing in an assessment center can be a challenge for both candidates and test administrators. This is particularly true for certain exercises. As I noted in the previous article, I find the leaderless group discussion to be an exercise which is particularly susceptible to game playing. In it candidates can maliciously manipulate each other in a variety of ways. A good solution to the first problem, therefore, is to avoid exercises which have the potential for game playing.
I also think that game playing and the potential for dishonesty can be reduced by giving the assessors the flexibility to ask follow-up questions during the assessment center. By permitting the assessors to ask some probing, follow-up questions, the assessors can often see through those candidates who may be less than completely honest. I am reminded of the candidate who informed a group of assessors that he was just 12 hours shy of his undergraduate degree. By allowing follow-up questions it was learned that while this candidate was indeed just twelve hours short of attaining his degree, it had been 16 years since he last attended college.
Another aspect to this complaint is the concern of how candidates portray their backgrounds. As I noted in the previous article, the assessment center is not particularly well-suited to evaluating certain aspects of candidates such as their work ethic or their contribution to the organization. I am amused when I see multiple candidates come through individually in an assessment center, and, without knowing what the other candidates have said, all of them claim that they were the critical link in getting a project completed, or that they were all "in charge" of a particular project.
What is "the truth" for these candidates? Again, I doubt that the assessors can ferret this out, so I rely on the department to do this. But quite frankly, I am not convinced that there is an objective way of anyone knowing many of these "truths". Some in a department might say, for example, that Candidate A was the person who most contributed to a particular project, while others in the department might argue that Candidate B was really the person who contributed the most.
So, while assessment centers are not particularly good at this divining process, I have yet to find any process for this purpose in which I have much faith. That is one reason why I tend to de-emphasize the evaluation of a candidate's background and instead have the assessors concentrate on what the candidates have gained from their experiences.
Complaint #7: "I didn't get enough time in front of assessors."
This is often a problem for larger departments. We have conducted assessment centers at the entry level for 1,000 candidates, and some of our promotional assessment centers have had well over a hundred candidates. There is no way around the fact that as the number of candidates increases, the time each candidate will have in front of the assessors decreases.
In addition, some exercises are difficult to fit within almost any time constraint. Role plays in which candidates are expected to resolve a personnel problem, or deal with a long-term problem employee are excellent examples of what might be viewed as "unrealistic time frames".
There are several solutions to this problem including, of course, restricting the number of eligible candidates through higher requirements or the use of preliminary screening devices (e.g., a written test). The exercises themselves, however, should not be cut too short: I have never seen an exercise which could be conducted in less than 15 minutes and still convey something meaningful.
It is also important to ensure that the time spent before the assessors is quality time. Exercises should be designed to eliminate preliminary or irrelevant activities, and to focus only on the task at hand. In addition, it is vitally important that a "personal touch" be given to each candidate. In other words, candidates should be treated as individuals, and shown care, respect, and understanding as they are brought through this process. They should not be treated as cattle moving to slaughter.
Complaint #8: "I didn't get any feedback about my performance, or the feedback I did receive was not valuable."
The Guidelines on Assessment Center Operations require that feedback be given to candidates. I believe that there is a strong moral and ethical obligation to provide information to candidates about their performance. My firm offers two types of feedback. The first is a multi-page document which reveals individual candidate scores on the exercises and performance dimensions, and which also compares those scores with both department and national norms. In addition, we provide a narrative which summarizes the assessors' comments. The second type of feedback involves videotaping candidates during the process and allowing them (and only them) to view that videotape during a meeting with a psychologist from my firm.
Even with this extensive feedback, I sometimes hear a complaint that a candidate did not feel the feedback was sufficient. There are several problems which I believe lead to this feeling of frustration. First of all, while the vast majority of candidates are genuinely interested in feedback, I have had some candidates show up to their feedback session with their attorneys! Indeed, some candidates in some organizations want to use feedback as a means of bringing additional arguments to court. While this is rare, it causes consultants to be extremely cautious in what is provided in the feedback to candidates. This caution can result in feedback which may not be as thorough or as frank as that desired by the candidates.
The most common problem I encounter is candidates who simply cannot accept criticism. These candidates are easy to identify in face-to-face feedback sessions because they begin the session by listing for the psychologist all of the reasons they did not do well (they had a cold, they were going through a divorce, they just found out that their uncle's friend's wife might have cancer, etc.). They then proceed to deny that they did or said whatever they were observed doing or saying in the assessment center. They blame the assessment center for being too subjective, and the assessors for not listening to what they were really saying.
Such candidates are seldom accepting of the results of assessment centers, much less the kind of feedback that can be provided. They are in denial about themselves and how they are perceived by others. They cannot admit that someone could perceive them in a way which differs from how they believe they are normally perceived. In such a situation, there is very little in the way of valuable feedback which can be provided to these candidates.
Finally, one of the problems related to feedback is that in almost every case, a test consultant has been hired to conduct a selection test and not an employee development session. There are some exceptions. I have some clients who ensure that the process involves employee development by contracting for the enhanced candidate feedback mentioned earlier. I have even worked with organizations which have offered an assessment center "for training purposes only" prior to the assessment center for selection purposes. This is, of course, rather expensive, but the benefits to the individuals and the organization have proven enormous.
Complaint #9: "The Chief has too much influence over the process."
I sometimes find this complaint from candidates amusing because Chiefs often worry that they are at the mercy of the test consultant, and have little or no control over the process or the outcome. I also have many clients whose Chiefs do not wish to have any involvement in the selection or promotional processes.
It is true, and it is my preference, that the Chief of the department have some influence over the testing process. The Chief sets the general direction of the department, and coordinates the development of the department philosophy, vision, and values. To the extent that this information is incorporated into the testing process (as I strongly believe it should be), the Chief can have influence over the testing process.
The degree to which a Police Chief influences other aspects of the testing process can be enhanced or reduced depending upon the comfort level of the candidates, the civil service commission, the human resource department, and the consultant. For example, if the organization wants to minimize the Chief's influence, then the consultant should be responsible for the selection of assessors. If, on the other hand, the organization is comfortable with some influence from the Chief, he or she might select the assessors and participate in the assessor training in order to offer his or her insights into the organization and position.
I believe that the degree to which a chief executive of an organization influences a promotional testing process should reflect the desires of the organization for a fair and unbiased process, and for a process which supports the goals and philosophy of the department. Candidates should be able to voice their concerns about the amount of influence a Chief has on the process, and candidates should also be informed of the degree and type of influence a Chief will have on the process.
Complaint #10: "Assessment centers are too costly, too time consuming, too labor intensive."
Assessment centers are indeed costly, time consuming, and labor intensive. Whether they are too costly, too time consuming, or too labor intensive is a matter of debate. For example, while assessment centers usually have higher initial costs than other techniques such as written tests or interviews, those organizations which have suffered legal challenges, consent decrees, a workforce which does not reflect society, or other limitations of written tests may argue that the long-term costs of assessment centers are much less than those of written tests.
Indeed, I believe it is the tight budgetary constraints in today's public sector that most strongly argue for the use of the best predictor of job success. With tight budgets, the costs associated with selecting or promoting the wrong individuals can no longer be tolerated. As one Chief said to me recently, "There are no longer any places in this organization to hide an incompetent employee." Promoting a poor supervisor not only creates problems with that individual, but the problem employee also exerts considerable negative influence over his or her subordinates and peers. Like a cancer, a problem employee grows within an organization and poisons it. In my opinion, organizations can no longer afford not to use the best selection techniques available.
Any promotional process can be the source of complaints. Assessment centers are particularly susceptible to complaints because they involve human judgement and are often unfamiliar to candidates, among a host of other reasons. But the fact remains that the assessment center method is the very best method for selecting and promoting personnel. Given this fact, it is better to utilize this important and valid method, and to take steps to avoid complaints, than to shy away from the method and use other techniques which, while perhaps less susceptible to complaints, are much poorer for selecting the very best.
Bentz, V. J. (1985, August). A view from the top: A thirty year perspective of research devoted to discovery, description, and prediction of executive behavior. Paper presented at the 93rd Annual Convention of the American Psychological Association, Los Angeles.
Bray, D. W. (1982). The assessment center and the study of lives. American Psychologist, 37, 180-189.
Hogan, R., Curphy, G. J., & Hogan, J. (1994). What we know about leadership. American Psychologist, 49, 493-503.
Howard, A., & Bray, D. W. (1988). Managerial lives in transition: Advancing age and changing times. New York: Guilford Press.
International Personnel Management Association (1989). Guidelines and ethical considerations for assessment center operations. Public Personnel Management, 18.