The U.S. Equal Employment Opportunity Commission

Commission Meeting of May 16, 2007



STUART J. ISHIMARU, Commissioner


PEGGY MASTROIANNI, Associate Legal Counsel

This transcript was typed from a video tape provided by the Equal Employment Opportunity Commission.


Announcement of Notation Votes

Motion to Close a Portion of the Next Commission Meeting


Introduction of Topic:

Richard Tonowski

Carol Miaskoff

Panel I

Jean Kamp

Paula Liles

Jeffrey Stern

James Robinson, Sr.

Panel IIA

Ken Willner

Cyrus Mehri

Rae T. Vann

Adam T. Klein

Panel IIB

Shereen Arent

Lawrence Ashe

Fred Alvarez

Panel III

James L. Outtz

Kathleen K. Lundquist

Motion to Adjourn


CHAIR EARP: Now the meeting will come to order. In accordance with the Sunshine Act, today's meeting is open to public observation of the Commission's deliberation and voting. At this time, I'm going to ask Bernadette Wilson to announce any Notation Votes that have taken place since the last Commission meeting. Ms. Wilson?

MS. WILSON: Good morning, Madam Chair, Madam Vice Chair, Commissioners. I'm Bernadette Wilson from the Executive Secretariat. We'd like to remind our audience that questions and comments from the audience are not permitted during the meeting, and we ask that you carry on any conversations outside the meeting room, departing and re-entering as quietly as possible.

Also, please take this opportunity to turn your cell phones off, or to vibrate mode. I would also like to remind the audience that in addition to the elevators, in case of emergency, there are stairways down the hall to the right and left as you exit this room. Additionally, the rest rooms are down the hall to the right.

During the period April 17th, 2007 through May 14th, 2007, the Commission acted on five items by Notation Vote:

Approved litigation on two cases;

Approved an RFP for a full service publication storage and distribution center;

Approved a motion to reconsider the Headquarters Project Management and Relocation Services Contract; and,

Approved the Headquarters Project Management and Relocation Services Contract.

Madam Chair, it's appropriate at this time to have a motion to close a portion of the next Commission meeting in case there are any closed agenda items.

CHAIR EARP: Thank you, Ms. Wilson. Do I hear a motion?


CHAIR EARP: Is there a second?


CHAIR EARP: Any discussion?

COMMISSIONER ISHIMARU: Madam Chair, I'd like to explain my vote, and also to thank Commissioner Griffin for asking for reconsideration of the vote on the relocation services contract, which was the subject of our last meeting. And I wanted to thank you and your staff for providing Commissioner Griffin and me with information that was very helpful in allowing us to cast a vote for the contract. I know at our last meeting, there was a long discussion about the contract and the move, and it was contentious at times. And I was sorry for that, but I was glad we were able to get the information we needed to make the vote. I know that we are going to move, likely, and we do need a contract to deal with those services. I know that you're in control of the move, and I also appreciate that. And I know everyone is anxious to see where we, in fact, may move to. But, again, I want to thank you and Commissioner Griffin for allowing this to happen so we didn't have a premium that we would have been forced to pay on the contract. Thank you, Madam Chair.

CHAIR EARP: Thank you. Can we have a vote on the motion? It has been properly moved and seconded. All in favor?

(Chorus of ayes.)

CHAIR EARP: Opposed? [No Response] The ayes have it. Thank you, Ms. Wilson.

Good morning, again, everyone. Welcome to EEOC's public meeting on employment testing and screening. We are honored today to have nationally recognized organizational psychologists and advocates of employers, as well as employees, who will share their experiences and perspectives on employment testing and screening. We also are honored to host some of EEOC's own litigators, and charging parties, who have experienced legal issues associated with testing and screening criteria. We look forward to learning from your remarks, and to receiving your recommendations about effective ways to focus the Commission's resources in this area.

Contemporary employers commonly use a range of employment tests and other screening tools to make hiring, promotion, termination, or other employment decisions. For example, many employers use basic literacy tests, personality tests, medical and fitness tests, and credit checks as gauges of employment worthiness.

The goals of today's Commission meeting include the following: to gather information about testing practices that abide by the requirements of EEO law; to educate about emerging trends in employment testing and screening; to educate about EEO laws prohibiting discrimination in employment testing and recent EEOC and other litigation; to discuss increasingly common employment screens, such as criminal background checks and credit checks, and the potentially discriminatory impact they have on people of color; to explain when employment tests are medical examinations subject to the Americans with Disabilities Act's restrictions; and to receive recommendations from the panelists about effective ways to focus the EEOC's resources in this area.

Before we begin - no opening statements? Okay.

COMMISSIONER ISHIMARU: Do you want to make one?

CHAIR EARP: Commissioner, no? Okay, there being no opening statements from my fellow Commissioners --

COMMISSIONER ISHIMARU: Madam Chair, we were told we'd have a chance to make closing statements at the end.

CHAIR EARP: Absolutely.


CHAIR EARP: Thank you Commissioners.

One housekeeping note before we begin. I would like to remind everyone that we have a full agenda, and we're on a tight timeline. To afford each panelist their fully allotted time to speak, and to allow the Commissioners to ask questions, I've asked Legal Counsel to make use of our timing lights. Speakers, you know where the lights are, right? The timing light will turn yellow, giving each panelist and Commissioner a one-minute warning. When the light turns red, it means stop, your time has expired. You may finish your thought, but please respect the time limit so that those scheduled later in the meeting are not rushed.

MS. MASTROIANNI: Excuse me, Madam Chair, actually, the yellow light is a two-minute warning.

CHAIR EARP: Okay. You have one minute more.


CHAIR EARP: So let's turn to the substance of our meeting. We'll begin with EEOC Staff, Richard Tonowski, our Chief Psychologist from the Office of General Counsel, who will provide more information about employment tests. And also, Carol Miaskoff, the Assistant Legal Counsel. Thank you.

MR. TONOWSKI: Thank you. Good morning, Madam Chair, Madam Vice Chair, Commissioners, and distinguished panelists. This morning, I'm giving you a brief overview on testing threats and promises.

An employment test is any procedure used to make an employment decision. There are the familiar multiple choice tests of job knowledge or basic cognitive ability, such as doing arithmetic, or understanding written instructions. There are also physical ability tests, as simple as moving a box from here to there, or as sophisticated as measuring the strength of specific muscle groups. Beyond assessing strength and endurance is the issue of pre-employment medical inquiries for both current and potential conditions, both physical and psychological. Background checks are extensively used, and sometimes include examination of credit history. Finally, over the last 20 years, multiple choice personality and integrity tests have become increasingly popular. In all, employment testing is widespread and increasing.

A mature technology of testing promises readily available methods that serve as a check against both traditional forms of discrimination, as well as the workings of unconscious bias. If that is a promise, then the threat comes from institutionalizing technical problems not yet fully addressed, the undermining of Equal Employment Opportunity under the guise of sound selection practice, and the unintended introduction of new problems that will require resolution to safeguard both test takers and test users.

Since the Supreme Court's landmark decisions in the 1970s, which Carol will reference, some notable things, not fully envisioned, have happened to testing.

Understanding of the role of cognitive ability in jobs, and of tests to measure it, greatly increased. Statistical approaches to summarizing results across many separate studies to reach general conclusions became widely used. The limit to these generalizations remains a matter of contention.

How test scores are reported became hotly debated. Congress made unlawful test score adjustments based on protected class. Some psychologists propose banded rather than discrete scores as a means of promoting both diversity and test utility. Opponents have objected on both technical and legal grounds.

The enactment of the Americans with Disabilities Act, or ADA, restricted medical exams and disability-related inquiries for applicants and employees. There remains the issue of differentiating proper tests of competencies from unlawful medical investigations.

Sometimes promise and threat arrive together. Personality testing has been hailed by some as a means for a more complete, and thus, more valid assessment of potential employees. It may reduce the adverse impact associated with cognitive ability testing used alone. Well, when does legitimate inquiry into an applicant's qualifications become an intrusive search for medical conditions? The Seventh Circuit held in Karraker v. Rent-A-Center, Inc., that the line is crossed with the use of personality assessment instruments such as the Minnesota Multiphasic Personality Inventory, or MMPI. The MMPI is widely used for clinical diagnosis, and was originally normed on hospitalized psychiatric patients. However, researchers have combined its questions in new ways to measure a variety of traits, not all of them clinical. Some of its current 567 questions explore whether the individual sees things that others do not, has laughing or crying fits, or was compelled to do things under hypnosis. The Court of Appeals held that the MMPI, by its nature, was a medical examination violating the ADA. An earlier case, Soroka v. Dayton Hudson, Inc., which arose in California and was ultimately settled, raised similar issues, as well as views of test usage at the individual item level that alarmed psychologists. This is but one instance where science and law intersect, and where the outcome has real consequences for employers and for potential employees.

EEOC is seeing a limited but increasing number of tests in the course of its investigations. Sometimes we are gratified by the care given to both technical and EEO considerations. There are times when superficial work and unsupported conclusions come from consultants who should know better. This kind of work constitutes a threat to job applicants, employers, and all concerned with good selection practice.

On the plaintiff's side, there are occasionally arguments, backed by a highly selective reading of the research literature, that whatever the employer did was unacceptable simply because the employer might have done something else. This also presents a threat to good selection practice. But first, the legal bases, from the Office of Legal Counsel.

MS. MIASKOFF: Thank you very much, Rich. Good morning, Madam Chair, Madam Vice Chair, Commissioners, distinguished guests. I am Carol Miaskoff, Assistant Legal Counsel for Coordination. I will give a very short summary of some major legal principles relevant to employment testing to serve as a reference for the presentations that you will hear this morning.

Before starting, however, I would like to recognize Kerry Leibig and Mary Kay Maurin, both Senior Attorney Advisors in the Coordination Division, Office of Legal Counsel. Each of them contributed greatly to the preparations for this meeting, and I appreciate their efforts.

In Title VII of the Civil Rights Act of 1964, Congress expressly allowed professionally developed ability tests, but only if they were not designed, intended, or used to discriminate because of race, color, religion, sex, or national origin. In the 1991 amendments to Title VII, Congress added Section 703(k), which set out provisions on scoring tests and enshrined some standards subsequently developed in the case law.

To go back to that case law, the Supreme Court considered the basic standard from Title VII that permitted non-discriminatory tests for the first time in Griggs v. Duke Power Company, which is the seminal Supreme Court case on employment testing. The facts in Griggs involved a workplace with five operating departments ranging from labor, at the bottom, to laboratory and tests at the top.

In 1965, the company abandoned its policy of restricting African Americans to the Labor Department. At the same time, however, the company made completion of high school a prerequisite to transfer from the Labor Department to any other department. Also, as of July 2nd, 1965, which coincidentally was the effective date of Title VII, the company announced that to qualify for placement in any but the Labor Department, it would be necessary to receive satisfactory scores on two professionally developed aptitude tests, as well as have a high school diploma.

A vice president for the company testified at trial that these requirements were instituted to generally improve the quality of the workforce. When this case came to the Supreme Court, the Fourth Circuit already had found that whites registered far better on the company's alternative requirements than blacks. The Supreme Court in its decision in Griggs held that the legality of these professionally developed tests turned on whether they were job-related, noting that "the touchstone is business necessity." The Court also stated that Title VII forbids employers from "giving these devices and mechanisms controlling force unless they are demonstrably a reasonable measure of job performance."

In Albemarle v. Moody, a 1975 case, the Supreme Court expanded on the standard, emphasizing that the tests must be closely related to the job in question. Subsequently, the EEOC, acting with the Departments of Labor and Justice and the agency now known as OPM, adopted in 1978 the Uniform Guidelines on Employee Selection Procedures, affectionately known as UGESP.

UGESP was published at a time when lawyers and psychologists were confronting the differences between judicial and scientific approaches to assessing the effects of employment tests. UGESP provided uniform federal guidelines for establishing when employment tests were not discriminatory.

Beyond Title VII, I would like to quickly mention Title I of the Americans with Disabilities Act, and the Age Discrimination in Employment Act. Title I of the ADA regulates when employers who are covered by the law may make disability-related inquiries, or require applicants or employees to undergo a medical exam. Under the ADA, these inquiries and exams are prohibited pre-offer, allowed post offer if everyone entering the same job category gets the test, and regulated during employment.

I will quickly finish here. The ADA also has several provisions that specifically relate to tests. It makes it unlawful to use employment tests that screen out an individual with a disability, or a class of such individuals, unless the test as used by the employer is shown to be job-related and consistent with business necessity, there's that standard again. It's also unlawful to fail to select and administer employment tests in a way that ensures that the tests accurately reflect the skills, aptitude, or whatever factor is being measured, rather than reflecting an applicant's or employee's impairment. And finally, reasonable accommodation in testing is required.

In conclusion, I want to add that the Age Discrimination in Employment Act also prohibits discrimination based on age, 40 or over, with respect to terms and conditions, and privileges of employment, which include selecting applicants or employees for hiring, promotion, or reductions in force. The ADEA also prohibits disparate impact discrimination, unless the challenged employment action is based on a reasonable factor other than age.

This concludes my short summary. You will now hear from EEOC litigators and charging parties, who will bring these principles to life. Thank you.

CHAIR EARP: Thank you for setting the stage. May we have the first panel? On the first panel we have Jean Kamp, Associate Regional Attorney in the Chicago District Office, and she's accompanied by Paula Liles, a Charging Party. Then we'll hear from Jeffrey Stern, Senior Trial Attorney in Cleveland, and he's accompanied by Charging Party, James Robinson, Sr. Thank you. Jean?

MS. KAMP: Thank you, Madam Chair, Madam Vice Chair, Commissioners, ladies and gentlemen. I've been asked to provide testimony to you about the EEOC's litigation against the Dial Corporation, which resulted in a final judgment in favor of the EEOC for about $3.7 million, and the ending of the use of what they called their "Work Tolerance Screen".

The case began with Paula Liles, who's sitting right here, a very brave and very tenacious woman who, among other things, sat through the six-day trial. She had worked at Dial through a temporary agency in their plant in Fort Madison, Iowa, where they make Armour Star sausage products. She knew, therefore, what the job was like that she was applying for. It was basically an entry level production job, which required carrying rods of sausages, which weighed about 35 pounds, back and forth all day long, putting them up and down on a rack at various levels ranging from about 35 inches off the ground to about 65 inches off the ground. It was a very difficult job.

At trial, there was testimony that the ability to do that kind of lifting is very much correlated with gender, that about 90 percent of men are able to do it, and about 10 percent of women are actually able to do this.

Ms. Liles had watched it. She knew that she was able to do the job. She applied for it. She went through a very detailed application procedure that Dial used in selecting people. She was offered the job subject to a pre-employment physical exam. At just this time, unfortunately for Ms. Liles, they instituted this Work Tolerance Screen, and what it was was a strength test. And it looked quite a lot like the job. What you did is you had to pick up a rod with 35 pounds of weights on it off of the table, walk with it about 10 feet to a rack, put it at 35 inches, take it off, walk back to the table, turn around, walk back to the rack again, this time move it to 65 inches, take it off, go back to the table, repeat the cycle for a seven-minute test period.

Paula took the test, and she was, in fact, able to do it. She completed the test, and they had a little scoring sheet on it, and she was marked pass on that scoring sheet, but they did have a comment on it saying "lifting up over her head was difficult because of her height". So she left, I believe, thinking that she was all set, that she passed the test, but the job offer was later withdrawn on the grounds that she had failed it.

The test had been designed by the plant nurse and an occupational therapist at a local hospital. The plant nurse took notes while she watched people taking the test.

Paula failed the test under this scoring, and I believe about 16 of the 22 women who took the test at the same time also failed it. Paula filed a charge and the litigation started. Over the next four years, a couple of hundred people, I believe, were actually given this test. Ninety-seven percent of the men passed the test; 38 percent of the women passed the test. As a result, the hiring went from being 50 percent female before the test to 15 percent after.
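[Editorial note: the disparity in these pass rates can be checked against the "four-fifths rule" of thumb from the UGESP guidelines referenced earlier in this meeting. The short sketch below is an illustration using the figures from Ms. Kamp's testimony, not part of the testimony itself; the function name is ours.]

```python
# Illustrative adverse-impact check under the UGESP "four-fifths rule":
# a selection rate for a protected group below 80 percent of the highest
# group's rate is generally regarded as evidence of adverse impact.

def selection_rate_ratio(protected_rate: float, highest_rate: float) -> float:
    """Ratio of the protected group's selection rate to the highest group's rate."""
    return protected_rate / highest_rate

men_pass = 0.97    # 97 percent of men passed the Work Tolerance Screen
women_pass = 0.38  # 38 percent of women passed

ratio = selection_rate_ratio(women_pass, men_pass)
print(f"Selection-rate ratio: {ratio:.2f}")  # about 0.39
print("Below four-fifths threshold:", ratio < 0.8)  # True
```

On these figures, women passed at well under half the men's rate, far below the four-fifths threshold, which is consistent with the adverse-impact finding described in the testimony.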

We tried the case in August of 2004. At trial, Dial argued content validity, that the test looked like the job and, therefore, was okay; and also criterion validity, that the test was employed to reduce injuries and, because it did that, it was okay. The problem was that competing experts made it clear that neither one was true. The test, in fact, was a great deal more difficult than the job, which is why women were able to do the job, and had been doing the job before the test, but were not able to pass the test.

I see my time is up. Let me just say the criterion validity was just as bad. In fact, women were no more likely to be injured than men, either before or after the test. And I'll stop at that point.

CHAIR EARP: Good. Thank you.

MS. LILES: Good morning ladies and gentlemen. First, I would like to thank the people for asking me here.

Pre-employment testing is something that should be taken seriously, and handled very carefully. In my case, I was employed as a temporary at the Dial Corporation, and I wanted a full-time job with them, so I filled out their application, and went through their hiring process.

I received a letter in the mail with a potential start date. The only thing left for me to do was what they called their Work Tolerance Test, something that had never been done before. The test was to see how you could physically perform one of the jobs in their plant, but I had no worries. I knew I was a fairly fit person, and I had performed various jobs in the past, and had no complaints. I did their test, and I was told that I had passed.

Then the devastation began. I received another letter in the mail from them saying they did not want my employment due to my height. In an instant, as I was reading this letter, every emotion a person can have went through me. I thought I was going to fall apart.

I went to the proper agency. They took it to the EEOC. They did an investigation, and eventually filed charges against the Dial Corporation on my behalf, and on the behalf of several other women. Ever since that day seven years ago, I have been affected by that test in every part of my life. I have felt like I've been on a roller coaster. It was going down, and I was going to go down with it.

The EEOC did win the case. I got my job back with all my back pay, but it didn't stop there. Three months later they fired me, saying once again that I couldn't do the job. The union took it to arbitration, and once again, we won.

In the last seven years, my credit has been ruined, my reputation as a hard worker has been on the line. I've been in and out of therapy, and not to mention how much harder it was being a single parent than it already is. My heart was broken when my daughter graduated in 2006, and she had to pay for her own class ring, her own senior pictures, and even her cap and gown for graduation day. A parent is supposed to be able to reward a child for the biggest accomplishment so far in their life. I couldn't do that with no job. This whole ordeal had quite an impact on me and my family, something I will carry with me for life.

In conclusion, it was a great win for all of us involved, the EEOC, myself, and Jean Kamp, who I will be forever grateful to. She won this case, got me my job back with my back pay which is helping me pick myself back up, and for that I owe her, and everyone involved in winning this case a great big thank you. Thank you.

CHAIR EARP: Thank you. Jeff?

MR. STERN: Thank you, Madam Chair and Commissioners for inviting me to be here today, and talk to you briefly about a case that EEOC brought on behalf of one of our courageous charging parties, James Robinson, Sr., and a nationwide class of African American test-takers seeking apprentice positions, which we got resolved with a settlement agreement, which required changes in employment testing.

Mr. Robinson, and 12 other African American apprentice test-takers, filed discrimination charges against their employer, Ford, against their union, the UAW, and against the Joint Apprentice Program. They alleged that they and blacks, as a class, had been denied acceptance into the Employer’s and Union's Joint Apprentice Program due to their race. Two charges were filed in Cleveland concerning a test episode in Walton Hills, Ohio, and 11 charges were filed in the Cincinnati area office concerning a test episode in Sharonville, Ohio.

Ford's Apprenticeship Training Selection System, the ATSS, was a paper and pencil cognitive test, which applicants needed to pass in order to enter the Apprenticeship Program. A written cognitive test can assess aptitude for mechanical tasks by measuring verbal, numerical, or spatial reasoning.

The Apprentice Program at Ford was a stepping stone to highly coveted certification as a Skilled Trades Journeyman with increased earnings, increased job security, increased job mobility, as well as prestige.

The Commission's extensive investigation ripened into an EEOC determination that the ATSS had a disparate impact on the charging parties, and on blacks as a class. Only two of 51 African American test-takers passed the test at the 1998 Sharonville test episode, compared with about one-third of the white test-takers who passed that test. Nationwide, about 13,000 Ford employees took the ATSS at more than 40 facilities between February 2000 and June 2003, and about 25 percent of those test-takers nationwide were African Americans.

The EEOC's investigation found that the ATSS was properly validated. The real problem, as we assessed it, was that Ford's efforts to find a less discriminatory alternative had become inadequate because they had not been updated to reflect expert knowledge that was current by the mid-1990s. In our view, Ford needed to bring itself into compliance with Title VII by considering less discriminatory selection procedures, such as work sampling or trainability tests. A work sample requires the applicant to perform the task or job in question; a trainability test is done the same way, except that it includes a period of instruction for applicants who are not familiar with the job or the task in question.

EEOC's extensive investigation and intense conciliation under EEOC auspices allowed the parties, Ford, the Union, and the class members, to resolve the case with a settlement. The settlement was far-reaching and advanced the public interest. Ford agreed to stop using the ATSS test. Ford agreed that an industrial psychologist jointly selected by all the parties would design a new selection system for apprentices. The new process would be designed to predict job success, to reduce adverse impact, and to provide feedback to the test-takers so they could appreciate where their strengths and weaknesses lay.

The settlement expanded skilled trades opportunities for African Americans at Ford. Ford agreed to place 279 of the qualified class members on to the apprentice eligibility list, and that was to redress the shortfall the EEOC determined was resulting from the ATSS. Finally, the settlement provided more than $8 million of monetary relief to the nationwide class of approximately 3,420 African American test-takers who were not selected.

It's not enough to validate a discriminatory employment test when it's designed; employers need to review their tests to assure that newer, less discriminatory alternatives are considered. Thank you.

CHAIR EARP: Thank you.

MR. ROBINSON: Good morning, ladies and gentlemen. My name is James Robinson, Sr., and I'm pleased to be here today to tell you about my testing experience.

I'm a 48-year old African American, and I've worked for Ford Motor Company since mid-November of 1996. I was hired as a manufacturing technician at the Sharonville Plant near Cincinnati, Ohio. I am also a UAW Local 863 member. I'm married with two children, a son and a daughter, who I'm always telling that life is full of opportunities, and to take advantage of everything it has to offer, and not to let anyone stop you from being whatever you desire to be.

In January of 1998, I, and hundreds of other Ford employees, took the apprenticeship test to qualify for 117 apprenticeship positions available at Ford's Sharonville and Batavia plants. In February of 1998, the UAW and the company wrote me a letter advising that unfortunately my name did not appear among those provided as qualified.

To qualify, you had to score within the top 70 percent of everyone who took the test. The test was easy. It had about 120 questions in four parts. Only those employees who requested their scores received them. I had to wait three months to get my score, but I was told I did not fall within the top 70 percent. I was not told which questions were right, or which ones were wrong. To my knowledge, no African American from the Sharonville plant, and only one or two from the Batavia plant, were selected as apprentices from the January 1998 test.

Prior to taking the test, some African Americans told me that we would not be put into the apprentice program because blacks were not accepted. This was not the first time this happened. I saw the financial and personal harm that exclusion from the apprenticeship program caused me and my African American co-workers. We each lost about a $4 an hour increase that we would have received as apprentices and journeymen.

At first I was angry, outraged, and discouraged. A lot of us felt betrayed that this type of discrimination still exists, but then I realized that we cannot allow anyone to discriminate against us. We had to stand up for what we believe, so I decided in 1998, in October, that we would file charges against Ford and the UAW. We received a determination letter from EEOC that was in our favor. It was determined that the test had a disparate impact on the charging party and on Blacks as a class.

I took the test again in 2003. I am now apprenticing as a Millwright at the Sharonville plant, and expect to earn my journeyman certificate in December of 2008. I am pleased that other qualified African Americans have joined me as an apprentice at Ford as a result of this settlement. I thank you very much for letting me share that with you today.

CHAIR EARP: Thank you. Thank you, Ms. Liles, Mr. Robinson, for your bravery.

Questions and comments from Commissioners? Vice Chair?

VICE CHAIR SILVERMAN: Thank you, Madam Chair. I just wanted to say that I think today's meeting addresses a critical topic, and I want to thank all the folks at Legal Counsel for all their hard work in arranging this meeting. All you have to do is look at the witnesses you've assembled here to know that you've put a lot of thought and hard work into this meeting; we really do appreciate it. I want to thank Carol and Rich for providing an overview of the key issues. And I also want to thank Jean and Jeff Stern for your presentations. And most importantly, for the work you do, and the work you performed on these cases.

I want to express my sincere appreciation to Ms. Liles, and to Mr. Robinson. It's such a pleasure and an honor to have you here today. We really appreciate your coming forward, and filing your charge with the Commission so that we can know about these issues, and do something about it; and coming here is just icing on the cake for us, so thank you so much.

I just have a couple of questions for Jean. In the Dial case, how many women accepted the job, and do you know - have any idea of how many are still there?

MS. KAMP: Yes. Approximately 16 of the 53 women in the total group, I believe, actually accepted the job when it was offered to them after the court's decision in the case, and I believe eight or so are still there, which is consistent with the roughly 50 percent turnover rate among other employees.

VICE CHAIR SILVERMAN: Paula, are you at Dial now?

MS. LILES: No, I'm not. I did go back after the arbitration, and it was very hard for me. I was treated very badly, so I didn't want to stay there.

VICE CHAIR SILVERMAN: But you don't regret coming forward and bringing the issue.

MS. LILES: No, I don't. I don't at all.

VICE CHAIR SILVERMAN: Mr. Stern, do you have information on the status of the 279 African American employees hired as a result of the settlement?

MR. STERN: All of the class members are, and were, incumbent Ford employees, so it's not a hiring situation.


MR. STERN: It's an apprentice program. Ford actually exceeded the 279 agreed remedial apprenticeships, and there are 282 class members who accepted offers that were made. Most of those individuals are on waiting lists until business conditions allow Ford to have more entrants into the apprentice program. The apprentice program requires about 8,000 hours of apprenticeship, if I'm not mistaken, before an individual can be certified as a journey person.

VICE CHAIR SILVERMAN: Mr. Robinson, how did you know to come to the EEOC with your issue, and what was it that finally drove you to us on this process, during this process?

MR. ROBINSON: Well, I knew something wasn't right. Whenever someone offers you a test, and then they refuse to give you your results, they're hiding something, so I wrestled with that thought for a couple of months, and then I realized that that had happened in the past. And that if we were going to change that, somebody had to take the initiative to do that, so I took it upon myself to go around the plant and ask people if they wanted to get involved in trying to fight this thing that Ford had been doing for years. And I managed to get 10 other people to go with me, and that's when we went downtown in October and filed the complaint.

VICE CHAIR SILVERMAN: So you were aware of the EEOC and what you needed to do. It was just a question of deciding in your own heart to do it, and then, of course, the leadership role that you took in the office.

MR. ROBINSON: Well, I knew there was an Ohio Civil Rights Commission.


MR. ROBINSON: And they told me that I needed to go to the Equal Employment institution. But I'm thankful that you guys were able to assist us in that.

VICE CHAIR SILVERMAN: And we're thankful that you did. Thank you.

CHAIR EARP: Commissioner?

COMMISSIONER ISHIMARU: Thank you, Madam Chair. Mr. Robinson, let me follow up on the Vice Chair's question. When you took the test itself, could you tell at that point that there was something wrong with the test, or was it only after when they wouldn't tell you what your score was that you thought something was fishy?

MR. ROBINSON: Well, as I stated, I heard people saying that the test - actually, they told me I was wasting my time because we wouldn't be put into the program. So when I took the test, I studied for it, and I had been working with my children as far as doing homework, math. And presently, I was on a job where I was using numbers, things of that nature, so when I took the test, it was easy. I felt that there was no problem at all.

I took the pretest they had, and I only missed three in the pretest. So I figured if I only missed three in the pretest, I would have no problem with the actual test. But come to find out, I did not qualify. I knew something was wrong because the test that I took, I knew I passed. I figured I didn't have no problem with any of the answers. I completed every part of it, and I think I'm fairly intelligent. And for them to tell me that I didn't qualify out of 117 people was an insult. But then when I looked into this room where they were bringing the people who did qualify, I see nothing but white people. And I just felt that we are not that dumb, excuse my expression, but I know there was some other people who took the test were just as intelligent as I am.

COMMISSIONER ISHIMARU: So you put all the pieces together, and figured out there was a big problem here.

MR. ROBINSON: Yes, I did.

COMMISSIONER ISHIMARU: Okay. Mr. Stern, when we heard about the settlement of the Ford case it made us very excited in my office because it highlights the importance of the less discriminatory alternative. Do you have any recommendations to the Commission on the types of training or resources we could provide our staff nationwide to bring more disparate impact cases involving less discriminatory alternatives? Is there something more we could do?

MR. STERN: Investigation of a testing situation, it seems to me, is very labor-intensive; it requires time and patience, and it requires expert resources from Headquarters. We were very fortunate to have an outstanding expert who was available during the investigation; who had to review and make an assessment of the validity studies; then had to review and make an assessment of the state of the art in testing, and determine if less discriminatory alternatives were there; and, finally, had to make an assessment of what the shortfall would be. That was absolutely critical, so we do need continued access to skilled expert resources.

It may be that for selected industries, one could ask any charging parties who bring charges arising from those locations whether they have been tested, either for application or promotion. That may allow us to flag cases that are more appropriate for investigation. I don't think we can just rely on charging parties identifying those issues for us, but I think that's something that could be done without a great deal of expense, or even, necessarily, training: picking out an issue, just asking that question, and following up.

COMMISSIONER ISHIMARU: Great. I see my time has expired. I want to thank the panel. I thought it was an excellent presentation, and I join my colleagues in saluting the charging parties for coming forward. It's not easy to do so, and it's not easy to hang in there throughout the whole process, so thank you for coming today. Thank you, Madam Chair.

CHAIR EARP: Thank you.

MR. STERN: Thank you.

CHAIR EARP: Commissioner Griffin?

COMMISSIONER GRIFFIN: I, too, want to thank you for coming forward. It's not easy to file a charge, sit through the trial, and go the whole distance, because we know it literally takes years to get through a process like that. It's not easy, and it sort of upturns your whole life while you're going through it, so thanks for doing it, because I think, as you said, unless you're willing to come forward and make this change, it changes for no one. So we appreciate your filing the charges and following through.

I want to say hi to Jean. Jean and I worked for Paul Igasaki when he was the Vice Chair here at the EEOC, so it's great to see you.

Jeff, in the settlement agreement in Mr. Robinson's case, you talked about how the new selection process was going to be determined by an agreed-upon industrial psychologist. Can you tell us what was developed, and how that's working today?

MR. STERN: The jointly selected expert, who you'll hear I believe in the third panel this morning, recently this past spring has done a pilot study for the proposed new selection instrument. The pilot study was favorable, and hopefully this summer, we will have results from a validation study, so that's the next phase. So we've already done the design of the proposed system. It's now been piloted, and will be subject to a validation study. The report will then be disclosed to both class counsel and the Commission, and we will be able to have input and ask any questions we have concerning the validation study.

COMMISSIONER GRIFFIN: Good. Okay. I know that the test they had was technically validated, but shouldn't the Company, and the Union for that matter, have seen exactly what Mr. Robinson saw when they gathered a group of people together who had passed the test and, lo and behold, they were all white, and they knew there were African Americans who had taken the test? I mean, shouldn't that have been a clue to somebody that something was wrong?

MR. STERN: Certainly, a substantial adverse impact is a red flag that, in our view, certainly by the mid-90s should have precipitated a stop, look, and listen situation to review the validated procedure for currency, and then looking at less discriminatory alternatives. I think most experts would probably agree that a written cognitive test is, and would be expected to have, a greater adverse impact than many other types of tests.

COMMISSIONER GRIFFIN: Okay. Thank you all very much.

CHAIR EARP: I have one question for both charging parties. Ms. Liles, you said when you returned to work, it was very difficult for you. We are always concerned about reprisal and retaliation after an employee has filed a charge. Given what you went through, and your understanding of our process and what Jean and her staff could do, is there anything that you think EEOC could have done to make your life better after the litigation?

MS. LILES: It was the actual co-workers that when I went back to work treated me very badly. It wasn't the Company at that time, it was the co-workers. They did not want me there, and why, I don't know. But they did, they treated me very badly when I went back, so I just didn't feel comfortable with spending that much time out of my day every day on the job with people that did not want me around.

CHAIR EARP: I'm very sorry about that. Mr. Robinson, did you experience a similar change in attitude either with co-workers, or the Union, or management?

MR. ROBINSON: Well, as she stated, a lot of the resentment comes from a lot of the co-workers. Most of the Caucasians or whites, they just felt that we were actually given some favor. They didn't want to accept the fact that, in my opinion, Ford was not going to spend that much money for something that they didn't feel was incorrect. Some people stopped speaking to me, people who were friends of mine. Some of them felt that once I got into the program, I was only in there because of me filing the charges, not that I belonged, but I didn't let that bother me none, because I know what I am, and what I'm about. And I know what I'm capable of, so I didn't let that bother me at all.

I found my name on the bathroom wall. We know how that is. I wish I was dumb, and somebody gave me $9 million, but that doesn't bother me none. I'm where I'm at because I belong there, and I don't let anybody take that away from me.

CHAIR EARP: Jean, Jeff, at the time, was there any discussion with management about rehabilitating the work environment, or was it a part of the monitoring afterwards? Is there anything at all that we could have done to ensure a little less hostility after the litigation?

MS. KAMP: Actually, one thing we did do in the Dial case is part of the final judgment involved that people like Paula would have Union representation right from the beginning, rather than having to go through a probationary period, which is why she was able to file her grievance, and did, in fact, win that grievance, so we can do things like that. What we can't do, I'm afraid, and what we, unfortunately, always tell people that come to us is we can't make people be nice to you. We can't do it.


MR. STERN: Madam Chair, we did include provisions in the settlement, which was hammered out during conciliation, so we had good faith by all parties. The case was not settled after contested litigation; it was resolved during the charge process, and that, I think, is a very positive way to proceed. That agreement, among other provisions, has an express non-retaliation provision and an extensive monitoring period during the term of the agreement. There was no retaliation issue that we made any determination on, so that was not something that we worked with.

CHAIR EARP: Okay. Well, thank you both, all of you very much.

MR. ROBINSON: Thank you.

MS. LILES: Thank you.

CHAIR EARP: In the interest of making sure we hear from as many stakeholders as possible, we actually have two panels that represent stakeholders and different perspectives. So let me ask Panel 2-A to come forward. Mr. Willner, we'll start with you, and I'll ask each of the panelists to just give your name and who you're with, and then proceed with your remarks.

MR. WILLNER: Thank you, and good morning, Madam Chair, and all the distinguished Commissioners. My name is Ken Willner. I'm with the firm of Paul, Hastings, Janofsky, and Walker, and I've represented employers in EEO issues, including pre-employment testing issues for 20 years. You want to hear everyone's names first, or shall I just proceed?

CHAIR EARP: Just proceed. Thank you.

MR. WILLNER: I did represent Ford Motor Company in the case that has been described this morning. I'd like to talk briefly about testing, in general, and testing litigation also very briefly. And, finally, if time permits, to talk about the Ford case a little bit.

Pre-employment testing is a practice that is, and can be, good for an employer, good for public policy, and also good for employees. It's good for employers because when a test is valid, it is scientifically shown to result in the selection of people who are more likely to do well on the job. It's also good for employers, and many employers choose to use testing, because it is an objective measure, and employers have not been deaf for the last 15 or 20 years or so when they have been hearing from the EEOC, from other enforcement agencies, and from the case law that the use of subjective criteria is, shall we say, frowned upon. So employers look for objective measures where they can, and tests are nothing if not objective measures.

Testing can also be good for public policy for the same reason, because it is an objective measure, and it does enable employers to get away from subjective decision-making processes. Testing can also be good for employees because, for the same reasons, it selects people who are going to succeed in their jobs. And, also, it is a way that employers can eliminate favoritism. We've found in our dealings with unions, for example, that unions are not opposed to testing, because it's clear why someone is selected or why they are not. They either passed the test or they did not, and it's not because someone is related to someone, or something like that. So there are some real benefits to testing.

There are also some downsides. For employers, the downside is that creating and validating a test can be very expensive. It can cost hundreds of thousands of dollars, and it can take years to do. Also, it can be less flexible than other methods of selection, so those are the downsides for employers.

On the public policy side, some tests have adverse impact, and that can be a downside from public policy. However, under Griggs and other authority, where a test is valid, and it actually predicts job performance, that adverse impact is, as a matter of law, not an issue, provided that the test is validated in a proper way, and alternatives are properly considered.

It's also, I think, something worth consideration as to whether even tests which have adverse impact have substantially less adverse impact than other means of selection, such as the subjective decision making processes which are frequently the alternative that's out there.

Testing litigation and enforcement is something that we've heard about before today, and there are a number of issues that come up frequently in litigation. For example, there's a fairly small cadre of counsel, and judges, and experts who are conversant in the subject, and we tend to run into each other over and over again. But getting outside of that group, we find there's a lot of misunderstanding of what the standards are. And the standards themselves, which are the Uniform Guidelines, are not overly helpful in that regard.


MR. WILLNER: They’re about 30 years old at this point, and in our opinion, in need of some updating to be consistent with the standards that are professionally accepted within the field.

One thing that many employers who are involved in testing perceive is that there is a sometimes knee-jerk reaction by enforcement agencies to testing where there is adverse impact. And that when a validation study has been done, that is viewed as merely something to be overcome in litigation, as opposed to a recognition of the employer's good faith efforts to comply with the law, and to come up with a testing device, which is going to get people who will succeed at the job with a minimum possible adverse impact.

And I think it's important in that connection to bear in mind that where the alternative is subjective decision making, it's not necessarily in the public's interest to drive employers in that direction by proceeding with aggressive enforcement of tests that have been validated, and where there is good professional work that was done. That's not to say there's no reason to pursue the employers that have not validated their tests, or that use them for improper reasons, but there is a perception among employers that any test with adverse impact may be pursued, regardless of whether it's been validated.

With regard to the Robinson case, as Mr. Stern mentioned, that was settled in the conciliation process, and I think this case is a good example of where the interests of employers, the EEOC, and employees align. Ford recognized that its best interest was to have a good test, a valid test, with a scoring mechanism that was going to get the best people without adverse impact, and so Ford approached EEOC to work out a resolution that was in everyone's interest, based upon a new test. I think this is a good example of how testing can work to the benefit of everyone, and that case is one good way in which to accomplish that.

CHAIR EARP: Thank you.

MR. MEHRI: Good morning, Madam Chair and Commissioners. Thank you for having me here today. My name is Cyrus Mehri. I'm at the firm of Mehri & Skalet here in Washington, D.C. My firm has had the pleasure of representing Mr. Robinson, who was on the prior panel, as well as charging parties at Alcoa and at a third company whose settlement we're going to announce in the next couple of months, so we have had the opportunity to really look at this area of testing and apprenticeship selection. There are a few highlights I'd like to bring to the Commission's attention.

First, the stakes are very, very high. Mr. Robinson talked about a $4 per hour difference between being in an unskilled position, or the skilled positions of millwrights, electricians, and various other positions. The stakes are very, very high economically.

Secondly, and he did not mention this, I want to underscore that the skilled trades have much greater job security. They're the positions that are the least likely to be downsized, and in the capricious environment that there is right now in big companies, with downsizing and so forth, that is a particularly powerful reason why people seek these jobs. But we've found in our various investigations of different companies that people of color have been all but locked out of those positions. And the root of it is not the fact that there's testing, but the kinds of testing that we saw in our investigations.

First of all, one of the recurring themes that we found was that many of these validation studies, which should be done, had excellent people working on them, but the companies themselves did not update the studies; they were not reasonably current as required by the Uniform Guidelines. The companies did not even follow the very specific and detailed instructions from the experts working on them. The experts would validate the studies conditionally, saying, okay, we'll go on board with this, but you have to do A, B, and C. And A, B, and C didn't happen. The funding wasn't provided to follow up. The specific things that needed to be done to make sure the test was used properly were not followed up on. I'm just using generalizations from different investigations; I'm not trying to pin it on any particular company.

The failure to look at alternatives, the failure to really look at less discriminatory alternatives, the trainability kind of alternative that Mr. Stern talked about, was often overlooked. Job simulation alternatives were often overlooked. It was almost exclusively hanging onto the paper-and-pencil test. In one instance, we even saw the Bennett Mechanical Test, which was a focus of the Griggs case, still being used today, which really caught our attention. So we think that testing can be done, should be done, and there's no problem with it, as long as the companies make the effort to use state-of-the-art tests and really follow up on what their experts are asking them to do.

The final point I wanted to make is that I believe this is an area where the Commission can have a great impact. First of all, the Ford case is a great example. We had, I believe, an enlightened company on the other side that was very proactive when we met with them. On the verge of trying to institute the test again, they said, look, we want to work to solve the problem. EEOC had a great process in terms of conciliation, and Jeff Stern really did an outstanding job. All combined, it was a great example of how we can work together to get a really historic result. I mean, I don't think there have ever been 270 positions created in an apprenticeship settlement like this.

But the key for the Commission to be successful is you have to have experts in-house who can study this. It's not as labor-intensive, I think, as it might be for other areas, because really, all you need is the data and an expert, and you'll be able to flag whether or not there is an issue here that needs to be followed up on.

Now, my written testimony talked about other recommendations that really try to build on Commissioner Silverman's task force on systemic enforcement. The other area to build on, and Mr. Stern, I think, touched upon this, is early detection: I would add to the questions that are routinely asked on intake whether or not people have taken a test. It could be for salaried positions, it doesn't have to be only for hourly workers, but that should routinely be in the Q&A that happens at intake. And then working with the technology provisions that are talked about in that task force report, and enhancing that. I think the Commission, with a combination of having in-house experts, making this a focal point of intake, and improving the technology and communication around the country, could have a great impact on this.

And then, finally, I think I would use the Ford example as a good model of kind of the strategic alliance between class counsel, like my firm, and our co-counsel, and the Commission. I think combined, we produce great results, and I think we can continue to do that.

CHAIR EARP: Thank you. Rae?

MS. VANN: Good morning, Madam Chair, Madam Vice Chair, Commissioners Ishimaru, Griffin, and colleagues. My name is Rae Vann. I'm General Counsel of the Equal Employment Advisory Council here in Washington, and I am delighted to appear before you this morning to discuss EEAC's perspectives on employment testing, and other selection procedures.

As both my colleagues, Mr. Willner and Mr. Mehri have mentioned, when administered properly, employment testing can be a very important tool that is effective in identifying the most qualified candidates for positions, making sure that the folks, candidates, who apply for positions have the skills and abilities that are needed to perform a particular job.

EEAC commends the Commission's efforts to explore this area. And, again, as my colleagues indicated, there is a need to develop an expertise and some best practices around employment testing and selection procedures. And as my written testimony indicates, we have some specific recommendations in that regard, but I'd like to just cover some background before I get to those recommendations.

As many of you may be aware, EEAC is comprised of over 300 of the nation's largest private sector employers. EEAC member companies are all subject to the Uniform Guidelines, and approximately 80 percent of our membership is comprised of federal government contractors, so EEAC members are very familiar with the requirements of the Uniform Guidelines, and the need to ensure the testing and other selection procedures are properly validated, are job-related, and consistent with business necessity.

They also recognize the risks, obviously, associated with testing procedures that are poorly designed, or improperly administered, including loss of efficiency and productivity, increased administrative costs, as well as exposure to large-scale systemic and class action litigation.

Accordingly, EEAC member companies certainly strive to ensure that employment tests and other selection procedures are administered in a manner that is consistent, fair, and non-discriminatory.

My testimony this morning will focus on why employment tests generally are used, as well as common challenges that employers face relating to employment testing. In addition, I'll offer some recommendations, as I indicated that we believe will advance the Commission's aim of ensuring employment tests are administered fairly, and in a non-discriminatory manner.

Why do employers test? As was mentioned earlier, employers utilize employment tests to evaluate job candidates to determine, based on objective criteria, as Mr. Willner indicated, their suitability for employment. Examples of some tests that are commonly used in the employment context include those that measure language skills, reading comprehension, verbal and/or mathematical reasoning, physical abilities, and personality characteristics.

In addition, as was mentioned earlier, many employers conduct routine background checks as part of their employee selection process in order to avoid making bad hiring decisions that could either harm business operations, or adversely affect employees or customers.

Sometimes these background checks include or reveal information pertaining to criminal conviction records. EEAC members clearly are cognizant of the dangers of relying on criminal background checks where there's no connection to the job being performed, or of applying blanket rules that categorically exclude from employment those with prior criminal convictions, so they're pretty careful to rely on that information only to the extent that it is relevant to the position for which the candidate has applied, and only insofar as is legally permitted.

As I describe in my written testimony, one EEAC member company reports that whether it ultimately will rely on criminal background checks in excluding a candidate from employment will depend on an assessment of a number of factors, including the number, type, and dates of convictions, as well as any applicable state or federal laws that either restrict its ability to rely on this information or, in some instances, require that the information be obtained and considered in the employment decision.

What are some of the practical challenges associated with testing? As was mentioned before, testing is coming under increased scrutiny by the enforcement agencies, and EEAC member companies are particularly concerned about doing the right thing, and doing a good job in so far as validating their tests, and so forth.

One particular challenge that our members have reported is dealing with enforcement staff that may not have the level of expertise that's required in order to really investigate and assess the extent to which there might be problems with a test in the way it's administered, or in so far as whether or not it's been validated properly. So that is probably the major challenge that employers have reported to us facing with respect to employment testing. And I'll stop now, as I see that my time has expired.

CHAIR EARP: Thank you. Hi, Adam.

MR. KLEIN: Yes. Hi, Madam Chair and Commissioners. Thank you for having me speak this morning. My discussion will be focused on the use of credit scores and criminal histories in terms of suitability for employment.

The topic itself is, I think, timely. It's clear that the use of credit history or credit score and criminal history is becoming more prevalent in the U.S. workforce. Statistics show that in the retail sector the use of credit scores hovers around 40 percent, meaning that 40 percent of U.S. retail employers use credit history or credit score as a factor in determining employment suitability.

The reason for it, it seems, is clear. The use of this information, frankly, is cheap and easy to obtain. That's, frankly, something that is part of our modern society. The use of electronic data has become sort of a commodity that any employer can gain ready access to.

What's interesting about the use of credit history or credit score in this context is that it was never intended for that purpose. Credit history and credit score were developed to determine suitability for the extension of credit, not for determining employment suitability, and so any of the studies that you see that show what the default rates are, or what a proper interest rate should be charged to a consumer, have nothing whatsoever to do with employment suitability.

And the problem, of course, is that the use of credit score has adverse impact. In fact, there's a study by Freddie Mac from 2000 that shows there's roughly a 2-1 impact between whites and African Americans in terms of poor credit. So you have criteria being used, credit score, that was never intended for the stated purpose of screening out applicants for employment, and adverse impact of almost 2-1. It also, likely, has adverse impact for disabled applicants and for others.

Just to make the point, there are people who have no credit score. They've not sought a credit card; they have not taken out a loan; they're new to this country. So they have no credit history or credit score, which literally has nothing whatsoever to do with suitability for employment. Yet it's being routinely used throughout the work force.

Now, there is literally, to my knowledge, no correlation between credit score, credit history, and job performance. There has not been any study, to my knowledge, that links a good credit score with better performance, or a bad credit score with poor performance. There's simply no science supporting the idea that credit score is a good predictor, or any predictor, of job performance, or of any other characteristic that has anything to do with the employment arena.

The one articulated defense that we've seen in filing charges and working with EEOC on this issue is the threat of stealing; propensity to steal is the articulated defense, and it generally comes up in the context of the paradigmatic bank teller: a situation where an employee has access to money, and so the theory is that an employee with access to money who has poor credit, perhaps, would think to steal the money to address his or her own personal circumstance. And I suggest to you that while that may seem plausible, it's simply not validated. There's no evidence, no science, to suggest that one's credit has anything at all to do with propensity to steal.

The irony, of course, is that if that person had filed for bankruptcy, it would be unlawful to discriminate based on bankruptcy filing in terms of extending a job offer to that person, but merely having poor credit leads to a different result.

What's more pernicious, perhaps, is the hidden use of credit score or credit history as a factor in the determination of employment suitability, meaning that applicants who seek employment literally don't know that credit score and credit history played a part in the employer's decision not to extend an offer. The reality is most applicants don't know why they were not selected for employment.

Frankly, it's a common experience for people who apply for a job not to be hired. There's nothing necessarily wrong with that; there are no indicia of discrimination from that fact alone, and, obviously, applicants have almost no information about what factors were considered when they sought employment and were denied. And so you have a hidden problem: a very clear pattern of using credit score and credit history for employment suitability; almost no information available to the applicant who was denied employment based on that, either in whole or in part; and literally no science, no causation, no correlation between credit score and credit history and suitability for employment. Thank you.

CHAIR EARP: Thank you. Vice Chair?

VICE CHAIR SILVERMAN: Wow, there are so many questions I have, and so little time. I want to thank all of you for your testimony, and your written testimony. I can't even -- I guess one of the questions I have: there seems to be some disagreement between the written testimony of this panel and of other panels about whether or not we need to update the Uniform Guidelines. Could you please just talk briefly to that? I'd like to hear from everybody.

MR. MEHRI: The experts I've talked to feel that the Uniform Guidelines are very well thought out, very well developed, and still very pertinent today. I think, personally, from my point of view, I think it would be risky business to get into that, opening up that can of worms. I think that a lot of effort went into that in 1978. The same issues are applicable today, so I would be very cautious about --

VICE CHAIR SILVERMAN: So you think there's enough within there for us to do our job, and for you to do your job.



MR. WILLNER: We do have a different perspective than Mr. Mehri on that subject. The Guidelines were prepared almost 30 years ago based upon the state-of-the-art of the science at the time. They refer to other professional principles that are out there, although they don't incorporate them expressly, such as the Society of Industrial Organizational Psychologists, SIOP principles, and some other guidance, as well. In the intervening 29 years, the science relating to selection devices has advanced substantially, and a lot of those advances are reflected in other professional guidelines that the experts who are preparing the selection devices, and who are representing parties in litigation, then are defending them based upon guidance from - SIOP guidance, for example, that isn't necessarily, and in many cases is not consistent with the Uniform Guidelines. And there's a real tension there in the litigation I've been involved with as between what the guidelines say, and what the professional guidance is, and what the science says.

There are some examples that are listed in the paper that I submitted having to do with, for example, synthetic validity, and validity generalization, and other areas of differences between current thinking and past thinking.

VICE CHAIR SILVERMAN: So your view is that the guidelines don't allow enough for the modern advances. What about you, Ms. Vann?

MS. VANN: It is true that the professional standards are more contemporary, if you will, have been updated more regularly; whereas, the Uniform Guidelines have not. Having said that, the Uniform Guidelines provide the fundamentals for conducting adverse impact analyses, and those fundamental legal principles have not changed, so we would urge the Commission not to over-regulate in this area. And perhaps, better utilize its resources on developing best practices, educating folks on how to apply the Uniform Guidelines, and how to work within the confines of both the legal, and the professional standards.

VICE CHAIR SILVERMAN: Providing clearer guidelines about what it is that we think is legal, and what is not?

MS. VANN: Sure. Certainly, interpretive guidance, some --yes, some interpretive guidance, stopping short of actually going in, because I agree with Mr. Mehri, that that task of going in and trying to amend or revise the Uniform Guidelines could very well open up a huge can of worms.

VICE CHAIR SILVERMAN: It seems from the testimony I've heard so far, that even where the tests are validated, then there's a question of whether or not the --how updated that is. How often should an employer be expected to update a validation, or does that depend on --

MR. MEHRI: The Uniform Guidelines have a phrase in there about that they should be reasonably current. The experts I've talked to said that they look at it as no more than five years as reasonably current.

VICE CHAIR SILVERMAN: Mr. Klein, you talked a lot about the credit issue. And it was my understanding that under the Fair Credit Reporting Act, which, of course, we don't enforce, and I'm not trying to shirk our duties in any way here, but people will know when their credit is checked.


VICE CHAIR SILVERMAN: Can you talk to us about that?

MR. KLEIN: There's a disclosure requirement with the Fair Credit Reporting Act. There are two problems with it. One, employers don't comply with it, because they don't appreciate that denying employment based on credit comes under that Act. The second is, the disclosure itself is underwhelming. It doesn't provide information that would reasonably lead an applicant to come to the EEOC and think that because of poor credit they were discriminated against because of race or disability status. One doesn't flow from the other.

VICE CHAIR SILVERMAN: It does tell them if an employer is following it, though, they would know that their credit was checked.


VICE CHAIR SILVERMAN: They may not know that they were turned down because of that.

MR. KLEIN: Right.

VICE CHAIR SILVERMAN: And even if they knew that they were turned down because of it, they may not make that leap, is what you're saying.

MR. KLEIN: That's right. There's no reason for one to think that because of their race, they were the victim of discrimination, when they were denied employment based on their credit. That's information that's not typically within the possession of a typical applicant.

VICE CHAIR SILVERMAN: Okay. I see I'm out of time.

CHAIR EARP: Commissioner Ishimaru?

COMMISSIONER ISHIMARU: Thank you, Madam Chair. I, again, want to follow up on the Vice Chair's question about the mechanics of the credit check. Does the employer have to ask permission from the applicant, or is it notice that they're going to check?

MR. KLEIN: They need to ask permission. And typically, the way this happens -- and I've gone and sought employment applications from various retailers to see how this works -- there's a general employment form, which may run pages, and at the bottom, right above where you sign, it says, “you are giving permission to Employer X to do a credit check,” so there isn't really any choice. If you would like to seek employment, you have to sign the form. And the form gives you a one-liner. It doesn't explain anything beyond that. And that's essentially how it's done.

COMMISSIONER ISHIMARU: I found your statement, and especially your written statement, to be compelling. And it would strike me that it's almost a per se violation, that the use of credit checks without a validation study, without supporting documentation being used on such a widespread basis, causing a disparate impact, isn't justified. Is there something, or do you have a recommendation for the EEOC as to what we should be doing on this?

MR. KLEIN: I think there are two answers to that question. I do think it's imperative that the EEOC issue guidance on this point. The EEOC years ago issued guidance on the use of criminal conviction or criminal records in terms of employment suitability, and it has an impact. But here, there really isn't any guidance. And to be candid, I'm not sure where the idea of using credit scores for employment suitability determination came from. Where this idea came from is something that employers, perhaps, can answer. And, also, the EEOC should enforce Title VII. It should aggressively pursue charges that are filed, or issue Commissioner's charges where credit is being used.

You know, there's a simple way to find out whether employers are using credit in determining suitability. If you look at the employment applications, they'll give a Fair Credit Reporting disclosure. Well, why would they be asking for that information if they're not using it? It wouldn't be very difficult to figure that out.

VICE CHAIR SILVERMAN: Is that the only disclosure? I'm sorry? Is it the bottom of the application, is that it in complying with the Fair Credit Reporting?

MR. KLEIN: Well, there's just a requirement that the applicant be told that their credit is being checked, and that they agree to that. And a lot of times, it's on the employment application itself.

COMMISSIONER ISHIMARU: Or there's a separate form. I remember filling out a separate form and just signing, and say go ahead and check.

MR. KLEIN: Right.

COMMISSIONER ISHIMARU: Sure enough, it happened. Are there alternatives that employers can use? In the litigation you've brought, this is quick, it's easy, it's cheap.

MR. KLEIN: Right, quick, easy, and cheap.

COMMISSIONER ISHIMARU: Are there other things employers can do that are similar, quick, easy, and cheap, that would be more helpful to them that you've come up with in your various pieces of litigation?

MR. KLEIN: The short answer is no, but I could see how that could happen. The reality is, what are they asking for, what are they looking for? They're looking for whether there's some characteristic or behavior of the applicant that they feel has some relation to the performance of the job, so propensity to steal is an example. Well, if they have a criminal conviction of theft, that would, seemingly, be more relevant. So the question is, what is it that's causing the poor credit? Was it disability status, was it bankruptcy filing, was it just a lack of credit altogether, meaning they're new to this country and have not established credit? What is it that's causing the credit score to go down, or not to be substantial? And I would suggest that it would be very difficult for employers to even use that as a criterion for eligibility for employment. Again, it doesn't seemingly have much to do, or anything to do, with employment suitability for most jobs in the U.S. workforce.

COMMISSIONER ISHIMARU: Mr. Mehri, in your litigation over the years, how typical is it to come up with an alternative selection device that both reduces adverse impact, and meets the employer's need of needing some sort of selection device that is required for their business?

MR. MEHRI: It's a very achievable goal. You'll have your last panel today, you'll hear from Dr. Lundquist and Dr. Outtz. They'll be able to address that. I've seen them do this work in the past. I've seen other experts do it. It can be done. It's not a pie-in-the-sky thing. It's a very achievable goal, and I think there is a lot of new literature, a lot of new mechanisms that have come up with job simulation approaches, trainability approaches, conscientious - testing for conscientiousness doesn't have adverse impact. There are a lot of good tools that are out there that can be used.

I haven't seen anyone completely jettison the paper and pencil test, but they've supplemented it with many other things that are more job-related, and reduce adverse impact.

COMMISSIONER ISHIMARU: And I would imagine in the cases you've seen, that businesses that may not have the counsel of Mr. Willner, may pull a test off the internet, may get something off the shelf that may not be related to the job in question. How often do you see that happening, where someone pulls a test, and this measures - this was designed for Purpose X, but they're using it for Purpose Y?

MR. MEHRI: We did see that. Like I said, the Bennett Mechanical Test -- you can actually get that online. And one of the things that reminds me of is that sometimes the questions are so available that there's a security issue, as well. And one of the things about having an individualized test is you have much less risk that the applicant, or the workers applying for apprenticeship programs, will know the questions in advance.

COMMISSIONER ISHIMARU: Okay. Thank you. Madam Chair, I hope, like the Vice Chair, that we'll have more time to talk about this at another forum, because this is a fascinating panel, and maybe we can get folks to come back.

CHAIR EARP: It is, I agree. Commissioner Griffin, before I give the floor to you - Mr. Mehri, did I understand you to say that testing for conscientiousness would be a reasonable alternative to some of the tests currently available?

MR. MEHRI: As part of the selection procedure, yes. In other words, there's multiple components to a test, and that kind of testing, I understand from talking to experts, does not have as much adverse impact as other - in fact, doesn't have adverse impact, is my understanding.

CHAIR EARP: Really? That's interesting, because I would think that testing for something like conscientiousness might have a cultural component that would, for some groups, have an adverse impact.

MR. MEHRI: Well, I will defer to Panel III on that.


MR. MEHRI: Because I think they can address that, but that's my understanding.

CHAIR EARP: Okay. Good enough. Sorry. Commissioner Griffin.

COMMISSIONER GRIFFIN: Should we wait until Panel III to ask them what that actually is?


COMMISSIONER GRIFFIN: There's a couple of us up here going what is it? I’ve got to wait until then?

MR. MEHRI: I would wait for them.

COMMISSIONER GRIFFIN: Well, I want to thank you all. I want to really thank Adam Klein for bringing up the issue of no credit history, because this is a huge problem, especially for people with disabilities, and I'm sure people that are new to this country. You can't make a living on benefits, but people who are living on Social Security benefits are routinely denied credit because of their status. And so when they do go to get a job or anything like that, that very fact is a barrier. And, again, you don't know, even though you sign that thing.

I like your idea about going out and collecting applications, and taking a look at them. Are you aware of any court cases where credit checks have actually been upheld as a valid selection criterion?

MR. KLEIN: No. In fact, the truth is there's not been much litigation on this topic, generally. I think it's a relatively new phenomenon. We’ve filed a number of class charges with EEOC. The EEOC is actively pursuing them. And frankly, the stories are compelling. They make no sense, literally, when you hear the facts, hear the stories of why people were denied employment, you're dumbfounded by the explanation. But having said that, there's not been litigation on this topic, to my knowledge.

COMMISSIONER GRIFFIN: Okay. Mr. Mehri, when you --in your statement you make several suggestions about steps that we can take to identify situations where paper and pencil tests are used to exclude minorities, and other applicants. Are there any specific industries that you think have a problem with the improper use of tests?

MR. MEHRI: Well, some industries don't have paper and pencil tests at all, so I think the question is really what industry -- I would rephrase the question a little bit in terms of what industries are using them and, therefore, more likely to have issues come up. The automotive industry, I think the aerospace industry, the telecommunications industry -- there are a number of industries which have these kinds of tests. And as I was trying to say earlier, I wouldn't say having a test or a selection procedure in written form is per se a problem.


MR. MEHRI: I think the question is getting the data --

COMMISSIONER GRIFFIN: Having your tests, and then looking at the data.

MR. MEHRI: And having your in-house experts. We were fortunate, in both Alcoa and Ford, that the EEOC had in-house experts, industrial psychologists who also had statistical backgrounds, who looked at it. And my concern is whether or not that area within the Commission has been staffed fully, because I understand there's been some turnover there. One person left, I think, on disability, another person retired, so that's my number one recommendation to you: go back and visit that area, and make sure you have industrial psychologists, the statistician-type of experts, in-house for both what I was talking about, and what Mr. Klein's talking about. Ultimately, you need to have those kinds of skill sets within the Commission to effectively enforce this.

COMMISSIONER GRIFFIN: And is it just paper and pencil, or are there other tests that you would say here's an industry using this type of test, and here's another suggestion, look at their data.

MR. MEHRI: The ones that we've looked at have been paper and pencil, and we've done it mostly for hourly workers, but there are other contexts. In the telecommunications industry you can have hourly workers who are trying to go from hourly to management, as the cases I highlighted today were going from unskilled to skilled positions. But there are also contexts of going from hourly to management. I have not seen much of it in the salaried workforce, going from, let's say, one pay grade to another pay grade within salaried. I have not seen that. I have seen it hourly to management, or unskilled to skilled.

COMMISSIONER GRIFFIN: Okay. Thank you. I have like a second left on my yellow light. I've got a minute. Mr. Willner, you talk about updating the guidelines to reflect new research showing that construct validity and associated approaches are now fully developed and useful. What are the associated approaches that you talk about?

MR. WILLNER: For example, there are a variety of different sub-headings, I suppose, but the research since the guidelines has addressed, for example, looking at validity in multiple locations, or multiple instances, and dealing with whether validity has to be location-specific, or necessarily employer-specific. Sometimes you have small samples, small populations with one location or one employer, and sometimes you can get a much better view, actually, of validity by looking at a larger sample that might require looking at multiple locations, or multiple employers. You can get a better sense for how valid a test actually is with a larger population. And that's one of the things that could be improved about the guidelines: their emphasis on local validation. They give less flexibility in how tests are validated, as compared to what the current professional guidance is.

COMMISSIONER GRIFFIN: Mr. Mehri, do you want to add anything to that?

MR. MEHRI: The issue, at least the way I understood it, that Mr. Willner was talking about -- applicability, or generalization of testing from one location to another -- is, again, an area where experts have different opinions. What I've seen is a great deal of caution from experts about unduly generalizing from one context to another, and so the best practice is to actually do it for the location, or for the use it's going to be deployed for in a company.

COMMISSIONER GRIFFIN: Okay. Thank you all very much.

CHAIR EARP: Ms. Vann, you mentioned, regarding arrest and conviction records, that the companies that are part of EEAC are aware of number, type, and date, but does EEAC have a position on the use of credit checks?

MS. VANN: In speaking with many of our member company representatives, it became pretty apparent to me that the practice, the use of credit checks as a screening tool, is not a widespread practice among EEAC member companies, for many of the reasons that Mr. Klein discussed. It's very difficult to come up with a job-relatedness argument based simply on a credit check, and I think that has been one of the concerns that's been expressed to us by some member companies, insofar as justifying their hesitancy, or their tendency not to go there. Really, it's a job-relatedness issue, and whether or not you can actually validate that sort of thing.

CHAIR EARP: Mr. Klein, do you find the use of credit checks more prevalent in one industry over another?

MR. KLEIN: It seems to focus on some relationship to financial matters. We have a charge before the Commission now, Lisa Bailey, the charging party, against Harvard University. She was in the on-line development office, she was a temp employee for five months, and they asked her to apply for the position full-time, did a credit check and determined that she was not suitable for the job she had been performing for five months. So that's a typical fact pattern. In fact, any time I have a discussion about this topic, it's usually a question of dealing with money or some aspect of propensity to steal as the articulated reason that credit checks are used. And I don't see how that's been validated. There's been no evidence, to my knowledge, that suggests that it has.

CHAIR EARP: Thank you very much to the panelists.

MR. KLEIN: Thank you.

MR. MEHRI: Thank you.

CHAIR EARP: Before we invite Panel IIB up, we're going to take a 10-minute break.

(Whereupon, a short recess was taken)

CHAIR EARP: We'd like to reconvene, if we can, please. Continuing with the perspective of stakeholders, Panel IIB has been seated. Ms. Arent, we'll start with you, have you introduce yourself and give your remarks.

MS. ARENT: Good morning, Madam Chair, Vice Chair, and Commissioners. I'm honored to have the opportunity to speak to you about the impact of employer testing and screening on people with disabilities. My name is Shereen Arent. I'm the Managing Director of Legal Advocacy at the American Diabetes Association. And though I'll focus on examples concerning diabetes, what I'm saying will be generally applicable to the disability community.

There's two striking differences about how testing and screening impact people with disabilities, compared to what we've been hearing about today. The first is that much of the discrimination that people with disabilities face is explicit. The person is told because of this medical condition, you cannot have this job.

The other thing that's striking is how very few resources are used to justify that explicit discrimination. Rather, people with disabilities are denied employment based on a mish-mash of poor science and an extraordinarily brief, if any, assessment of how that disability affects that person for that job.

I'm going to focus first on the pre-employment physicals, and when safety concerns are raised. That's supposed to be an individual assessment relying on the most current medical knowledge, and the best available objective evidence. What we find, instead, are things that range from de jure, blanket bans. That's what Jeff Kapche faced when he sought to be a police officer in San Antonio and was told you use insulin to control your diabetes, you need not apply.

I'm hoping that is somewhat in the past, but not really; those things still exist. Yesterday's Boston Globe talks about a Massachusetts regulation which prohibits anyone with an insulin pump from obtaining any job in law enforcement. Beyond the de jure blanket bans, we find de facto blanket bans. That means the medical science and the assessment are so slipshod that anyone with that medical condition is not going to make it through the individual assessment. That's what happened to Gary Branham when he sought a law enforcement position in the IRS. There was simply no one who understood diabetes making that assessment; no one with diabetes would have made it through that individual assessment.

The lack of scientific basis is widespread. Gilberto Wise was working successfully in the United States Marshals Service until he came up with a hemoglobin A1C test result of over 8. While I don't have time to explain what that test is, medical science people who understand diabetes will tell you that it doesn't have anything to do with whether he could safely do the job, but employers often want that clear cut-off, one test, the person is or isn't safe. It just isn't medically valid.

And we even get to the absurd. Rudy Rodriguez wanted a job in a factory. He’d been doing it successfully. Based on a urine test, a test which was antiquated decades ago, the doctor who had been contracted with ConAgra said, “There's no job this man can safely do, even in a padded room,” based on a test which is, frankly, laughable within the diabetes community.

What all of these folks have in common is that they were evaluated without current medical knowledge, and no looking at what that person could do in that job. What's also common among these people is that they successfully fought back, and were eventually determined to be able to do these jobs. But what we had are years, sometimes up to a decade of expensive litigation.

The hallmark of successful individual evaluation is evaluation of the person for the job, bearing in mind that most jobs don't have a safety component; it is the expertise of occupational medicine and of experts in the relevant medical area working together, including the expertise of the treating physician. And then there often is no easy one-step test, which employers often want to find. The solution, then, is a collaboration between occupational medicine experts, experts in the employment field, and advocates to develop protocols which can be put in place ahead of time, rather than all of this expensive and time-consuming litigation. These protocols can easily be implemented, and they really are not that difficult.

I've given several examples in my written testimony, some of which come as a result of litigation. One, which has happened even since I wrote my testimony, is a collaboration between the American Diabetes Association and the American College of Occupational and Environmental Medicine. We've worked together to come up with a protocol for people with diabetes in law enforcement which employers can use ahead of time, rather than be involved in litigation.

There are a number of other tests, some of which have been talked about before, which have an adverse impact on people with disabilities. We spoke earlier about the Minnesota Multiphasic Personality Inventory; psychological testing often ferrets out psychological and mental disabilities at a pre-offer stage when that is not --when it's simply not available to the employer, but yet these questions are being asked.

A final barrier I'd like to mention, and we talked about that with Jean Kamp and Paula Liles's case, are screening devices which look at physical prowess, strength, and agility. These have a great impact on people with disabilities, and often are not job-related or consistent with business necessity.

There are a number of proposals for EEOC action that I have set out. The first is really redoubling the Commission's efforts to educate employers about their obligations to individually assess people with disabilities, and to provide guidance on how to construct protocols for individual assessment.

I also have some other, and I see I'm out of time, but a few other recommendations. I do think that factual research and systemic litigation in the area I focused on, which is the employment physical, would be very useful. And last, I would strongly urge the Commission to become actively involved when other federal agencies are setting employment standards. An example that's very current: right now the Federal Motor Carrier Safety Administration is working with its Medical Review Board to review 15 different medical conditions, and the employment of people with various disabilities in commercial driving. I think it's urgent that the Commission become involved in that, and help the Federal Motor Carrier Safety Administration to look beyond meta-analysis and group-based studies to the individual assessment and medical expertise of the community, so that the standards the Federal Motor Carrier Safety Administration ultimately comes up with are based upon the goals of the ADA: to avoid stereotyping and to look at individual assessment of people with disabilities. Thank you very much.

CHAIR EARP: Thank you. Mr. Ashe.

MR. ASHE: Good morning. I am Lawrence Ashe of Ashe, Rafuse & Hill in Atlanta, Georgia. I listened with interest to my friend Cyrus Mehri's approach and views on some things, and I'd like to provide the other side of several of them.


MR. ASHE: First, let me note that we agree on the fact that the EEOC could make good use of additional experts in industrial psychology on its staff. And I would like to remind the Commission of what is arguably its least used provision in Title VII, Section 705(g)(3): "The Commission shall have power to furnish to persons subject to this sub-chapter such technical assistance as they may request to further their compliance with this sub-chapter or an order issued thereunder."

The times in which I've written over the years to the Commission seeking such technical assistance, I either get no response or one which says we don't have funding for that, write your member of Congress. Nonetheless, in terms of allocation of your resources, I think it's an ounce of prevention versus a pound of cure -- the pound of cure being litigation -- and it could be put to good use.

The OFCCP, for example, when there are tests used on a large-scale basis by members of a particular industry -- for instance, the electric utility industry uses Edison Electric Institute's massively validated tests -- has reviewed them and approved the validation, not the use by a particular member, but approved the validation nationally, so they don't have to keep doing it over, and over, and over again. They issued a directive to that effect, which I could share with the Commission, if you're interested.

I would --also, OFCCP has a, obviously, heightened interest in testing these days. I've seen more of those cases in the last two years than I've seen in the preceding 10 or 15. And I would suggest coordination between the two of you could be beneficial. Justice probably has enough on its plate these days, so that --


MR. ASHE: One of Cyrus's views was that - he said, "The experts I've talked to say that the guidelines are just fine and totally current." I'd like to know who they are. They aren't either Jim Outtz or Kathleen Lundquist, who will testify here shortly. I wrote Jim a note, and I got his permission to read this. "Jim, do you think" - and he's primarily a plaintiff's expert. "Do you think the Uniform Guidelines are current professionally today?” “Some ways yes, some ways no.”

“So should be reviewed and updated, where appropriate?”


Start with the fact that the guidelines adopted the APA standards in effect in 1978. Those have been revised twice, quite substantially. As far as I know, you haven't issued any guidance as to whether that means it's the pre-'78 APA standards that are incorporated, or do we sort of infer that it's each successive version of it, even though you hadn't reviewed them officially.

In addition, SIOP did not have principles at that time, and they are the most relevant. I think most would agree they are the single most relevant document for interpretation of professional standards in that area, but that's just an easy example. Mr. Willner gave some others.

I don't know of a single Ph.D. industrial psychologist who thinks that they are fully current. Now what changes ought to be made, you can get some argument about. And, indeed, politically I don't think it's likely to happen in the next 21 months, anyway. I would note for the record that we've done reading level difficulty analyses of the Uniform Guidelines. They come out at the 21st grade level - excuse me - the 19th grade level, which, coincidentally, is college plus law school or Ph.D. They are an improvement over the preceding EEOC guidelines, which came out at the 25th grade level, so that is progress.

In my written remarks I have listed some thoughts on performance appraisals, which are the most used test, as far as I know, and which are a test under the guidelines.

On job analysis: that would not be for low-population jobs; you can only practically, economically do job analyses for larger-population jobs. And they should be audited for impact. My experience has been they almost always, in large data samples, have impact against minorities, sometimes against females; sometimes there's a different distribution for females. I've seen situations with large data samples where there's not average adverse impact, average rating impact, but there's a clustering of the female ratings towards the middle, where the males are more likely to get rated either awful or superstar. That has implications for rapid advancement, or rapid termination, as the case may be.

In the age area, and, of course, we now have adverse impact there, my experience has been that the impact starts occurring around age 47 or 48 and then tends to evaporate around age 60. I don't know whether that's because most people take early retirement or there's not enough data set there or not.

I see the light’s on, but I'll save the rest of my time for rebuttal. Thank you.


MR. ALVAREZ: Well, you're not going to get argument from me.


MR. ALVAREZ: I'm Fred Alvarez, Wilson, Sonsini in Palo Alto. It's a treat to return to these hallowed halls, though, I'll tell you that. And I salute you for digging into this very complicated issue. I guess my principal feeling is that I'm relieved that you have to make sense of this, and I don't.


MR. ALVAREZ: My basic assignment was to give you, at least, my perspective on the practical advice that's available to employers who are considering testing, and to suggest something about best practices. I've submitted a statement, which I know you've looked at, so I won't repeat that.

My message, I think, is a simple one. It's that employers who are contemplating using objective skill testing are facing what I describe as a rock and three hard places. The rock is the guidelines, which you don't need to hear anything more from me about. The hard places are really more of a continuum, and I would encourage you to look at it as a continuum. At one end of the continuum is using an armor-plated state-of-the-art, super-validated test, and then hoping that another expert doesn't come along in litigation who will find the seams in the armor, or think of another alternative, which is what these cases tend to be. That's at one end of the continuum.

At the other end of the continuum is to do nothing at all, just to make purely hiring on a hunch kind of employment decisions. And I know there are cases that attack that, but I'm not saying - I don't think that's a very good place either. We know what happens when there's excessive subjectivity. It's a place for preconceptions and stereotypes.

I'm suggesting to you that there is a murky middle, in which employers seek to use objective skill-based tests to make objective decisions. It's a kind of a hazy place, because there's no regs that tell you how to do that. Oftentimes, they're piece-of-the-job-type tests, and the employers have to sort of rely on their ability to prove that it's job-related for the position in question and consistent with business necessity, which is a question of fact.

My message to you is I think that people in that middle space could use your help in getting some guidance as to how to do that, if you're not at both ends of the spectrum, either with an armor-plated validity study, or purely subjective hiring.

I don't want my point to be somehow a criticism of the testing profession. They do great work. They're very valuable. The tests they develop are super useful, but my point is simpler. It's when you think of piece of the job-type testing and validation, why does it have to be so hard? I mean, if you can ask people whether they can type to be a typist, why can't you give them a typing test? If you can ask them whether they can drive before you hire them to drive a truck, why can't you give them a driving test? If you can look at a writing sample for a lawyer, why can't you ask him to write a paragraph? I mean, there have to be some species of tests that are more accessible to people.

I understand adverse impact, I understand Griggs. I believe in all of that, but my point is that we're so suspicious of tests that we're making them too hard to use. Think of, we assess motive all the time. You get 80,000 charges a year, you're trying to figure out what the motive of the employer was. We don't have to hire a psychoanalyst to make that decision, we look at common sense. We look at the facts, the circumstances. And I think tests are objective ways to try to find qualified people without regard to race, national origin, disability. We ought to try to find a way to make that process more accessible. We should engage the IO profession to help us come up with easier to do validation for tests that seem to be very distinctly job-related.

I just think that the more feasible alternatives that we have and that the Commission helps us develop to do sort of piece-of-the-job tests will encourage employers to stop hiring on a hunch, and start to be more focused on the requirements of the job. And I'd urge the Commission to help us find alternatives to these two polar extremes, and let people make sensible use of sensible tests, and have it be done in ways that even I could understand, not having a Ph.D. And I think the Commission could really help with that process. So that's my message to you, and I appreciate the opportunity to be here.

CHAIR EARP: Thank you. Vice Chair.

VICE CHAIR SILVERMAN: Mr. Alvarez, in your written testimony, you suggested that an employer should be permitted to develop job-related tests or purchase professionally developed off-the-shelf tests without specific validation, and to defend those tests under Section 703(h) of Title VII. How does 703(h), we didn't really talk about that much today, work with Section 703(k), the disparate impact section? Where there's adverse impact, does an employer still have to show job-relatedness and business necessity, and could a plaintiff still prevail by showing a less discriminatory alternative?

MR. ALVAREZ: I think the answer to all of those is yes. I mean, the adverse impact standard totally applies. I have no question with it. My only point in raising 703(h) was, I think there should be some guidance associated with 703(h), because the statute itself contemplates skills testing, as long as the skills test is not used as a subterfuge for discrimination. And I guess the way I put the two together is I'd say if you can put together a factual proof that that test is job-related to the position in question and consistent with business necessity, you ought to be able to prevail in that case. But that piece of Title VII, which isn't even referenced in the guidelines, hasn't been fully developed, and I just think it's an area in which you might consider giving some guidance as to what a professionally developed skills test might be that would be at some level less than one of these heavily scientific validation studies.

VICE CHAIR SILVERMAN: So what you're suggesting or recommending to the Commission is both that we have to develop some guidance in this area and, also, that you believe the Uniform Guidelines themselves should be updated, as well. We need both, in your opinion?

MR. ALVAREZ: I guess I'd say yes, although I think that the guidance is more important than the new regulations. But the regulations were written a long time ago.


MR. ALVAREZ: And a lot has changed since then.

VICE CHAIR SILVERMAN: Mr. Ashe, based on your experience, do you see differences in how performance appraisals are conducted or used across industry and job categories?

MR. ASHE: Yes. The higher the level of job, to the extent it occurs, the more subjective the performance appraisals tend to be. I do not see uniformity that necessarily larger employers do a better job with performance appraisals than smaller employers, nor do I even see that law firms who have a lot of people practicing employment law necessarily do a good job.

VICE CHAIR SILVERMAN: We instruct. We don't do - employment lawyers.

MR. ASHE: There are still large law firms that, for example, do not give their associates copies of their performance appraisals, which I consider mind boggling. Psychologists will tell you that we tend to hear what we're listening for, we listen in a maximum of 15 second bursts. Also, if you're a partner relying upon some hardworking associate to prop you up, even though they may have need for improvement, you're going to sugar coat it if you're giving it to him orally in ordinary human events. Also, very few law firms give training to the partners in doing performance ratings, so things like recency tend to swamp the ratings. In other words, what did you do right or wrong the last three weeks?

VICE CHAIR SILVERMAN: Ms. Arent, again, I'm focused on what EEOC can do here as we move forward. We've issued a guidance, as you know, on medical inquiries under the ADA and issued specific guidance with regard to diabetes. Do you feel that these documents go far enough in addressing certain areas, or should our focus be more on outreach?

MS. ARENT: I think both of those guidance documents are very useful and very helpful. I think where we don't quite --where we could use further guidance is there seems to be a tendency to say you have a doctor involved, for example, and that's enough, and not really going beyond that to help employers understand that expertise is needed in the medical condition. Again, this is before an explicit disqualification because of a disability, so I think that sort of guidance would be useful.

Another sort of guidance that I think would be useful internally for investigators is to help them also understand what they're looking for, in terms of whether there was truly a valid medical assessment made. I think both employers and internal EEOC could use that. I also think that guidance for employers on how to construct valid tests - there's been a lot of discussion about the sort of complex validity studies that are done in terms of testing under Title VII, and I think employers really don't have ideas on how to set up a protocol which would really do that kind of evaluation, which I think would then be valid, so they go with what's the quick cut-off, what's a quick fix? Does this person have an M.D.? That's enough, move on. And I think that would be useful from the EEOC.


CHAIR EARP: Commissioner.

COMMISSIONER ISHIMARU: Thank you, Madam Chair. I want to welcome Secretary Alvarez back to Washington from the Promised Land in California, and wanted also to note Mr. Ashe, who we do a lot of work with through the ABA, and both Secretary Alvarez and Mr. Ashe have been very helpful to me, personally, and I wanted to welcome you both here.

Mr. Alvarez, you talked about typing tests and driving tests, and I walk away with the impression that employers can't do that because of a problem with the test. Is that true for truck drivers and typists, that they can't --

MR. ALVAREZ: Well, the theory would be that you have a test that could have an adverse impact, and you'll have to validate it. And, presumably, you could hire an expert to do all the study associated with doing it, and another expert to say well, you could do it a different way, and you'd end up with a battle of experts, even on a driving test.

COMMISSIONER ISHIMARU: But has that happened in --

MR. ALVAREZ: I'm not aware of any cases.


MR. ALVAREZ: But I'm just telling you when you counsel an employer who wants to do a test, that's what you've got to tell them; that, theoretically, they'd have to validate that test under the Uniform Guidelines, and any other permutations of the sort of piece-of-the-job test. That's the extreme example, but my point is that even something like that, theoretically, would have to be --it's a selection procedure under the guidelines. Where's the validation study, where's the data, what are the alternatives?

COMMISSIONER ISHIMARU: Yes, yes, but you don't know of instances where --

MR. ALVAREZ: I'm not aware of any.

COMMISSIONER ISHIMARU: --where people have been prohibited from administering a driving test or a typing test for a job that would entail that sort of performance.

MR. ALVAREZ: Not that I'm aware of.

COMMISSIONER ISHIMARU: Okay. Mr. Ashe, could you talk to me some more about performance appraisals and give me some specific examples of how they may come up under an adverse impact theory?

MR. ASHE: You can - even for the ones that aren't scored, that use adjectives - you can assign numerical scores to the various ratings. And you can then see if they have adverse impact against a particular protected group. And if you have a very large sample in the age area, I wouldn't do just under 40 and over 40. For the reasons previously stated, I'd break it down at least into the 40s and 50s, and if you have a very large sample, maybe even five-year increments.
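As a rough illustration of the analysis Mr. Ashe describes - assigning numerical scores to adjective ratings and then checking for impact by age band - the sketch below converts ratings to numbers and compares the rate of high ratings across age brackets. The rating scale, the brackets, and the four-fifths comparison are illustrative assumptions, not anything taken from the testimony:

```python
# Illustrative sketch: score adjective appraisal ratings numerically and
# compare "high rating" rates across age bands (scale and bands are assumed).
RATING_SCORES = {"unsatisfactory": 1, "needs improvement": 2,
                 "meets expectations": 3, "exceeds expectations": 4,
                 "outstanding": 5}

def age_band(age):
    if age < 40: return "under 40"
    if age < 50: return "40s"
    if age < 60: return "50s"
    return "60+"

def high_rating_rates(appraisals, threshold=4):
    """appraisals: list of (age, adjective_rating) pairs.
    Returns, per age band, the share of employees rated at or above threshold."""
    counts, highs = {}, {}
    for age, rating in appraisals:
        band = age_band(age)
        counts[band] = counts.get(band, 0) + 1
        if RATING_SCORES[rating.lower()] >= threshold:
            highs[band] = highs.get(band, 0) + 1
    return {band: highs.get(band, 0) / counts[band] for band in counts}

def impact_ratios(rates, reference="under 40"):
    """Ratio of each band's high-rating rate to the reference band's rate;
    values below 0.8 would be flagged under the four-fifths rule of thumb."""
    ref = rates[reference]
    return {band: rate / ref for band, rate in rates.items() if band != reference}
```

With a very large sample, the same comparison could be run in five-year increments, as Mr. Ashe suggests, simply by narrowing the bands.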

They come up because performance appraisals are typically used to defend differences in promotion rates or differences in pay, and so the plaintiffs or the government does an analysis of pay by gender, age, or race, or whatever, or an analysis of promotion rates, and they discover adverse impact. And the employer says well, we base it on our performance appraisals, that shifts the burden to the employer to defend their performance appraisal system. And I've listed in my written remarks what I regard as some best practices, but I think they're one of the most litigated tests that currently exist today, and they're ones for which there is not a suitable alternative that I'm aware of. And, indeed, although Mr. Mehri assures us that they're a very achievable goal in response to your question, he didn't cite any cases, and I'm unaware of any case law that says the employer should have adopted X instead of the one they're using, if the one they're using was validated. In 40 years, I've never had anybody establish a less adverse, equally useful alternative in a case.

COMMISSIONER ISHIMARU: But could the performance appraisal be, say, structured so the same things are looked at and analyzed?

MR. ASHE: Yes. I mean, the major problem I see with performance appraisals in application is that the raters aren't trained. I mean, that would include, of course, EEO training, and training as I've suggested. It's an annual appraisal, let's update it from the whole year, not just the last three weeks. I mean, when I'm dealing with associates, I keep an annual file, and I put some work of theirs that I've edited and keep it in there, and other information, so that when I fill out a performance appraisal, I'm in a position to talk about the whole year and not just what I remember in my increasingly short shelf life.


MR. ASHE: There are clearly best practices, and that's an area - that and job analysis are two areas where the Commission, as far as I know, has never provided any guidance on best practices. And I think it would be useful to the community that you regulate and serve to do so.

COMMISSIONER ISHIMARU: Thank you. If you'll permit me one last comment, Madam Chair. I thought, on Ms. Arent's testimony, the continuing exclusion of people with diabetes and people with disabilities from safety-related positions and governmental jobs continues to be troubling. And I don't know how we get at that, we at the EEOC, but people are being excluded per se from these jobs. I think we need to think about how we can get at it in a better sense. And I also was very intrigued by the whole MMPI testing issue, and that being a screen to get at mental disability. Again, I hope we have a chance to look at that some more. Thank you, Madam Chair.

CHAIR EARP: Commissioner Griffin.

COMMISSIONER GRIFFIN: I have actually been sitting up here all morning, and actually thinking in my head, does any of this apply to the federal sector, and then Shereen actually testified about the Federal Motor Carrier Safety Administration. So could you tell us a little bit more about what's going on there? Here's a current situation that we probably should have some knowledge about.

MS. ARENT: Yes. As a result of SAFETEA-LU, which is legislation passed at the end of 2005, the Federal Motor Carrier Safety Administration was tasked with developing a medical review board, which is to look at 15 different medical conditions and reassess their standards. As you know, right now, the commercial driving standards range from epilepsy, which is a blanket ban, to diabetes, which is a blanket ban on the books, but after years of work we now have an exemption program which puts people through an individual assessment process, to other conditions that probably should be regulated, but which aren't. And so, the medical review board was put together to look at this, mostly made up of occupational medicine doctors. There's four people on that panel, and then they set up a diabetes expert panel, and various expert panels.

What my concern is, is that the overall view of this seems to be let's look at group studies to see how groups apply, and if there's any sort of higher risk, that that's enough to take that whole group out of the picture; whereas, experts in diabetes and other disabilities say no, we can take 20 people with this medical condition. We can tell you which ones are safe, and which ones aren't. And I do think that the EEOC's voice as the medical review board continues would be very useful in helping them to understand that the rest of EEO law and the rest of employment law has really gone into the direction of individual assessment. And that was the role I would see the EEOC having.

COMMISSIONER GRIFFIN: Thank you, because as Commissioner Ishimaru just mentioned, we are concerned about this blanket type of exclusion, coming up with this one test for all exclusion based on risk, and a lot of times not very good science. And really getting away from an individualized assessment about, in the case of someone with epilepsy, when was the last time they had a seizure? What kind of seizure do they have? Does this really impact their life and their ability to do the job at all? And so, we are very concerned about that.

Unfortunately, in the public safety types of occupations, it's really - we can't litigate those cases. We can investigate them, conciliate them, but it's really the Department of Justice who has that responsibility, so I know that's something that the people from the Epilepsy Foundation are very concerned about. Gary Gross is here from that organization. He's their legal director, and I know that he's raised that issue with us on numerous occasions, so it is something that I think I can safely say the EEOC is concerned about and that we're looking at. And I'm really glad that you raised that.

Anyone else, Mr. Ashe or Mr. Alvarez, know of any other connection with employment testing and federal sector issues, at all?

MR. ALVAREZ: I don't, Commissioner.


MR. ASHE: You've got the intersection with pilots and regulations of both commercial pilots and private pilots who fly for corporations and stuff, and similar regulations there.


MR. ASHE: And I would think flexibility ought to be available when you've got three pilots in the cockpit, or even two pilots in the cockpit, which I would think might be less flexible if you've got one.


MR. ASHE: Speaking as a passenger.


COMMISSIONER GRIFFIN: Yes. Mr. Alvarez, you talk about that murky middle, and these facially valid piece-of-the-job tests, and are you arguing that validity studies shouldn't be required when other alternatives exist, such as when an employer can prove the factors that you list in your statement - the ones that come out of the Guardians case?

MR. ALVAREZ: I guess I'm arguing that there should be a validity process that isn't so --


MR. ALVAREZ: --so unreachable for so many employers.


MR. ALVAREZ: I think you should have validity in some fashion, but I just think there is a big void right now out there, that needs to be filled; otherwise, employers will just not do it, and that's not a good thing.

COMMISSIONER GRIFFIN: Yes. Okay. Thank you very much.

MR. ASHE: One thing employers can do to economize is what the electric utilities have done. You get trade organizations to do massive validation studies for sales clerks or cashiers, or what have you. And even though you've got a small operation, at very little cost, individual members can give those tests, which are very well validated. And EEI is currently 25 and 0 in defending its tests around the country.

COMMISSIONER GRIFFIN: Yes. I just want to add to that. But there is a concern, I mean, some of the trade organizations and the public safety, firemen, policemen-type jobs have come up with sort of this risk analysis when someone has epilepsy, that I'm not sure there's really good science behind that. And it truly takes away the individual assessment.

MR. ASHE: I was not saying that in response to that. I was responding to Fred's concern about the cost to an individual employer of a paper and pencil test. And when it's shared, it can be a lot less.


CHAIR EARP: Sounds good. Thank you very much to this panel. We'll invite the experts, the final panel of the day, experts on test design and validation. The Vice Chair reminded me that all of our speakers today are experts, and that's true, but you are the experts on the tests that they're either defending, or otherwise fighting, so we're very anxious to hear from you. Why don't we start with you, Dr. Outtz?

DR. OUTTZ: Thank you, Madam Chair, Commissioners. I would like to thank the Commission for providing me the opportunity to speak today on the important topic of employment testing. I'm going to discuss three topics. First, I'll address how employment tests and other selection procedures are used. Next, I'll examine methods for minimizing the likelihood of discrimination. And finally, I will explore emerging trends and challenges. I'll give special emphasis to the interrelationship between sound personnel practices and scientific research, particularly research in the field of industrial and organizational psychology.

The use of employment tests and other selection procedures - employment tests are quite prevalent in America. Employers, both public and private, utilize a variety of standardized selection devices and procedures to make staffing decisions. Employment selection instruments commonly used today include cognitive ability tests, physical ability tests, assessment centers, work samples, structured interviews, situational judgment tests, background checks, integrity tests, and other personality measures to mention just a few. These tests are used for such staffing decisions as hiring, promotion, job assignment, selection, and determining minimum qualifications, again, to mention just a few.

It should be noted, however, that the use of tests extends beyond employment. College admissions, professional licensure, and other high stakes selection decisions also rely heavily on tests of one sort or another.

I'd like to discuss for a few minutes minimizing the likelihood of discrimination. Before discussing specific ways to minimize the likelihood of employment discrimination, it may be helpful to describe the history of the controversy over employment testing and employment discrimination.

Title VII brought an exponential rise in governmental and legal involvement in personnel selection issues beginning in the 1970s. Personnel selection practices had been the purview of human resource professionals and professional psychologists. One result of this mushrooming government and legal involvement was a focus of attention on the nexus between employment discrimination case law and what was considered best practice in employment selection. These external influences on personnel selection created a number of concerns. One was that employers would abandon valid, objective, merit-based selection practices in favor of highly subjective, arbitrary procedures designed to comply with legal requirements.

Another concern was that scientific advances in selection would be overshadowed by programs and practices aimed at balancing selection outcomes on the basis of demographic characteristics, rather than increasing the accuracy of selection decisions.

Since the mid-70s, there's been an increase in the number of research studies investigating not only validity evidence associated with different selection devices, but also the degree to which those devices produce adverse impact and, thus, are possibly discriminatory.

The issue of validity combined with adverse impact has moved forward and still exists today. It's labeled alternatives, alternative selection devices, and we've heard some testimony about it here today, that if an employer has a selection device that the employer proposes is valid, the plaintiffs may come back and say well, that may be valid, but here's another procedure that's just as valid but has less adverse impact. That started really in the mid-70s.

Two important findings that emerged from this research conducted to look at both validity and adverse impact were as follows, very important findings. Some tests have less adverse impact than others; however, the research typically shows that the average score for minority applicants is almost always lower than that for non-minority applicants, regardless of the test. The average criterion or job performance score of minority group applicants, particularly African Americans, is typically lower than that of non-minority applicants, and I believe we heard some testimony to that effect from the previous panel.

Now some chose to interpret these findings as indicating that employment tests were valid and non-discriminatory; that is, the tests simply reflect real differences between groups in their ability to perform a job. That argument is still made today. Others have chosen to interpret the findings as indicating the possibility that both the tests and the measures of job performance are biased and unfair to minority group members. Thus began a debate, both legal and scientific, over what validity really means and what constitutes a fair test or selection procedure. That argument still exists today.

One aspect of this debate focused on the issue of test bias. I believe one of the panel members was asked earlier, Mr. Robinson was asked, when he took the test for the apprenticeship, could he look at the test and see something was wrong with it? Could he sort of see that the test was biased?

Some researchers, in terms of the bias argument, have interpreted the average score differences between minority and non-minority applicants on employment tests as an indication that the tests were simply culturally biased; that is, test content was geared toward information more prevalent among non-minority group members, giving those persons an unfair advantage. As a result, they argued, the test produced systematic errors in measurement or prediction. Issues such as test bias led to a broader discussion of fairness. The fairness debate was more than a scientific exercise. The EEOC guidelines stipulate that the fairness of a selection procedure should be investigated whenever possible. This is one of the vexing issues that comes up between defense attorneys and plaintiffs' attorneys about whether the guidelines ought to be changed. This is one of those issues - whether fairness ought to be investigated, and what is fairness?

Fairness is, in part, a social term that encompasses consideration of testing outcomes. Some researchers proposed a model of fairness predicated on the relationship between test scores and performance outcomes. According to this model, a selection procedure is fair only if the proportion of minority applicants selected on the basis of the procedure is equal to the proportion that would be selected on the basis of actual job performance. A rather common sense notion; that is, if you could select people with full knowledge of how they would perform, what would be the proportional difference in minorities and non-minorities hired? The test should reflect whatever that difference is; otherwise, the test is unfair in some way.

This sort of common sense approach has really been used by many, including courts, in my opinion, to decide Title VII cases. Well, does it make --this test is a proxy for your actually being on the job. Is it a decent proxy, and are the proportions that you find with this proxy the same as the proportions you would find if you would actually hire people and put them on the job? That's sort of the common sense. I'm not sure the courts were aware that they were using this model, but they were. They have been, rather.
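A toy computation can make this fairness model concrete: compare the minority share among those the test would select with the minority share among those actual performance would select. The data fields, names, and cutoff below are invented purely for illustration:

```python
# Hypothetical illustration of the outcome-based fairness model: on this
# model, a procedure is fair if the test selects minority applicants in the
# same proportion as selection on actual job performance would.
def top_k(people, key, k):
    """The k candidates ranked highest on the given score."""
    return sorted(people, key=lambda p: p[key], reverse=True)[:k]

def minority_share(selected):
    """Fraction of a selected group who are minority applicants."""
    return sum(1 for p in selected if p["minority"]) / len(selected)

def fairness_comparison(people, k):
    """Minority share if we select the top k by test score, versus the
    minority share if we could select the top k by actual performance."""
    by_test = minority_share(top_k(people, "test", k))
    by_performance = minority_share(top_k(people, "perf", k))
    return by_test, by_performance
```

On this model, a large gap between the two shares would suggest the test is a poor proxy for the job. In practice, of course, true performance is never observed for rejected applicants, which is one reason agreement on fairness models proved so elusive.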

There were other fairness models offered in the 70s. Agreement on a given model has proven elusive. Debates about fairness revealed a concern beyond the specific issues. It became clear that there was a sense within the scientific community that employment tests and other selection procedures were under attack by advocates of social change. Most within the scientific community believe that the scientific issues in selection should be kept separate from the social issues. I really don't agree with that. I think that the science and the social issues merge, and you have to use some common sense to deal with them.

The fairness issue has continued today. It's related to one of the trends that's happening today, which is a look at implicit bias, which is unconscious bias in selection. Can unfairness exist and the perpetrator be unaware that this unfairness exists? I'm going to move to a different area because of my time, to reducing adverse impact.

Researchers proposed back in the mid-80s that using different predictors in conjunction with cognitive ability tests - that is, using a bundle of predictors - might improve validity and reduce adverse impact, but they cautioned that a database didn't exist back then to determine that. That database exists today. The outcome of science since then has shown that you can combine predictors with cognitive ability tests and reduce adverse impact. So one way to reduce adverse impact and, by implication, reduce the probability of discrimination is to combine predictors or tests - use a variety of tests measuring different attributes, as opposed to a narrow test, like cognitive ability, that you put all your weight on, which has a high degree of adverse impact.

Another method is to alter the medium that you use to present the testing information. You can do it visually, you can do it through hearing, you can do it in any number of ways, using video-based presentation of information, having the candidate respond orally, as opposed to in writing. There are any number of combinations that you can use.

A third way to do it is to differentially weight the different components of the selection system. What weight are you going to give in college admissions to the SAT, versus the letter of recommendation, versus the history of the student, and so forth? The issue really isn't whether you ought to use the SAT. I mean, we can muster up enough data to show that the SAT has some validity. The question is, how much weight do you give to it, compared to the other ones? Therein lies probably the level of adverse impact.
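The differential-weighting point can be shown with a minimal sketch: the same component scores yield different rankings depending on the weights chosen, which is largely where the level of adverse impact of a composite gets determined. The components, applicants, and weights here are invented for illustration:

```python
# Illustrative composite scoring: the same applicants, ranked under two
# different weighting schemes (weights are assumptions, summing to 1.0).
def composite(scores, weights):
    """Weighted sum of component scores."""
    return sum(scores[component] * w for component, w in weights.items())

def rank(applicants, weights):
    """Applicant names ordered by composite score, highest first.
    applicants: dict mapping name -> dict of component scores."""
    return sorted(applicants,
                  key=lambda name: composite(applicants[name], weights),
                  reverse=True)
```

Shifting weight from a high-impact component (say, a cognitive test) toward components with less impact (say, a work sample) changes who rises to the top, without discarding any predictor outright.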

So the three things I've mentioned are combining predictors, the medium that you use to present the test, and differential weighting. As for emerging trends, there is emerging a consistency among stakeholders as to what adverse impact is, and how it ought to be addressed. The scientific community is catching up, believe it or not, with the EEOC guidelines when it comes to the notion of well, adverse impact - shouldn't we really study it as a scientific endeavor?

There was a time, years ago, when it wasn't considered a scientific endeavor at all to study adverse impact. It was just a fact; it represented true differences between subgroups, and didn't need much study. That's changed a lot.

Also, an emerging trend is that the definition of performance is being refined. If you're going to try to predict performance, what do you mean by performance? Do you mean individual performance, or do you mean organizational effectiveness? The argument that universities use is that we are not trying to select individual students; we are trying to construct a class of students with certain characteristics to attend our university. That's a different criterion, and it would require different predictors.

Future challenges - there's a critical need to be more accurate in describing what you mean by a test. We talk about alternatives, we've heard testimony about alternatives, we've heard testimony that certain alternatives are better than others, and we've heard terms like work samples, assessment center, situational judgment test. What do those mean? The category of work samples alone covers a multitude of tests, so we need a better definition of what we mean by a test before we can even engage in whether we have an alternative.

Also, in most selection situations, there isn't one particular group that's affected. There may be three or four by race or ethnicity. The employer has to minimize adverse impact for all of them under the Uniform Guidelines. That's a difficult thing to do, and that challenge still exists for employers today. And I think it's a very difficult thing to do. Thank you for your time.
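
The adverse impact standard under the Uniform Guidelines is commonly operationalized through the four-fifths (80 percent) rule: a group whose selection rate falls below four-fifths of the highest group's rate is generally regarded as showing evidence of adverse impact. A minimal sketch of that check across several groups, using entirely hypothetical applicant and hire counts, illustrates why satisfying every group at once is difficult:

```python
# Illustrative four-fifths (80%) rule check under the Uniform Guidelines.
# All applicant and hire counts below are hypothetical.

applicants = {"White": 200, "Black": 120, "Hispanic": 100, "Asian": 60}
hired      = {"White": 80,  "Black": 30,  "Hispanic": 32,  "Asian": 25}

# Selection rate for each group, compared against the highest group's rate.
rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f} ({flag})")
```

With these hypothetical counts, more than one group falls below the 0.8 threshold at once, which is exactly the multi-group challenge described above.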

CHAIR EARP: Thank you. Dr. Lundquist?

DR. LUNDQUIST: Good morning, Madam Chair, Madam Vice Chair, Commissioners, ladies and gentlemen. Thank you for the opportunity to share some of my thoughts with you about the role employment testing can play in creating a fair and effective workforce. As the last of your speakers today, I will try to be brief.

I have had the opportunity to work for employers in creating tests, for plaintiffs in evaluating employers' tests, and in settlements where everyone came together and said let's fix this and go forward. That's been my role in Ford, in Coca-Cola, in Abercrombie & Fitch, and in other public sector kinds of testing situations, so I've been on the front line of now what do we do? And hopefully, some of what we see might be helpful to the Commission as you move forward.

Professionally developed selection procedures can serve a legitimate business purpose. They allow an employer to base hiring and promotional decisions on solid job-related information. Particularly to the extent that you've heard today, that the test looks and feels and asks people to perform in the way they do on the job, those decisions will be better predictive of how the person will ultimately perform the job.

What matters is the evidence that a selection procedure measures behavior consistently; that is, is it reliable, is it going to tell me the same thing about the person today and tomorrow, and two months from now? And that it is an accurate measure of job performance; that is, is it valid? That is the basis on which we make a determination about whether the procedure is job-related.

Job-related procedures ensure that employees possess the necessary skills to perform the job, and those procedures are used to predict who will be successful on the job.

In short, good selection procedures are fair to candidates. They're standardized, and they're objective in their administration and scoring, and they're useful to organizations; that is, they can improve overall productivity.

In terms of the standardization and objectivity, I believe that that's an area that the Commission's efforts could be directed to, particularly in terms of how a well-validated test is used on an ongoing basis, where cutoff scores are put on tests, for instance, as well as the extent to which the test information has become obsolete, or no longer predictive of performance on the job.

One of the things I would like to focus on is the challenge of ensuring that the procedure itself fairly represents what the individual is able to do on the job. Particularly with the advent of computer technology, it's been possible to simulate more of the job, and to give a person an experience that is very much like the job in terms of assessing their skills, so it's no longer necessary, for instance, to use paper and pencil multiple choice tests in order to measure a person's potential performance on the job.

Research in industrial psychology has shown that high fidelity selection procedures, such as work samples, video simulations, and assessment center exercises, can enhance the candidate's acceptance of the test and often reduce adverse impact. In fact, the research is showing that the extent to which a candidate believes that this is a good measure of what they're ultimately going to be doing on the job will increase their motivation to perform well on the test and is likely to reduce the amount of adverse impact on the test. So there's some evidence that it's not simply that the test appears to be like the job; it actually changes the performance of the person going through the test, to the extent that they buy into the fact that this is a reasonable thing to ask about for the job they're applying for.

In an attempt to search for less adverse alternatives, in our recent work on the Ford apprentice testing program, we reviewed the research literature on alternative testing measures that demonstrate good validity with less adverse impact. Test administration format and test medium are important factors in this equation, as Jim has mentioned.

Research has shown that video-based testing has comparable validity to paper-based tests and lower adverse impact. That adverse impact was found to be reduced by enhancing applicants' job-relatedness perceptions, by positively impacting test taker motivation, and by reducing the reading comprehension demands of the test. And we've provided for you some comparisons of paper and pencil, computer-based, and video-based testing.

In addition, research has shown that for some jobs, measuring more than just cognitive ability can result in better prediction of overall job success and lower adverse impact. Considering both the cognitive and the non-cognitive aspects of the job, for instance, conscientiousness, as was mentioned earlier, or customer focus, may give a more complete picture of the candidate's qualifications. Oftentimes, it's not just what you know; it's how you show it. And when employers start to measure that full picture, they're better able to predict ultimate success on the job.

Work sample or situational judgment tests have also been shown to be promising ways to maintain validity and decrease adverse impact. These assessments are designed to mirror or simulate the actual tasks performed on the job, for instance, through a manager in-basket exercise or a video simulation of a production line. That is, in fact - the video simulation of a production line is, in fact, the test that we are trying out at Ford for the apprentice program, where we are teaching - we're animating a production line and teaching concepts about this fictional production line, and then asking candidates to answer questions about what they've just been taught.

Such tests measure the ability to identify and understand job-related issues or problems, and to select the proper course of action to resolve the problem. Their good validity stems from having the candidate actually perform part of the job, and their reduced adverse impact appears to result from candidate acceptance and motivation.

Now, on the full range of different item types, simulations, trainability tests, and measures of interpersonal characteristics, we've provided sample test items to Carol for the Commission's use, so that you might get a chance to experience what they are, including some of the video clips that we've developed for measurement in simulations.

In our work on the Ford testing program, we've combined a cognitive test, a non-cognitive assessment, and a video-based simulation to measure candidates' qualifications for the apprenticeships. And we expect that validation will occur, as Jeff mentioned, later this summer. It's important both to have the full picture in terms of predicting the job, and to have sufficient information to provide feedback to candidates, like Mr. Robinson, about where their areas for improvement might be, and where they might best spend their efforts before they come and take the test again.

Of the two areas where I'm most concerned that the Commission focus its emphasis, beyond what's already been presented by earlier speakers, the first is really this operational validity concept. You can develop and validate a test, and then put it into use in the field, and somehow the way it's being administered is not consistent, it's not being scored in a consistent fashion, or over time its validity is eroding. And, as we found from looking at most human resources processes, if you are not engaged in ongoing monitoring and checking of the procedures, simply having done a good job at the beginning is not going to be sufficient to make sure that the employer is getting valid evidence out of the test, or that the candidates are getting a fair experience. So I would encourage you to think about some procedures for reviewing how employers are using tests on an operational basis, along with initial validity.

In addition, I think it's important to recognize good testing programs. There are a number of good testing programs, even in the murky middle that we talked about before, and it's important for employers, I believe, to feel that it is possible to measure the qualifications for a job in an objective way, in a way that candidates will buy-in, and in a way that they can move forward and not be sued, or at least prevail if they are. Thank you very much.

CHAIR EARP: Thank you. I think we'll do one brief round of questions, and then closing statements. Vice Chair?

VICE CHAIR SILVERMAN: I want to thank this panel of experts, as well as the last one. I just jumped right into the questions, I'm sorry.

Just to walk us through a little bit of what you do, practically, so we can understand it: when you're hired as an expert and you do find that there's an adverse impact, how do you actually go about determining whether there are less discriminatory alternatives?

DR. LUNDQUIST: Well, usually, the first place you start is by looking at the original validation study to see if the person who was developing the original validation study considered alternatives at that point in time.

There may also be ways to look at the existing information about how the test is used. For instance, what passing score is used on the test? If it's set higher than the job requires, lowering it might reduce the amount of adverse impact. Different weighting, as Jim has talked about, may change it as well. So there are ways to reduce adverse impact both by looking at a completely different test or series of tests, and by looking at how you manipulate the existing test in terms of its weights or passing scores.

DR. OUTTZ: Typically, I would start with a job analysis. Hopefully, there is an existing job analysis; that is, those who developed the test did a proper job analysis. If that is the case, then one can start from there to determine what they were trying to measure with this test. You know what the job is about because you've done a sufficient job analysis. That job analysis should tell you what you ought to be measuring, and whether there is a logical trail from the job analysis to the particular instrument they're using and what they're trying to measure. Once you establish what they're trying to measure, and here's, I think, the important point, there is a lot of research today on how you can measure things in different ways and get the same outcome, measuring accurately each way. There's published research in refereed journals in IO psychology about combinations of different predictors, and so you can estimate from that the likelihood that you might have an alternative. You also have the data from the actual test. You can actually run studies using different combinations, weighted different ways, to see if the validity is the same but there's less adverse impact.
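
The kind of study Dr. Outtz describes, re-running the applicant data under different predictor weightings to see whether validity holds while adverse impact shrinks, might be sketched roughly as follows. The scores, weights, and group labels here are entirely hypothetical; a real study would use actual applicant data and proper significance testing:

```python
# Hypothetical sketch: compare two weightings of a cognitive and a
# non-cognitive predictor on the same applicant data, checking whether
# validity (correlation with job performance) stays similar while
# adverse impact (minority/majority pass-rate ratio) improves.

from statistics import mean, pstdev

# (cognitive score, non-cognitive score, job performance, group) - all invented
applicants = [
    (90, 60, 75, "A"), (85, 70, 78, "A"), (80, 55, 65, "A"),
    (75, 80, 80, "A"), (70, 65, 70, "A"), (65, 85, 76, "B"),
    (60, 75, 68, "B"), (72, 90, 82, "B"), (55, 70, 60, "B"),
    (68, 88, 79, "B"),
]

def correlation(xs, ys):
    # Pearson correlation using population statistics.
    mx, my = mean(xs), mean(ys)
    cov = mean((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (pstdev(xs) * pstdev(ys))

def evaluate(w_cog, w_noncog, cut_fraction=0.5):
    composites = [w_cog * c + w_noncog * n for c, n, _, _ in applicants]
    validity = correlation(composites, [p for _, _, p, _ in applicants])
    # Pass roughly the top half of applicants by composite score.
    cut = sorted(composites, reverse=True)[int(len(composites) * cut_fraction) - 1]
    passed = [g for comp, (_, _, _, g) in zip(composites, applicants) if comp >= cut]
    rate_a = passed.count("A") / 5   # five applicants per group in this toy data
    rate_b = passed.count("B") / 5
    impact_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return validity, impact_ratio

for w_cog, w_noncog in [(0.8, 0.2), (0.5, 0.5)]:
    v, ir = evaluate(w_cog, w_noncog)
    print(f"weights ({w_cog}, {w_noncog}): validity r={v:.2f}, impact ratio={ir:.2f}")
```

In this toy data, shifting weight toward the non-cognitive predictor raises the impact ratio without sacrificing validity, which is the pattern such a study looks for.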

VICE CHAIR SILVERMAN: So are there times when you look at the test and say there's nothing else out there that has less adverse impact, after you go through all of this? It seems like you'd always get to something, or am I wrong, that has less adverse impact? I mean, isn't there always something out there that seems like it would --

DR. OUTTZ: I think the standard for demonstrating that it has less adverse impact, and equivalent validity, or similar validity is a rigorous one, so that's not easy to show.


DR. OUTTZ: So I don't think you would find, yes, there's about 15 tests out here that you could have used that have less - you have to demonstrate that, you can't simply assert that.


CHAIR EARP: But you might find less adverse impact on a particular group. You mentioned that the employer would have to satisfy or minimize the impact on every group that's in the workplace, age, ethnicity, race, gender.

DR. OUTTZ: Correct. The employer faces that burden; although, depending upon the dimension being measured, there does seem to be a progression of adverse impact by race and by ethnicity. For example, we will find that cognitive ability tests tend to have the greatest adverse impact against African Americans, and less, though not far less, adverse impact based on ethnicity and other characteristics, and certainly not gender.

DR. LUNDQUIST: But to the extent that you're looking at the total picture of the person to match it to the total picture of the job, you may be in a better position, in that such an alternative is not readily available. In other words, you've already incorporated the alternatives, and oftentimes the less adverse alternatives are non-cognitive measures. So to the extent that you're looking at the total picture, it is less possible to find an alternative with substantially the same validity but less adverse impact.

DR. OUTTZ: And I might add that cost and other factors come in, also. Okay, you propose an alternative, and your alternative costs, say, $1 billion. Well, the organization says I don't have $1 billion. So there are other factors that come into play here, and it isn't easy, really, to show that there is an alternative.

VICE CHAIR SILVERMAN: So that's focusing on your job, and our job is to try to make this a little bit clearer and easier. I know, Dr. Lundquist, you've already provided us some guidance in this area on what it is that we could do. I'm wondering, Dr. Outtz, if you have any thoughts?

DR. OUTTZ: I think training and awareness is the biggest issue. The issue of testing, validity, fairness, alternatives, is something that to really discuss properly would take you three days. If you put 50 IO psychologists in a room, they would take eight days.


DR. OUTTZ: So it's awareness, first, training, and developing the expertise to address these issues with employers.

VICE CHAIR SILVERMAN: I just have one further question. We talked earlier about testing for conscientiousness, and I'm just dying to know what that is.

DR. LUNDQUIST: Well, that's in those sample items I gave Carol. Conscientiousness is part of a bundle of research that's been done on personality factors that make a difference in the workplace. So, typically, conscientiousness has been shown to be the most predictive of the big five personality factors that could be measured in the workplace, or generally are seen across studies.

It's oftentimes measured by self-report, so you may ask a person questions about their background or experience that would indicate in their past work what kinds of behaviors they engaged in in the workplace, or how they were perceived by others in the workplace, which tends to predict conscientiousness. Sometimes, there are personality measures that are less self-report that can be used, as well.

CHAIR EARP: And is there a correlation to cultural concerns or ethnicity when you're looking at a test like conscientiousness because not all groups have that kind of --my son, what he considers to be his conscientiousness is far down on my predictor of what it should be.


DR. LUNDQUIST: Yes. We haven't been looking so much for the age-related differences. In terms of ethnic differences, we find far fewer differences among groups, but that really is restricted to those groups who are typically acculturated within the United States and are here. When we've done similar tests for employers where the tests are used globally, the norms are quite different for things like the non-cognitive side of performance, than they are for what we see in people typically acculturated in our culture here.

CHAIR EARP: Interesting. Leslie, are you done?

VICE CHAIR SILVERMAN: I'm done. Thank you.

CHAIR EARP: Commissioner.

COMMISSIONER ISHIMARU: Fascinating, Dr. Outtz, you had talked about a number of alternatives. Would these alternatives have as much or better validity as a cognitive-only paper and pencil test?

DR. OUTTZ: The research shows that they would have at least equal validity. Sometimes, they might add incrementally to the validity of a cognitive ability test; seldom would they exceed a cognitive ability test that's used by itself. So the combination of the cognitive ability test with these other measures is typically equal in validity to the cognitive ability test alone, and sometimes slightly higher. So they certainly meet the criterion of offering similar validity and less adverse impact.

COMMISSIONER ISHIMARU: And, Dr. Lundquist, you talked about these job-like tests that are out there. I would assume that there are studies, again, that show that these are equally valid or more valid.

DR. LUNDQUIST: Well, they have been shown to be, in some cases, more valid, and certainly, often equally valid, as well, so there really is a great deal of promise. But the promise largely comes from the availability of the technology to do these kinds of things.

COMMISSIONER ISHIMARU: I see. And that's something that's really recently come about.

DR. LUNDQUIST: Within the past 10, 15 years.

COMMISSIONER ISHIMARU: One thing that's always of interest to me is that quite often adverse impact in these examinations is exacerbated by how the employer uses a test, where they set the cut score, doing it in rank order, this is the end-all and be-all. How do you advise employers where to set the cut score on tests, or whether they should be using rank order, or whether they should be using banding? I throw that out there to both of you.

DR. OUTTZ: Having written on banding quite a bit, I typically suggest that an employer not use strict rank ordering unless there is validity evidence to substantiate it. The level of validity associated with the method of use should dictate whether you should use the test in a particular way.

I have difficulty finding situations in which one can justify strict rank ordering for any selection device, really. It's like saying that, back in the day when the maximum SAT score was 1600 points, a person with 1590 is far more qualified than the person with 1585. I don't think so, and most universities don't think so. There is a difference, but it's not a meaningful difference, and that exists with any measure. Any measure has some variability in it that's due to random chance, and therein lies my difficulty with strict rank ordering, so I would typically recommend some more flexible use of a test than strict rank order.

DR. LUNDQUIST: From a pragmatic standpoint in advising employers, we usually look at a couple of things. We look at the job analysis for the information about really what level of that particular characteristic or skill is required on the job. We will look at how currently performing individuals on the job do on the test. So, for instance, if you give the test to the current workforce, and they don't do very well on the test, perhaps that standard is being set inappropriately. We'll look at what the consequences of error are for people who don't perform at that level. And, frankly, we'll look at the relative amount of adverse impact.

At the end of the day, I agree with Dr. Outtz, that the measurement characteristic of the instrument itself, or of the battery of tests that you're using, has to be considered. So if it's only good within plus or minus 10 points to differentiate, then you need to be taking that kind of information into consideration when you're making decisions and using the test, so you need to set up bands in which you'll look at individuals, for instance, or look at that in terms of where you're going to set your cut score.
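
The plus-or-minus band both witnesses describe is commonly derived from the test's standard error of measurement, which follows from its reliability: scores whose difference falls within the measurement error are treated as statistically indistinguishable rather than strictly rank-ordered. A minimal sketch, with hypothetical reliability and score figures:

```python
# Hypothetical sketch of a score band built from the standard error of
# measurement (SEM). Candidates within roughly two standard errors of
# difference (SED) of the top score are treated as not meaningfully
# different from the top scorer.

import math

test_sd = 10.0        # standard deviation of test scores (hypothetical)
reliability = 0.90    # reliability coefficient of the test (hypothetical)

sem = test_sd * math.sqrt(1 - reliability)   # error in a single score
sed = sem * math.sqrt(2)                     # error in a difference of two scores
band_width = 1.96 * sed                      # ~95% confidence band

scores = {"Avery": 92, "Blake": 90, "Casey": 88, "Drew": 81}
top = max(scores.values())

band = [name for name, s in scores.items() if top - s <= band_width]
print(f"SEM = {sem:.2f}, band width = {band_width:.2f} points")
print("Statistically indistinguishable from the top score:", band)
```

With these figures the band is nearly nine points wide, so a two-point gap between the top candidates carries no meaningful distinction, which is the argument against strict rank ordering.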

COMMISSIONER ISHIMARU: Thank you to both of you. Thank you, Madam Chair.

CHAIR EARP: Thank you. Commissioner Griffin?

COMMISSIONER GRIFFIN: I want to ask a little bit about the video-based tests, which seem to, according to both of you, have less of an adverse impact. I know you gave Carol some samples, but I'm impatient, so can you explain what they consist of? Is this like simulation-type, computer simulation of a job or something like that?

DR. LUNDQUIST: Yes, it can vary. The situation --the test that we've provided for you to look at is really a situation where you're using the test to select whether or not somebody should be promoted into management. And you provide the person with some almost day-in-the-life kind of information about this job into which they're being placed, and then you might show them a video clip of an employee coming in and bringing a problem to them with whatever information is available to them, and then you ask them how they would handle that particular situation.

COMMISSIONER GRIFFIN: And then you do that by paper and pencil. Is that correct?

DR. LUNDQUIST: Yes, they can answer the question at that point by paper or pencil, or touch pad, or whatever. Since the advent of doing computer administered testing more inexpensively and more accessibly, you can actually create something that almost looks like the in-box of the person so that you could listen to phone calls, or you might say gee, I'd like to know what that person's performance evaluation was, so you could click on the performance evaluation before you answered the question and looked at it, or look at what questions --what kind of message slips you might have. Those kinds of things just give it more of the look and feel and more of the way a person would process information in the job.

COMMISSIONER GRIFFIN: Do you know --have you run across any situations where an employer would have to accommodate someone, let's say with a visual impairment, in taking a test like that?

DR. LUNDQUIST: Yes, because most of the employers that we deal with use these tests on a large scale, which makes that difficult to do on a video-based test, obviously. But it's been our experience that accommodation has to be individually dealt with, particularly in terms of how the person might be accommodated on the job, and then looking at how you'd accommodate them similarly on the test, if that's possible.

DR. OUTTZ: Video-based testing simply allows the candidate to use a broader spectrum of abilities, a spectrum similar to the spectrum that would be used on the job. As a manager, if you watch a video of employees interacting a certain way, you see non-verbal behavior, you see verbal behavior, and so forth. You then are asked how would you address that if you were a manager. That has a lot more fidelity than writing that out on a sheet of paper, saying here's a written version of that situation; now tell us what you would do.

Video-based simulation, for me, is simply in the genre of changing the medium. And I've done that all the way from actually creating, for example, a mini-training situation for candidates that's exactly like the training situation they're going to be put in later on, to using some proxy for that, like a video, or, in the case of firefighting, showing them an actual picture of a building with flames and having them react to that, as opposed to a written scenario saying here is a building that is on fire, and the fire is on such-and-such a floor. So it allows them to use different abilities, a package of abilities similar to the ones that they would use on the job, and therefore typically would have less adverse impact.

I should caution, however, that none of these constitutes a panacea. They all have their pros and cons. Sometimes they work, and sometimes they don't.

COMMISSIONER GRIFFIN: Why does it eliminate adverse impact?

DR. OUTTZ: Because, I'll take the simple example of, say, a manager in the fire service. In the fire service, when you come upon a scene and there's a fire, you have to use your technical knowledge and expertise that you've gathered through experience and through training to know how to fight that fire. The information comes to you through different stimuli. You see the flame, you see the location of the flame, you see the intensity of the flame, you see the direction of the flame, you see the direction of the wind, these are all things you see, and it only takes you a split second to do that.

If I give you a written version of that, I'm simply asking you to interpret words. Reading comprehension is injected into the measurement, so much so that some would argue the measurement becomes irrelevant; it's really a reading comprehension paragraph. A person who actually knows how to put out a fire might perform far less well on the paragraph than when they see that flame and know exactly what to do. And the ability to handle the actual situation doesn't vest itself in any particular racial group. That's why you see people out on the job who can do the job, but when you give them a paper and pencil test, one group, by race or whatever, scores below the other.

DR. LUNDQUIST: It also tends to reduce test anxiety. People are more comfortable looking at a movie of the situation they'd be in than when they're faced with a multiple choice question they have to answer. There's a whole variety of factors, and I don't think we know exactly which ones and which buttons to push. There's still a lot of research going on about it, but it does seem to have that effect.

CHAIR EARP: It might also, just to comment and follow up on your point, remove some situations where stereotypes play in very, very heavily. For example, in corporate situations we remain concerned about Asians, let's say, as a group, who are not under-represented at entry level; but as they move up in the organization, there are fewer and fewer, because of the stereotype that they don't make good managers, are not good leaders, are too quiet, too inscrutable, whatever.

It occurs to me that the in-box situational scenario that you lay out lets that person be and react in an environment that they know, without the burden of someone else's stereotype.


DR. OUTTZ: I would agree. I would also offer the caution, though, that, as I said in my remarks, the research has shown that no matter what you use, whether it's a work sample, an in-basket, or a video-based test, you will typically find some difference, usually to the disadvantage of minorities. We don't know why that is. I wish I knew, and we're studying that, but that is the case. You can reduce adverse impact substantially, which is what the guidelines require. Thinking that you're going to eliminate adverse impact totally, across all of the groups that are represented, is probably unrealistic at this point, in my opinion.

COMMISSIONER GRIFFIN: The only problem I see, aside from accommodating someone who is visually impaired in taking the test, is that it could perpetuate the myth that you have to see. In some cases, you do have to see, but in managerial positions, a person who's blind or visually impaired doesn't always have to see that body language to really be a good manager and know what's going on.

DR. OUTTZ: Absolutely. I think that it's an individual issue, and it has to be addressed very, very carefully.


DR. OUTTZ: As to what accommodations are necessary, and you should make those accommodations.

DR. LUNDQUIST: And consistent with what you're planning to do to accommodate the person on the job, if they're successful.


CHAIR EARP: The fact that we still have an audience at this hour demonstrates how important and complex the issue is, so I would ask that the Commissioners do closing statements, and then we'll end.

VICE CHAIR SILVERMAN: As the Chair just indicated, this isn't the most flashy or headline grabbing topic that the Commission has taken up, but I think that everyone in this room can agree that it is still incredibly important because it is vital to what we do. We know there's been an increase in the use of tests and other screening devices by employers. The internet makes it easier, and often from an employer's perspective, the internet makes it necessary.

Some of the witnesses today have talked about the many benefits of tests and other objective screening devices, such as identifying strong candidates who will likely be solid performers and decreasing reliance on stereotypes or unconscious bias in the selection process. On the other hand, they're clearly not a panacea. We still need to ensure that, in terms of discrimination, they are not doing more harm than good.

I'm so pleased that the Commission is taking this up right now, because this area is critical to several Commission priorities, including the Chair's E-RACE Initiative, which is aimed at combating race discrimination, and my Systemic Initiative, because after all, all of these cases raise class issues. And when the Commission approved the Systemic Initiative last year, we talked about how we have a need to be more proactive on this issue, and this really presents an area where we certainly are trying, and we can do more.

Also, as part of the Systemic Initiative, we talked about the expanded use of technology, the need for industrial psychologists and other experts, and I'm happy to report that since the Commission approved the recommendations last year, much progress has been made in that area, so we are moving that way.

Of course, the issues that we're grappling with today have an even broader impact than just those issues, because they're ultimately critical to this agency's effort to combat discrimination in hiring and in promotion. And particularly in the hiring area, that presents just a special issue for us. It's very difficult to get to, because people often don't know why they have been discriminated against, and so it's incredibly important that we are spotlighting this issue today.

I really appreciate the testimony from all the panelists. I want to especially thank, once again, Ms. Liles and Mr. Robinson for coming forward and talking to us today. And many thanks to Jean Kamp and Jeff Stern, and all of your colleagues back in Ohio and Milwaukee in the enforcement, as well as the litigation side, who helped develop these cases, and bring them to such successful conciliations and resolutions. And to all of our panelists, all the experts that we brought forward, for bringing your insights and recommendations.

I think, at bottom, I agree in many ways with Mr. Alvarez's sentiment that employers are between a rock and a hard place on the use of tests and other screening devices. But one thing is clear: any guidance that we could provide would be extremely beneficial to employers, as well as courts and the public in general, so I really look forward to working with my colleagues and our Office of Legal Counsel as we move forward on these issues. Thank you.

CHAIR EARP: Commissioner.

COMMISSIONER ISHIMARU: Thank you, Madam Chair. I want to thank you for holding this meeting on this very important issue as part of our E-RACE initiative, and I want to really thank the people at the Office of Legal Counsel.

When I looked at the line-up for today, I thought, how is this going to make sense? It struck me from reading the line-up that this was a huge hodge-podge of very interesting things that would not pull together, but I think it did. I wish we had more time, and perhaps at some point we will, but my congratulations on an excellent hearing.

I think it's clear that the EEOC, employers, and stakeholders should be focused on less discriminatory alternatives, as we did in our case in Ford. From an employer standpoint, these alternatives get you good workers and reduce adverse impact. From the enforcement standpoint, less discriminatory alternatives win cases.

Cognitive tests have their uses, but they also have a lot of adverse impact. They're sort of like the SUVs of employment selection. They'll get you there, but the cost to the environment is great. Employers should be moving towards using the less discriminatory alternatives that were highlighted in Ford, and less discriminatory alternatives should be part of our regular training for investigators and attorneys.

Taking my SUV analogy a little bit farther, these are sort of like the new hybrid cars, both get you to where you want to go, but hybrids, like less discriminatory alternatives, have much less adverse impact on those around you. In fact, these less discriminatory alternatives may be even better at getting what you want, identifying even better, more qualified workers.

As employers compete for better and more qualified workers, I think that they would do well to consider Mr. Klein's testimony regarding the lack of validation evidence for credit checks and how criminal background information should be used. And I hope, Madam Chair, that we will issue, in the not too distant future, some sort of guidance document on credit checks, particularly keeping in mind the analysis set forth by Mr. Klein.

I believe, though, that it was unfortunate that we did not hear today from a speaker on the topic of selection devices and criteria that especially affect members of different national origin groups, such as English fluency tests and English-only policies. Given the current anti-immigration sentiments that we are seeing and the debate on the immigration bill in the Senate as we speak today, I think this topic merits our attention.

The Commission has cited in our most recent strategic plan and in our budget request statistics from a Gallup poll that the EEOC co-sponsored, finding that Asians and Hispanics report experiencing significant amounts of discrimination, much more than the number of charges filed with the EEOC would suggest. If the Commission is serious about addressing that gap, we need to ensure that we are inclusive and diverse in the types of issues that we cover in meetings such as these.

Some of our field offices are doing excellent work in the area of English fluency examinations. For example, last year our Phoenix office settled a case against a Utah candy maker that had instituted an English proficiency examination. We received charges from workers who had worked for the employer for six years without any performance problems, or any concerns being raised about their ability to speak English. These workers held manual labor jobs that involved the scraping of candy out of bowls and into machines. Once they failed the English test, they were fired. Both Asian and Hispanic workers were fired because of these test results.

Through conciliation, our Phoenix office got the employer to drop the test, pay money to the victims, and to establish a scholarship fund for its employees. To the employer's credit, the case was settled early on.

Last year, our appellate section filed a very successful amicus brief in the case of Maldonado v. City of Altus, where we argued, pursuant to our regulations on this issue, that English-only rules have an adverse impact on language minorities and must be shown to be job-related and consistent with business necessity. The Tenth Circuit agreed with our arguments and reversed summary judgment for the employer.

Recently, our New York office filed a case against the Salvation Army in Massachusetts for its English-only policy that resulted in the firing of two Hispanic workers. These women had worked at the location for five years sorting donations without any complaints about their conversing in Spanish.

Certainly, there are times in the workplace when speaking, understanding, or reading English is a requirement, and employers should be allowed to test for it. All workers understand that to get ahead in this country, a worker needs to speak English. The reality is that there are long wait lists for adult English classes, and taking classes requires time, money, trained teachers, transportation, and, many times, childcare. Penalizing individuals who do not speak English when the job really doesn't demand it is exactly what the Supreme Court prohibited in Griggs: a built-in headwind for minority groups that is unrelated to measuring job capability.

The EEOC's longstanding policy on the issue of English-only requirements and English fluency examinations finds the right balance, and I'm proud of the work that our field offices and General Counsel's office are doing on these issues.

So with that, let me thank the panelists for coming here today. And thank you, Madam Chair, for holding the hearing. I thought it was excellent.

CHAIR EARP: Commissioner Griffin.

COMMISSIONER GRIFFIN: I, too, want to thank you for holding this Commission meeting, and thank our Office of Legal Counsel. Anyone who has put together a meeting of this caliber knows that it's not easy, that it's a lot of hard work, so I commend you for putting this meeting together. And thanks to today's panelists for taking the time; you came here at your own time and expense to help educate us about an important issue for employees and employers alike. And as dense as this topic can be, I really have to say it was very interesting. It really was, so thank you.

We're seeing an increase in employment testing, and someone suggested that this is so difficult for employers that maybe they're not using it when they should be, but the fact remains, we're seeing an increase in its use. More and more companies accept applications or conduct their hiring process on-line. One study that was pointed out to me showed that 50 percent of the surveyed companies that used an on-line recruitment process also administered one or more tests on-line. So when employers are receiving thousands of applications on-line, they are turning to tests to whittle down those applications.

The use of employment testing and screening raises several concerns for me. First, how are on-line processes impacting all potential applicants, including individuals with disabilities? For example, older applicants may not be computer literate; and, therefore, won't be able to apply, or won't apply on-line. We somehow assume that everyone has access to the internet, yet we also know this is an added expense that some people can't afford. Are on-line applications and tests available to applicants who speak another language? For those companies who accept and even require job applications be made on-line, how accessible are their web sites, especially for people who are visually impaired?

For example, there have been numerous lawsuits against retail establishments and banking institutions because their web sites were not accessible to people who are blind and use screen-reader software to access those web sites. Are applicants for employment facing those same barriers, where the employer uses an on-line application process?

Second, I'm especially concerned about the increasing use of employment tests which measure personality traits. It was mentioned, but we didn't talk about it very much, and I wish we had. These types of tests can provide a mechanism by which employers screen out individuals with psychiatric disabilities. I read that two years ago, approximately 30 percent of all companies used personality-type tests in some phase of either hiring or advancement, and I expect that percentage has increased since then.

We know that the ADA requires that employment tests be job-related in order to be non-discriminatory. Such tests can run afoul of the ADA in a number of ways, and Rich Tonowski actually talked about that when he testified earlier about how tests like this can constitute a medical exam, the use of which is limited to very specific situations.

We also know that some occupations have blanket exclusions of people with certain disabilities, for instance epilepsy or diabetes, which Shereen Arent talked about. Such conditions may be discovered during a post-offer medical exam without any individualized assessment of how the disability may or may not affect the person's ability to do the job.

Moreover, not making reasonable accommodations available, including during the administration of the test may also result in a violation of the ADA. For example, time limits may be an important factor in determining test scores. I think you probably see that a lot. However, extra time may be a necessary accommodation for some individuals with disabilities to take the test. An Office of Personnel Management study found that accommodations to time limits for some test-takers who are visually impaired required double the original time limit.

I'm equally concerned about employers that may be using genetic testing of any type. While these tests can provide pre-symptomatic medical information with the promise of early detection of certain illnesses, they can also provide the potential for abuse in the employment setting. Are employers beginning to base personnel decisions on information drawn from such tests? If this happens, will it result in individuals who may benefit from early detection becoming reluctant to actually take those tests for fear that an employer or their insurance carrier may use such information against them in the future?

And like Commissioner Ishimaru, I, too, am concerned about limited English proficiency as a barrier to employment and advancement, when it's truly not related to job performance. While we typically hear about how this affects people who are Asian or Hispanic, fewer people are aware how this impacts individuals who are deaf, who communicate using American Sign Language. All too often, people who are deaf are told they cannot be seriously considered for a job because of their limited English proficiency, despite their being able to communicate effectively.

I hope that we'll be able to address this issue at a meeting in the near future. And I'm hopeful that today's Commission meeting will help us ensure that the use of employment testing and screening will not be used to discriminate against minorities, women, older people, and individuals with disabilities. Thank you.

CHAIR EARP: Thank you. Well, let me add my thanks, also, to all of our experts today and to the staff. I think former Commissioner Alvarez has added to the EEO lexicon with the murky middle, and it's our responsibility now to take the advice, the insights, the expertise that you have shared with us so generously and try to bring some clarity to the murkiness.

That being said, and there being no further business, do I hear a motion to adjourn the meeting?


CHAIR EARP: Is there a second?


CHAIR EARP: All in favor?

(Chorus of ayes.)

CHAIR EARP: The ayes have it, and the meeting’s adjourned. Thank you.

(Whereupon, the proceedings went off the record at 12:57 p.m.)

This page was last modified on June 15, 2007.
