Education Chat

Chat Transcript: Computer-Based Testing, the Focus of our Latest Report

Education Week editor Kevin Bushweller and other experts discuss our report on computer-based testing, "Technology Counts 2003, Pencils Down: Technology's Answer to Testing."


About our Guests:

Kevin Bushweller, assistant managing editor at Education Week;

Randy Bennett, distinguished presidential appointee and director of strategic planning for Educational Testing Service (ETS) Research; and

Carole Givens, a 34-year teaching veteran who currently serves as her school’s technology support person.

Moderator: Ron Skinner


Question from raquelbauman, Guidance coordinator, Lowell High School:
What can be done to ensure that all students have the skills necessary to be at ease with computerized assessments?

Carole Givens:
My school system has put a technology trainer in every high school and middle school in the county. This person has to have been a classroom teacher. The technology trainer runs all types of training sessions: helping teachers integrate technology into their lessons, helping students sharpen their technology skills, and helping parents understand the purpose of our technology initiative. It has been quite a successful endeavor.


Question from Mark Baptista, Teacher, Fuller Middle School, Framingham, MA:
On the issues of equity and the technological divide (i.e., the haves and the have-nots), how can testing be done on computers in schools or districts where not every student has an upgraded computer available to use for the tests?

Kevin Bushweller:
Actually, hardware availability is a major problem when it comes to computer-based testing. That will continue to be a problem as schools struggle with budget shortages. However, some experts suggest that small, Game Boy-sized tablet computers, which are much less expensive than personal computers, might be the answer to this problem, because schools might be able to put these devices in the hands of every student.


Question from David Dunn, Assistant Principal, Riverview Junior High:
Seeing that online testing is new to many students, would we expect scores to decrease because not all students are familiar with reading questions and taking tests on a computer? Thank you.

Kevin Bushweller:
As soon as a new test is introduced, even a paper test, scores tend to drop at first, then rise as kids get used to the format of the test. Eventually, the scores plateau. Technology introduces more complexities, though, that could have a significant impact on scores initially. In fact, some research out of Boston College shows that students who are not skilled at using computers tend to do better on paper tests than on computerized exams. However, students who are skilled computer users tend to do better on computerized assessments. This problem is likely to solve itself as more students develop better computer skills and gain more experience taking computer-based exams. But there's also an interesting twist to this problem, particularly with children in special education. Computer-based testing offers those children opportunities for more individualized accommodations, such as visual and audio features. It's possible the introduction of technology could lead to increased scores for some of those students.


Question from Bob Schaeffer, Public Education Director, FairTest: National Center for Fair & Open Testing:
How can users of new computer-delivered testing products be assured that the severe problems that plagued ETS' introduction of the computerized GRE and GMAT (e.g., the widespread "black screen of death" from system failures, erroneous scores not detected or reported for eight months, a flawed scoring algorithm, etc.) will not result in students being misassessed and in faulty decisions being made by reliance on such scores?

Randy Bennett:
My recollection is that these problems were very limited. But even if they happen to one person, it's still very serious. The larger question is: How do you prevent errors from occurring, whether it's a computer-based assessment or a paper one? There have been some widely publicized errors in state assessments over the past few years, and some believe that it's because we are doing more testing, more quickly, than testing agencies can effectively handle. A little less frenzied testing wouldn't be a bad thing.


Question from Audley Chambers, Associate Professor of Music History and Literature, Oakwood College:
1) How do you deal with student collaboration in online testing (cheating to be exact)?

Carole Givens:
The computer only allows a student to take the test once. If they go out of the test, it ends their session. I also walk around the classroom. My physical presence is a real help.


Question from Mark Baptista, Teacher, Fuller Middle School, Framingham, MA:
How are answers that show a student's creativity, and other open responses, assessed? Or is computer-based testing basically for multiple-choice and true/false questions?

Randy Bennett:
In principle, we should be able to present on computer most of the questions we can present in paper-and-pencil, and a whole lot more. Some computer-based tests today include open-ended questions, like essays or simulation problems where a candidate for architectural licensure has to design a building. Some of those tests even use the computer to do the scoring and that scoring is often as good as when done by a human expert. When the computer can’t do the scoring, and creativity is probably one of those cases, the student’s answer can be given to a human judge to score.


Question from J.M. Young, educator, elementary and middle school, Gesu School:
Can you give instructions on how to set up these types of tests, as well as time-management guidelines?

Carole Givens:
It would depend on the browser you are planning to use, and the test or quiz can take as little or as much time as you want. Every type of testing has its own unique set of directions.


Question from Larry Fruth, Knowledge Manager, Ohio SchoolNet:
Is the movement toward online assessments inevitable with the limited number of quality testing companies and the increased demand via NCLB?

Kevin Bushweller:
No. It is not inevitable. You make a good point: There is a limited number of testing companies. However, our report points out that a growing number of new kids on the block are getting into the computer-based testing industry. So, as time passes, there will likely be more testing companies out there. Ensuring they are high-quality operations will be a challenge.

NCLB has had the unusual effect of both encouraging and discouraging the use of online testing. The law's increased demands for schools to test more, and to conduct more sophisticated analyses of test scores faster, have opened up opportunities for computer-based testing to fill the void. However, the law also has a host of competing demands that could lead schools to spend their limited resources on things other than computer-based testing.

Also, the law forbids out-of-grade-level testing. This has posed a problem for advocates of adaptive testing, which adjusts the difficulty of questions based on how well a student answers them. The Education Department is investigating how this problem might be solved, and adaptive-testing advocates point out that adaptive tests can be configured to a single grade level. But that is still a point of contention.
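
For readers unfamiliar with how an adaptive test "adjusts the difficulty of questions," here is a minimal Python sketch of the idea. The item pool, the grade-level restriction, and the simple up/down ability update are hypothetical simplifications for illustration only; operational adaptive-testing systems use full item-response-theory estimation, not this crude rule.

```python
# Illustrative sketch of computerized adaptive testing (hypothetical item pool,
# simplified 1PL/Rasch logic; not any vendor's actual algorithm).
import math
import random

def p_correct(theta, difficulty):
    """Rasch-style probability that a student of ability theta answers correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - difficulty)))

def next_item(pool, theta, grade=None):
    """Pick the unused item whose difficulty is closest to the current ability
    estimate; optionally restrict the pool to one grade level, as NCLB requires."""
    candidates = [i for i in pool if not i["used"]]
    if grade is not None:
        candidates = [i for i in candidates if i["grade"] == grade]
    return min(candidates, key=lambda i: abs(i["difficulty"] - theta))

def run_adaptive_test(pool, true_theta, n_items=10, grade=None):
    theta, step = 0.0, 1.0                    # start at an average ability estimate
    for _ in range(n_items):
        item = next_item(pool, theta, grade)
        item["used"] = True
        correct = random.random() < p_correct(true_theta, item["difficulty"])
        theta += step if correct else -step   # crude up/down update after each answer
        step = max(step * 0.7, 0.1)           # smaller steps as evidence accumulates
    return theta

if __name__ == "__main__":
    random.seed(1)
    pool = [{"difficulty": random.uniform(-3, 3),
             "grade": random.choice([3, 4, 5]),
             "used": False} for _ in range(200)]
    estimate = run_adaptive_test(pool, true_theta=1.2, grade=4)
    print("estimated ability:", round(estimate, 2))
```

The grade argument shows how such a test could, in principle, be confined to one grade level's item pool, which is the configuration adaptive-testing advocates describe above.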


Question from Jan MacLean, Exec. Dir. for Curricular & Instructional Programs, School City of Mishawaka, Mishawaka, IN 46544:
Our district is participating in Indiana Core 40 on-line assessments and Scantron performance assessments for students in grades 2-9. We are very challenged by the demand on our computer labs and by having to remove students from instructional periods to conduct anywhere from 1.5-hour sessions up to 6 hours of testing. Needless to say, these are broken up into 45-minute testing periods. We have purchased a portable, wireless laptop lab at a cost of approximately $60,000. What other technology tools are available to schools that will allow us better Internet access in a wireless environment? We cannot afford multiple wireless labs at this cost, and yet we find our available technologies inadequate. Thank you!

Randy Bennett:
For multiple-choice and other tests that are simple to present and respond to, one option is personal digital assistants. Several companies make PDAs and/or software for delivering tests.


Question from Judy Reynolds, teacher, Indiana School for the Blind:
My questions concern computer-generated testing with blind or visually impaired students.
1. Our students, though each has been granted extended time during "paper/pencil" testing as documented in their IEPs, are locked into a time constraint and must finish the test in the same amount of time as their sighted peers.
2. Some students need large print; some need speech-generated output; some need screen enlargers; some need contrast changes; some need print read line by line with no other visual "noise" around; and even with all these adaptations...students still cannot "see" punctuation marks, italics, or shaded sections.
3. Depending on the eye condition, there can be other obstacles in taking an on-line test. For example, a student with nystagmus has difficulty with a split screen and trying to use a moving cursor. A student with albinism needs the lights off, while a student with any retinal degeneration needs ample lighting. And, for some, visual acuity can vary from day to day.
4. Totally blind students MUST have some device for voice output. If that is acceptable, then are students being tested for reading comprehension or listening comprehension?
5. We then could discuss our students who are deaf-blind and need refreshable braille in order to “read” the test.
All these adaptations are expensive and cost-prohibitive for most schools.

Have other schools with blind populations solved some of these issues? Thanks for the opportunity for input.

Randy Bennett:
One group that's making progress in this area is the Kentucky Department of Education, which administers CATS Online as an accommodation for visually impaired students taking the Kentucky Core Content Assessment. You might want to contact them. http://www.kde.state.ky.us/KDE/Administrative+Resources/Testing+and+Reporting+/District+Support/CATS+Online+Assessment/default.htm


Question from Ginny Garrison, Reading Plus, private dyslexia/ reading specialist:
How can we get parents to be as comfortable with computer testing as with person-to-person, one-on-one testing for disabilities, when they feel that part of their child's problem is test-taking skills?

Carole Givens:
We did a demonstration with a practice test and showed any interested parents the tools that were available to help the students take the test, like calculators, highlighters, the computer program reminding students that they had skipped a question, etc. It demonstrated how much easier it is for a student who might forget to go back and answer a skipped question, because there is no reminder on a paper test.


Question from Dick Schutz, President, 3RsPlus, Inc.:
Since "grade level" is established by items of varying difficulty, aren't those inherently items "out of level"? Isn't the tangle over "grade level" impeding information regarding "What is our children learning?"

Kevin Bushweller:
An adaptive test can be made to pose questions that are within a grade level, based on state definitions and standards. The controversy over computerized adaptive testing arose because students in some states--specifically Idaho and South Dakota--could actually be answering questions that were not part of their grade-level academic standards.


Question from Betsy Sudhoff, teacher, Warsaw High School:
If a school has a wireless network, what can be done during online testing to be certain that the testing is secure?

Randy Bennett:
That's a very good question, and I don't know the answer. I recall that wireless networks allow different levels of security. I'd certainly want to consult an expert. At the least, you'd want to use the highest encryption level, which I think is 128-bit; you'd want to change the encryption key very frequently; you wouldn't want to electronically broadcast the presence of the wireless network (which network software often does by default); and so on.


Question from Gennie Pfannenstiel, Associate Professor of Education, Central Methodist College, Fayette, MO:
Do you see computer-based assessment becoming the standard, thus diminishing the role of portfolios or performance based assessment?

Carole Givens:
I see computer-based assessment as just another tool. I don't think that other types of assessment will disappear. I believe authentic assessment plays an essential role in evaluating the "whole student."


Question from Bill Cherico, HS AP, Yonkers, NY:
NY State sponsors the Regents exams in all major subject areas. These exams are administered statewide at the same time, on the same day. How can computerized testing facilitate this mass undertaking?

Randy Bennett:
One way might be to administer the exams within a short time window, say a couple of weeks. To enhance security, comparable tests could be assembled from a large pool of questions, so that students taking a test would not know what questions they would get even if their friends remembered some of the questions they got.
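
To make the "large pool of questions" idea concrete, here is a minimal Python sketch of assembling comparable test forms from an item pool. The pool, the form size, and the difficulty-spread heuristic are hypothetical; operational testing programs balance content coverage and item statistics far more carefully.

```python
# Sketch of drawing comparable test forms from an item pool (hypothetical data).
import random

def build_form(pool, n_items, rng):
    """Sample candidate items, sort by difficulty, and take a spread across the
    range so different forms end up roughly comparable in overall difficulty."""
    candidates = sorted(rng.sample(pool, k=n_items * 3), key=lambda i: i["difficulty"])
    return candidates[::3][:n_items]

if __name__ == "__main__":
    rng = random.Random(42)
    pool = [{"id": k, "difficulty": rng.uniform(-2, 2)} for k in range(500)]
    form_a = build_form(pool, n_items=20, rng=rng)
    form_b = build_form(pool, n_items=20, rng=rng)
    mean = lambda form: sum(i["difficulty"] for i in form) / len(form)
    shared = {i["id"] for i in form_a} & {i["id"] for i in form_b}
    print(f"form A mean difficulty: {mean(form_a):.2f}, form B: {mean(form_b):.2f}")
    print(f"items shared between forms: {len(shared)}")
```

Because each form is drawn independently from a 500-item pool, two students would share few, if any, questions, while the forms stay close in average difficulty.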


Question from Bob Brewster Consultant:
Poor urban schools are being fitted with computer labs under NCLB, but if poorly trained teachers can't use them, the labs sit idle. Urban kids don't have computers in the home. How can computer testing be fair to all?

Kevin Bushweller:
You make good points. As it is, the lack of teacher training and quality hardware introduces many fairness issues that have to be resolved.


Question from P. Kathleen Shigemi, Teacher, Lethbridge Collegiate Institute:
As a computer teacher with a master's degree focusing on virtual learning, I see some problems with on-line testing. Quizzes and self-tests are fine, BUT students are resourceful, especially those who are reluctant to take personal responsibility for their own learning. How will you determine who is actually at the computer completing the tests? How will you know whose work you are actually evaluating? Is the suggestion that computer testing will replace all forms of evaluation? What kind of weighting or value would these tests have? How will you control students' sharing of information if they are permitted to write at different times?

Carole Givens:
The program I use gives each class a different test from a pool of selected questions. I have the class take the test all at one time. If a student goes out of the quiz for any reason, they are kicked out of the test and I have to start the test for them again. I don’t think online testing should be the only method of assessment. I look at it as another tool that is available to me. The students have a login name and number, but they can cheat--just like with pen and paper. I have only used it in my classroom for quizzes. I was involved in using online testing for the Virginia Standards of Learning and it was quite successful.


Question from Howard C. Mitzel, Research Director, Pacific Metrics:
Even beyond the difficult cost issues, high-stakes computer-based testing in the schools requires a level of technical security and sophistication that would seem difficult for most school district personnel to meet in the near future. For example, testing machines need to have access to Internet browsers disabled. I noticed NAEP is using (at least in part?) laptops brought into the schools. Does bringing computers into the schools appear to be the most viable solution for a large-scale, high-stakes assessment program?

Randy Bennett:
Bringing computers into schools is one option but it’s an expensive one. Another option that some testing agencies use is to employ software that locks down the school computer so that the test controls the machine. That means the student can’t switch out of the test, can’t print, and so on.


Question from Ken Gardner, Teacher, San Bernardino City Unified SD:
As an English teacher, I have concerns as to whether the lessons learned in a traditionally based classroom will translate to skills on the computer well enough to be fair to all of the stakeholders (student, school, district and parents). Could you please address these issues? If there were enough computers for all students to use in lessons, I would imagine this would be less of a concern, however that is not the case in most classrooms. Thank you.

Carole Givens:
We have not found that to be a problem. Students are quite adaptable with computers. It may take a little longer to explain the testing process, but our students enjoyed the fact that it was a different way to test, and they probably spent more time reading the questions and reacting to the answers. They also liked using the different tools that were available in the testing program. One of the smaller counties in my state did all of its Standards of Learning tests on the computer except for geometry, and there are fewer than 100 computers in the school, but the testing went quite well and was a success.


Question from Derrick L. Clady, graduate student (former):
So what if I can get scores back faster; shouldn’t I be more concerned with whether or not a child can THINK of (and understand) the contexts and controversies within a given subject? In short, technology cannot contend with individual human thought or expression. Spend the money on what it should really be spent on: the student and the teacher.

Kevin Bushweller:
We’re not suggesting that simply because results are returned faster that computer-based testing is good. The questions themselves have to be well crafted, and trigger the reflective and analytical thinking that you mention. The problem with paper testing, many educators told us, is that the results often take so long to get that they cannot use them to improve their instruction.


Question from Jason Schwartz, Development Manager, Pacific Metrics Corporation:
Many companies, including Pacific Metrics, have products that allow for the online writing and scoring of essays and other text-based responses. Online assessment will take a real leap forward when the technology is available to allow for the online entry/submission and scoring of open-ended mathematics work such as student-constructed graphs, student-drawn diagrams, student scratch work, etc. What, technically, needs to happen in order for online testing of mathematics to go beyond multiple-choice, numerical free response, and text-based answers?

Randy Bennett:
We've done a lot of work on this problem--student-entered graphs, math expressions, and step-by-step algebra solutions. I think we've made some progress, which hopefully will appear in a real test in the not-too-distant future. The biggest problem with respect to expressions and step-by-step solutions is that almost anything you can find to enter math symbols on a computer is not as easy as writing them with pencil and paper. Given that, I think a solution may be pen-based tablets, like the new Microsoft e-books, once they recognize math symbols.


Question from Roberta Akalin, Counselor, Kenosha Wisconsin:
To have computerized testing presumes that all students and all teachers have access to a classroom set of computers in a user-friendly time schedule. Our high school of 2200 students has over 300 computers--labs in business, in the library/media areas, and in the math departments. This is nowhere near enough to accommodate the technology quest.

Carole Givens:
I agree, but I believe that you will see that changing in the near future. All high school and middle school students in my county are given a computer that they can take home and use during the school year. I believe you will see more and more schools/states doing the same thing.


Question from Bill Culpepper, Consultant, TechLogic Solutions:
How do you see the current and future trends in operating systems used for CBT, among Windows, Macs, and platform-independent HTML-based tests?

Randy Bennett:
The major trend is toward designing assessments that will work on a variety of common platforms. After all, variety is what characterizes the installed base of computers and operating systems that exist in schools today. But that said, it’s a challenge to design a delivery system and a test that works in the same way on all of those hardware and software variations.


Question from Janet Adams, Principal, Categorical Director:
Comment: I use computer-adaptive testing three times a year for our Title I district benchmarks. I have found the adaptive model outstanding in drilling down to our students' gaps in the California Standards. Before, we administered a paper/pencil grade-level assessment. The results were disastrous. The scores proved that the students were not at grade level. However, I wanted the teachers to know exactly which grade-level standards each student had attained, as well as which standards they had failed. I use an outstanding computer-adaptive model that also provides reports that group the students according to one objective at a time.

The teachers are able to immediately design the intervention groups with a quick click of a key. A study guide and common assessments can be created for that intervention group. NCLB states that our district and schools will show AYP. Immediate interventions for each student, and help in specific standards-based materials, are developed through the computer software and align with the reports from the computer-adaptive test. I read in the article that one barrier for my classroom teachers would be that computers would not be available to test all the students. This same company designed a PDA ($100/Palm) program in which the software downloads the teacher-designed assessment directly to the PDAs. The students take the test, point to the infrared station on top of the teacher's computer, and the program immediately designs reports for the teacher. Disaggregation of data is automatic. If the classroom does not have PDAs, we can use a specially designed Scantron sheet; the students zip their answers through a small scanner networked to the same computer. All answers are scanned and reported in a few minutes.

That is an exciting and helpful use of technology so that NO Child will be left behind.

Kevin Bushweller:
That sounds very interesting. It sounds like you’ve had quite a bit of experience with adaptive testing. Do you think the future of computer-based testing will be determined more by tablet-sized computing devices rather than personal computers?


Question from Brenda Webb, Administrator, Sullivan County Schools:
Technology is a significant component of NCLB, but I have not seen an instrument that measures technological literacy levels relative to effective integration. When can educators expect to have an instrument that will measure how students perform in the field of technology? Other than using E-TOTE and STaR Chart reports in Tennessee, I do not think this has been adequately defined, although an emphasis on technological literacy has been around school venues for a long time.

Kevin Bushweller:
Yes, there isn’t much in the way of testing technology skills. However, there is some movement in this area. According to our report, three states--New York, North Carolina, and Utah--all test students’ technological literacy based on state tech standards. And Pennsylvania plans to implement a technology test in 2004-05.


Question from Ed Kowieski, Acct. Mgr. Scilearn:
I work with many special education personnel who would love to have a VALID computerized assessment for reading/learning difficulties such as CAPD, dyslexia, ADD/ADHD, etc. What is being done to develop a tool that would allow them to assess groups at a time vs. the individualized one-to-one assessments currently available?

Randy Bennett:
I can't suggest anything in particular, as I'm not up to date on this field. You might contact the National Center for Educational Outcomes (NCEO) at the University of Minnesota, which specializes in this area. Martha Thurlow is their director. Bob Dolan at the Center for Applied Special Technology (CAST) is also very knowledgeable.


Question from Manny Torres, M.Ed., Chandler, AZ:
Will computer-based testing increase the digital divide when poor schools cannot access the technology?

Kevin Bushweller:
It could if wealthier schools obtain better computer hardware while poorer schools struggle with older computers that tend not to work as efficiently. If the student-to-computer ratios don’t improve in poorer schools, that could lead to a widening of the digital divide. However, our report actually showed that the computer availability gap between wealthy and poor students continues to close, and is not nearly as wide as it was just a few years ago.


Question from Derrick L. Clady, former graduate student (Education Curriculum Studies):
With reference to adaptive testing, how would such an exam be an accurate representation of how a student learns, as well as of whether the student has retained information? Would not such an exam pose more of a proficiency gap between students? High-stakes testing is similar to gaining confessions through fear of torture; are the confessions [exam results] accurate at all?

Kevin Bushweller:
Adaptive testing experts argue that this type of assessment will actually identify more closely the skill level of each student. One of the problems with traditional tests is that they are based on an average student’s skill level. As a consequence, they tend to underestimate the skills of high-achievers. Such tests run the risk of getting little information on the skills of low achievers.


Question from Steven Glazerman, senior researcher, Mathematica Policy Research:
Group average performance is useful to policymakers for accountability and the evaluation of interventions, whereas tests that are precise at the individual level are useful to parents and teachers for diagnosis and placement. How promising is computer testing for those who are interested in the *average* performance of a group, as opposed to the *individual* performance of any one child?

Carole Givens:
In my experience with online testing, I have known both individual performance and the average performance of the group. I think both types of statistics are usually available.


Question from John Fallon, Choral Director, Walhalla High School:
Are these exams going to be SCORM compliant?

Randy Bennett:
SCORM is a specification for developing, packaging, and delivering educational courseware. The goal is to have manufacturer X's system play manufacturer Y's courseware, as well as everyone else's. If online test developers are smart, they're going to adopt some such standard so that X's test items can play in Y's delivery system and be scored by Z's automated essay-scoring program.


Ron Skinner (Moderator):
FYI- SCORM is the Sharable Content Object Reference Model.


Question from Shelley Richardss, Dir. of Trg, EPIC:
Are there national standards for computer based testing?

Kevin Bushweller:
No--not specifically for computer-based testing.

But the International Society for Technology in Education (ISTE) has developed general technology standards for K-12 schools.


Question from Connie Schofer, VP, Media Management Services:
How do you see wireless access and handhelds affecting the time frame for adoption of on-line testing in the K-12 market?

Randy Bennett:
As computers get cheaper, smaller, and more powerful, they become easier for schools to purchase. So the effect, I think, will be to shorten the adoption time frame for some kinds of online testing.


Question from Connie Schofer, MMS:
So many fads come and go in education and focuses change with each administration. What is your evaluation of the “staying power” of computerized testing? Flash in the pan that won’t “pan” out or here to stay?

Carole Givens:
I definitely think it is here to stay. Many states have invested a lot of time, money, and energy in using online testing for their "high stakes" tests. Also, computers have already become an important part of both teaching and learning, so testing is the natural next step.


Question from :
What kind of software is available to make computer-based testing accessible to hearing or sight impaired students?

Kevin Bushweller:
Various technologies are evolving. I would suggest you contact the Center for Applied Special Technology (CAST) in Wakefield, Mass. Its purpose is to expand educational opportunities for students with disabilities through technology.


Question from Lan Neugent, Assistant Superintendent for Technology, Virginia DOE:
We have conducted comparability studies that seem to indicate that our on-line and pencil and paper tests are comparable in a high stakes environment. Are other states or vendors doing comparability studies for their on-line testing? If so, what are they finding?

Randy Bennett:
The basic question such studies address is whether the scores from computer-based tests and paper ones mean the same thing, which is important if a state is offering a test in both modes. States are doing such studies. The two I’ve read, Oregon’s and North Carolina’s, found that at the elementary level the two modes produced scores that didn’t mean the same thing. At the secondary level, the Oregon study suggested that the modes were more comparable. But that’s only the results of two studies. We need more research on this very important topic.
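
As a rough illustration of what a comparability study examines, here is a minimal Python sketch that compares score distributions from the two modes using a standardized mean difference. The scores are simulated and the 0.2 threshold is arbitrary; real comparability studies use matched samples, equating, and far more rigorous statistics than this.

```python
# Sketch of a simple mode-comparability check (simulated scores, not real data).
import math
import random

def cohens_d(a, b):
    """Standardized mean difference between two score distributions."""
    mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
    var = lambda xs, m: sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    pooled_sd = math.sqrt(((len(a) - 1) * var(a, mean_a) + (len(b) - 1) * var(b, mean_b))
                          / (len(a) + len(b) - 2))
    return (mean_a - mean_b) / pooled_sd

if __name__ == "__main__":
    rng = random.Random(0)
    paper = [rng.gauss(500, 50) for _ in range(300)]    # paper-and-pencil scores
    online = [rng.gauss(495, 50) for _ in range(300)]   # computer-based scores
    d = cohens_d(online, paper)
    print(f"effect size (online minus paper): {d:+.2f}")
    print("modes look roughly comparable" if abs(d) < 0.2 else "modes may differ")
```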


Question from Scott Weersing, professor, National University:
Is the use of computers for testing in compliance with No Child Left Behind’s technology goals?

Kevin Bushweller:
NCLB's technology goals call for students to become proficient users of technology. In that sense, learning how to take computer-based exams--which are quite common in higher education and the corporate world--would show that students are proficient in that skill. But whether that meets the specific tech goals of NCLB is something we'd need to take a closer look at.


Question from Michael Bakatsias, Director of Technology Highland Central School District, NY:
How are schools handling the administration of these tests? Do they take place in a lab environment or in the classroom? We have both at Highland: a 30-station computer lab and three computers in each classroom. How does this model fit with these assessments?

Carole Givens:
In my county, each high school and middle school student is given a computer by the school system. We lease the computers from Apple, so the students use their own computer for the testing. Other school systems in our state use labs, but I think you would need more than one lab to do any significant amount of testing.


Question from Larry Fruth, Knowledge Manager, Ohio SchoolNet:
Do states that have some type of online assessments automatically feed this information into their student information/data systems for local, state, or federal reporting?

Kevin Bushweller:
That’s the eventual goal, but in most cases, states are still working out how to do that.


Question from Jason Schwartz, Development Manager, Pacific Metrics Corporation:
Thank you for your response to my earlier question. I’d like to follow up a bit, perhaps in a slightly different direction. It would seem that online testing would make the testing of certain objectives impractical or impossible (for example, one Maryland High School standard requires that students perform geometric constructions using a compass and straight-edge). Do you expect that standards like this will simply fall off test blueprints in the push to move online, or do you expect that these same standards will in essence kill or delay the movement of certain tests to an online format?

Randy Bennett:
I would certainly hope that whatever content standards educators and the public deem of value appear on state assessments. We should take for granted that, for some standards we’ll continue to use paper tests, whereas for others we’ll use online assessments.


Question from Nan Drinkard, Certified Teacher, Software Consultant:
Why have the tests been created to be flexible? Shouldn’t they be a standard? Let’s consider a plumb line...it does not deviate because there is only one true line.

Kevin Bushweller:
I believe you’re referring to adaptive testing. The makers of these tests make them flexible (meaning test questions vary in difficulty) because they believe that flexibility allows for digging deeper into each student’s base of knowledge. However, the Education Department is still concerned that too much flexibility in these tests presents problems. Even so, the Education Department has not entirely ruled out the possibility of using adaptive testing.


Question from S. Templet, Para Educator, Ascension Parish School Board:
What about the students/group with modifications such as oral testing? Would each child need someone to read the test to them or would small group computer testing be allowed?

Kevin Bushweller:
Because this is still an evolving field, specific accommodations like that are still being worked out.


Question from Terry Sacket, Sci. Supervisor, Enid High School (OK):
(1) Are there presently any turnkey systems/applications that districts may obtain that are web-based and allow teachers to create their own archived test bank? If so, can they be shared among teachers? (2) Are there performance-based assessments built with portfolio systems? I know of the new ALCA server being used in TX, OK, and elsewhere, but are there others as well?

Randy Bennett:
There are quite a few software programs available for item banking, ranging from the very sophisticated kind that you obtain from testing organizations to the type that comes on CD-ROM and runs on your local machine. If you check the web just by searching on "item banking," you'll find a few.


Question from Larry R. Crabbe, Director, Research and Evaluation, Elk Grove USD, CA:
In a world where “high stakes” assessment programs are continually challenged in court ... on what grounds are computer-based assessment programs likely to be challenged by their opponents?

Kevin Bushweller:
Probably the most likely argument would involve equity issues, such as the availability of quality computers, or the disparities between youngsters who know how to use computers well and those who don't.


Question from Scott Marion, Senior Associate, Nat’l Center for the Improvement of Educational Assessment:
Computer-based testing and computer-adaptive testing offer many promises, but I have yet to see either of these two approaches include open-response type questions that ask students to wrestle with complex problems and posit solutions. Have you seen such creations and have you seen them used in large-scale settings? Thanks.

Randy Bennett:
Yes, I have. The two most impressive are the National Council of Architectural Registration Boards' (NCARB) Architect Registration Examination and the United States Medical Licensing Examination. The first includes a section in which the examinee must use the computer to design a building or building unit. The second includes a section in which the examinee is presented with a patient's history and presenting problem and must manage the case--i.e., decide what tests to run, what treatment actions to take, and what diagnosis to give. These are very sophisticated assessments that couldn't easily be given the traditional way. Everyone who wants to be licensed as a physician in the US takes that test. Everyone who wants to be licensed as an architect in the US, I believe, takes the NCARB exam.


Question from Connie Schofer, MMS:
What type of training do those administering computer based tests receive prior to testing? Are there any standards for this training?

Carole Givens:
I went through a 4-hour training session. Teachers at my school were then trained by me, but I stayed in the classrooms that were doing the Standards of Learning tests throughout the testing process. Also, our teachers are trained on how to use the online test-maker that is part of Blackboard.


Question from Paulette Johnson, Business Education Teacher, Cedar Shoals High School:
Do you see a move towards national online tests in specific subject areas at the high school level that will meet graduation requirements at a national level?

Randy Bennett:
I think there will be a move toward state online tests in specific HS subject areas--so-called "end-of-course" tests--that will meet state graduation requirements. ("National" tests are a political hot potato.) FYI, here's an article that explains why I think we'll be moving that way for testing generally: http://www.bc.edu/research/intasc/jtla/journal/v1n1.shtml


Question from Ed Sloat, Director of Research, Peoria Unified School District, Glendale, Arizona:
Could you comment on the type of "improvement" reports/information on student learning that CAT can provide to classroom teachers? (The underlying IRT models that CAT utilizes provide the opportunity to use testing information in a much more exacting way to shape instructional improvement, interventions, ...)

Kevin Bushweller:
CAT can show more precisely what grade level a student’s knowledge is at. So you would know if a 3rd grader is ready for 5th grade work. Conversely, you might see that a 3rd grader is having trouble mastering even 2nd grade work.


Ron Skinner (Moderator):
FYI- CAT is computerized adaptive testing; IRT is item response theory.


Question from Ed Sloat, Director, Research, Peoria Unified School District, Glendale, Arizona:
Is there an online resource that educators who are thinking of implementing on-line testing can consult to learn about various CAT software programs?

Randy Bennett:
Yes, try ERIC and look under computer-assisted testing:

http://ericae.net/nintbod.htm


Question from :
Folks, it seems difficult for education to embrace technology. The combination of technology and testing is a perfect example of a model that is ultimately cheaper, faster, and in many ways better than the traditional testing mode, yet educators have been slow to move forward. To what do you attribute this?

Randy Bennett:
One could make the case that it’s a matter of doing what is familiar and comfortable. As a new generation of teachers comes into the workforce--one that has been raised with Instant Messaging, the WWW, and email--the familiarity and comfort levels will have changed and, hopefully, so will the readiness to embrace technology in testing and instruction.


Question from Connie Schofer, MMS:
A follow-up on test-administrator training: Who is offering this training? Is it part of a package or provided separately?

Randy Bennett:
In my experience, for state assessments, the training is usually either by the state or by the testing vendor under contract to the state.


Question from Larry Crabbe, Director, Res/Eval, Elk Grove USD (CA):
Do any existing systems include the capability to produce “running” or cumulative norms for the agencies using them? Do they provide “item analysis” type information for test development, etc.?

Randy Bennett:
If you mean “real time” norms, I don’t know of any that do now. Some of the testing companies present students’ responses to open-ended questions to human judges, who score those responses online. The administrators who supervise that scoring can see in real time how that scoring is going--that is, whether scorers are agreeing with one another in the judgments they make, which questions are producing lower agreement than other questions, and so on.
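
To make the real-time agreement monitoring concrete, here is a minimal Python sketch that computes exact agreement between two human raters, question by question, for a batch of double-scored responses. The data and the retraining threshold are hypothetical; vendors' scoring dashboards track this with more sophisticated indices than simple exact agreement.

```python
# Sketch of per-question rater-agreement monitoring (hypothetical double scores).
from collections import defaultdict

def exact_agreement(pairs):
    """Fraction of double-scored responses on which the two raters gave the same score."""
    return sum(1 for r1, r2 in pairs if r1 == r2) / len(pairs)

if __name__ == "__main__":
    # (question_id, rater1_score, rater2_score) for a batch of double-scored essays
    batch = [("Q1", 3, 3), ("Q1", 2, 3), ("Q1", 4, 4),
             ("Q2", 1, 3), ("Q2", 2, 4), ("Q2", 3, 3)]
    by_question = defaultdict(list)
    for qid, r1, r2 in batch:
        by_question[qid].append((r1, r2))
    for qid, pairs in sorted(by_question.items()):
        rate = exact_agreement(pairs)
        flag = "  <- flag for scorer retraining" if rate < 0.5 else ""
        print(f"{qid}: exact agreement {rate:.0%}{flag}")
```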


Ron Skinner (Moderator):
Thank you, everyone, for your excellent questions, and thank you to our guests for your thoughtful responses. The chat is now closed. A transcript will be posted later today.


The Fine Print
All questions are screened by an Education Week online editor and the guest speaker prior to posting. A question is not displayed until it is answered by the guest speaker. We cannot guarantee that all questions will be answered, or answered in the order of submission. Concise questions are encouraged.

Please be sure to include your name and affiliation when posting your question.

Education Week maintains Live Chat as an open forum where readers can participate in a give-and-take discussion with a variety of guests. Education Week reserves the right to condense or edit questions for clarity, but editing is kept to a minimum. Questions may also be reproduced in some form in our print edition. We attempt to correct errors in spelling, punctuation, etc. In addition, we remove statements that have the potential to be libelous or to slander someone. In cases in which people make claims that could be libelous, we will remove the names of institutions and departments. But in those cases, we will not alter the ideas contained in the questions.
