A STEP in the Right Direction

NYU Polytechnic School of Engineering Graduate Student Takes Top Honors for Paper Proposing a Scalable Testing and Evaluation Platform

Online labor markets like oDesk and Elance have made it easier than ever for employers and prospective employees to find one another. Such services give companies access to a massive and diverse pool of willing workers and provide individuals with the opportunity to chart their own careers and pursue gratifying jobs. One major challenge, however, is to build effective assessment systems that can reliably and efficiently evaluate the skills of the participants in order to make the best match between worker and job.

Maria Christoforaki, a doctoral candidate in the Department of Computer Science and Engineering at the NYU Polytechnic School of Engineering, realized that existing approaches to the problem have major drawbacks. For example, while many platforms already allow users to take online tests to verify their skills, unsupervised online testing invites cheating, since questions frequently leak and become readily available to applicants in advance. Moreover, fast-moving skills like programming require tests to be updated frequently as the technology advances.

Along with Panos Ipeirotis, an Associate Professor and George A. Kellner Faculty Fellow at the NYU Stern School of Business, Christoforaki developed a scalable testing and evaluation platform (STEP) that allows for continuous generation and evaluation of test questions. STEP leverages content already available on question-answering sites and repurposes those questions to generate tests. The system uses a crowdsourcing component for editing the questions, along with automated techniques for identifying promising threads. This continuous question generation decreases the impact of cheating and yields questions closer to the real-life problems that the skill holder is expected to solve. STEP also identifies the questions with the greatest power to distinguish workers likely to succeed. The system, which is already being tried out commercially, generates questions of quality comparable to or higher than that of most tests in use, and does so at a lower cost than licensing questions from existing test banks.
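The article does not spell out how STEP scores a question's predictive power, but a standard psychometric proxy for it is the point-biserial correlation between answering a question correctly and a test taker's overall score. The following is a minimal sketch of question selection along those lines; the function and variable names are hypothetical illustrations, not the authors' actual implementation.

    import math
    import statistics

    def point_biserial(item_correct, total_scores):
        """Correlation between a binary item outcome (1 = answered correctly)
        and each test taker's overall score. Higher values mean the question
        better separates strong workers from weak ones."""
        right = [s for c, s in zip(item_correct, total_scores) if c]
        wrong = [s for c, s in zip(item_correct, total_scores) if not c]
        sd = statistics.pstdev(total_scores)
        if not right or not wrong or sd == 0:
            return 0.0  # a question everyone gets right (or wrong) discriminates nothing
        p = len(right) / len(total_scores)
        return (statistics.mean(right) - statistics.mean(wrong)) / sd * math.sqrt(p * (1 - p))

    def select_questions(responses, scores, keep=20):
        """responses: {question_id: [0/1 per worker]}; scores: overall score per worker.
        Returns the `keep` question ids with the greatest discriminating power."""
        ranked = sorted(responses, key=lambda q: point_biserial(responses[q], scores),
                        reverse=True)
        return ranked[:keep]

In a pipeline like the one described, newly generated questions could be field-tested on incoming workers and periodically re-ranked this way, so that weak or leaked questions are retired while the most discriminating ones remain in rotation.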

Christoforaki and Ipeirotis presented their findings at the Association for the Advancement of Artificial Intelligence's 2014 Conference on Human Computation and Crowdsourcing (HCOMP 2014), where their paper received the Best Paper Award from a large and competitive pool of submissions.

"Maria is smart, energetic, and very independent,” says Professor Torsten Suel, who oversees her graduate work. “She is a wonderful Ph.D. student to have, and I am thrilled she received this honor."