Research

The Department of Computer Science and Engineering at NYU Polytechnic School of Engineering is active in research in a number of key areas of Computer Science. Our research has been funded by grants from government agencies such as the National Science Foundation, NASA, the Office of Naval Research, the Air Force, and the New York State Office of Science, Technology, and Academic Research, as well as by private companies and foundations such as IBM, Hewlett-Packard, AT&T, the Sloan Foundation, Panasonic, Intel, and Verizon.

Our research strengths are in:

Internet and Web Research
Cybersecurity
Graphics, Visualization, Vision, and Image Processing
Theoretical Computer Science

Internet and Web Research

Faculty: Katherine Isbister, Keith W. Ross, Torsten Suel, Joel M. Wein, Yong Liu, Shivendra Panwar, Yao Wang

Today's Internet is arguably the largest engineered system ever created by mankind, with hundreds of millions of connected computers, communication links, and switches; hundreds of millions of users who connect intermittently via cell phones and PDAs; and a growing range of devices such as sensors, webcams, game consoles, and picture frames. The Internet and the Web continue to grow and transform themselves, with new developments in cloud computing, wireless access, peer-to-peer applications, Web search, online social networking, and multiplayer online games. The School of Engineering has been a leader in the areas of computer networking and Web research for many years. Our current strengths include:

Peer-to-peer Networking

In the peer-to-peer paradigm, users pool their computing, storage, and bandwidth resources, creating new and powerful shared applications. At the School of Engineering we are developing new live and on-demand P2P distribution applications, measuring and studying current applications, and investigating incentive and security issues surrounding P2P applications.

Cloud Services and Networking

Google, Microsoft, Yahoo, Amazon, Akamai, Limelight, Facebook and many other companies are deploying massive data centers throughout the world, which will likely transform computing as we know it today. In the future, the "cloud" may manage all of our email, files, data, and applications. Our "computers" may simply become hand-held extensions of the cloud. At the School of Engineering we are examining how to design the cloud-computing infrastructures of the future.

Web Search, Web Mining, and Social Networks

The Internet, and the Web in particular, draws users with two main attractions: content and communication. By content we mean the vast amounts of information and entertainment that are now available online. By communication we mean the many opportunities for users to interact with each other and with the content. To take full advantage of these attractions, however, we need tools that allow us to find content of interest and users we want to interact with, as well as suitable mechanisms that enable interesting and robust online communities. At the School of Engineering we study how to design future generations of web search and navigation tools, how to scale search engines to ever larger data sizes, and how to structure online social communities (such as YouTube or Wikipedia) for safe and efficient user interaction.

Multiplayer Games and Online Virtual Worlds

Games have become not just a major leisure pastime; they have also shaped expectations for how people communicate and socialize online. A recent survey reported that 97% of all American youth play video games. Developing and maintaining 3D worlds that operate at this scale and volume poses myriad challenges for computing. Designing avatars and interaction paradigms, and evoking player emotions and social connections in new ways that make full use of this capacity, pose defining challenges for human-computer interaction specialists. Games have also set a standard for engaging, dynamic real-time experiences, one that has challenged educators and led to initiatives to investigate ways to tap the power of games in creating learning experiences, as well as to monitor the success of such experiences through instrumentation and metrics. School of Engineering faculty engage with many of these questions, including distributed systems design to support online gaming and virtual worlds; gaming HCI and user experience/usability; metrics, data analysis, and visualization; and the design and instrumentation of games for learning.

Cybersecurity

Faculty: Phyllis Frankl, Ramesh Karri, Nasir Memon, Marco Pistoia, Keith W. Ross, Joel M. Wein

Cybersecurity is undoubtedly one of the most important areas of computer science and engineering. The School of Engineering has a long history of research and education in cybersecurity. The Information Systems and Internet Security (ISIS) Lab serves as the focus of our research and education activities.

We also have strong educational programs in cybersecurity, offering a dozen different courses in security. In 2002 we received the NSA Center of Excellence in Information Assurance designation, which was renewed in 2005 and again in 2008. In 2008 we were also designated a Center of Excellence in Information Assurance Research by the NSA. Since 2003, our curriculum has been certified to meet the National Training Standards NSTISSI-4011 and NSTISSI-4013 for Information Systems Security Professionals set by the Committee on National Security Systems (CNSS). In 2008 we also obtained approval for the 4016 certification.

Vulnerability Analysis

Errors in the design and implementation of software are a major source of security vulnerabilities. Hackers may exploit such vulnerabilities, for example, to gain unauthorized access to private data, to direct users’ browsers to malicious web sites, or to execute malicious programs on the application’s host computer. At the School of Engineering, we are developing automated techniques to analyze software to detect vulnerabilities, to automatically fix certain vulnerabilities, and to test for security-related vulnerabilities and other bugs. One particular focus of current research is analysis, transformation, and testing techniques for programs that interact with databases.
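As a rough illustration of the kind of database-related vulnerability such analyses target, the sketch below (plain Python with the standard sqlite3 module; the table and function names are hypothetical, not taken from our tools) contrasts a query built by string concatenation, which is open to SQL injection, with a parameterized query that is not:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: the username is spliced directly into the SQL text,
    # so input like "x' OR '1'='1" changes the meaning of the query.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Fixed: a parameterized query keeps the input as data, not SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    malicious = "nobody' OR '1'='1"
    print(find_user_unsafe(conn, malicious))  # returns every row
    print(find_user_safe(conn, malicious))    # returns nothing
```

Automated analysis aims to find patterns like the first function in large code bases, and in some cases to rewrite them into the second form.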

Peer-to-Peer Security

Today the P2P paradigm is used for a wide range of applications, including file sharing (BitTorrent), live video distribution over the Internet (PPLive), and voice-over-IP (Skype). Because of their decentralized nature, P2P applications are highly vulnerable to attacks and can also serve as engines for large-scale DDoS attacks. At the School of Engineering we are studying both attacks on and attacks from P2P systems. We are also studying how current P2P systems can be defended, and how future systems can be designed to be more resilient. The work involves measurement of real-world P2P systems, design of new protocols and architectures, and mathematical modeling.

Multimedia Forensics

In the analog world, an image (specifically a photograph) has generally been accepted as a "proof of occurrence" of the event it depicts. In today's digital age, we can no longer take the authenticity of images and videos for granted, be they analog or digital. Image and video forensics, in this context, is concerned with uncovering some underlying fact about an image or video. Our research on image forensics has come to focus mainly on three types of problems: 1) Image origin/type identification; 2) Image source identification; and 3) Image forgery detection.

Biometrics

Biometrics has found increasing adoption over the past few years in a variety of applications, including authentication and identification. However, there are widespread security and privacy concerns about the dangers of using biometric data in a ubiquitous and unchecked manner. Although a great deal of research over the past few decades has gone into techniques for capturing and matching biometric data, security and privacy issues have received comparatively less attention. The main objective of our research is to develop simple, practical, and provably effective cryptographic techniques for the security and privacy of biometric data.

Watermarking and Digital Rights Management

Watermarking techniques have primarily been aimed at multimedia applications, where the models and constraints have similar characteristics. Today a new generation of applications is emerging that has different goals, requirements, and constraints. Our research takes a multi-pronged approach that applies the ideas and principles of watermarking to physical-layer communication and network-security problems. In addition, we are analyzing the security aspects of many watermarking applications for digital rights management and improving existing techniques and protocols with reference to the prevalent security paradigm.

Wireless Security

The burgeoning popularity of wireless devices and gadgets has brought new services and possibilities to users. There are many everyday usage scenarios in which two or more devices need to “work together.” Other emerging scenarios involving sensors and personal RFID tags are expected to become commonplace in the near future. Since wireless communication is easy to eavesdrop upon and manipulate, devices must be securely associated or “paired” before they can work together. Our research addresses this fundamental problem of securing wireless communication in a variety of settings.

Steganography

Steganography refers to the science of invisible communication. Unlike cryptography, where the goal is to secure communications against an eavesdropper, steganographic techniques strive to hide the very presence of the message from an observer. In our research on image steganography and steganalysis we have contributed by designing novel steganographic and steganalysis techniques, by incorporating information-fusion techniques into the steganalysis problem, and by developing techniques for cover-image selection.
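As a minimal illustration of the basic idea (not of our specific techniques), the sketch below hides a short message in the least-significant bits of a list of grayscale pixel values; a realistic scheme must also resist statistical steganalysis:

```python
def embed(pixels, message):
    # Hide each bit of the message in the least-significant bit of one pixel.
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover image too small for this message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit
    return stego

def extract(pixels, length):
    # Recover `length` bytes from the least-significant bits.
    bits = [p & 1 for p in pixels[:length * 8]]
    return bytes(
        sum(bits[i * 8 + j] << j for j in range(8)) for i in range(length)
    )

cover = list(range(256)) * 4            # stand-in for grayscale pixel values
stego = embed(cover, b"hello")
assert extract(stego, 5) == b"hello"    # visually the image is unchanged
```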

Fault-Tolerant Distributed Cryptography

Cryptography is based on the assumption that cryptographic keys are both readily available and secret. In practice, however, this assumption is often invalid. Threshold (distributed) cryptography is a tool that allows keys and cryptographic operations to be distributed among multiple nodes, providing improved availability and secrecy. Our research focuses on the design, development, and evaluation of efficient distributed cryptographic protocols, with an emphasis on building fault-tolerant online security services (e.g., certification), user-centric services that exploit social networks, and decentralized key management in mobile ad hoc networks.
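One classical building block in this area is Shamir secret sharing, which splits a key among n nodes so that any k of them can reconstruct it while fewer than k learn nothing. The sketch below is a minimal, illustrative Python implementation over a prime field (the prime and parameters are chosen for the example, and it is not one of our protocols):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for small demo secrets

def split(secret, n, k):
    # Encode the secret as the constant term of a random degree-(k-1)
    # polynomial and hand out n points on that polynomial as shares.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = split(123456789, n=5, k=3)
assert reconstruct(shares[:3]) == 123456789   # any 3 of the 5 shares suffice
```

Fault tolerance comes from the fact that up to n - k nodes can fail (or be compromised) without destroying either the availability or the secrecy of the key.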

Usable Security

It is well accepted that human users tend to be the weakest link in the security of a computer system. For example, users choose weak and short passwords, reuse the same passwords across multiple sites, fall prey to various social engineering attacks, and ignore security warnings. Our research studies both the weaknesses and the strengths of human users and incorporates those strengths into secure system design. Currently, we are developing novel approaches to strong user authentication (e.g., graphical passwords, mobile-phone-assisted authentication) and user-aided device authentication.

Graphics, Visualization, Vision, and Image Processing

Faculty: Yi-Jen Chiang, Nasir Memon, Edward K. Wong, Yao Wang, Ivan W. Selesnick

Today computers and computer-generated images touch many aspects of our daily lives. Computer imagery is found on television, in movies, in weather reports, in medical diagnosis, and in surgical procedures, among other settings. The combination of computers, networks, and the complex human visual system, through computer imaging, has led to new ways of displaying information, seeing virtual worlds, and communicating with people and machines. The School of Engineering has been strong in several areas of computer imaging. Our current strengths include:

Computer Graphics and Visualization

Computer graphics is concerned with the technologies used to produce images by computer. Visualization concentrates on techniques for creating images and animations through computer graphics to display information, and it has been one of the most direct and effective ways of understanding large amounts of data, such as data from medical diagnosis or from computer simulations in scientific and engineering applications. In recent years, one of the major challenges in graphics and visualization has been how to deal with very large datasets. At the School of Engineering, we have been developing high-performance graphics and visualization techniques for large datasets along three major paradigms: (1) out-of-core techniques, which are algorithms specifically designed to reduce the computational bottleneck (the I/O communication between main memory and disk) when the datasets are too large to fit in main memory and must reside on disk; (2) graphics compression; and (3) multi-resolution/level-of-detail techniques, which render datasets at just the right level of detail to achieve both high image quality and fast computing speed. We have been applying and integrating these paradigms to develop efficient approaches for dealing with large datasets in graphics and visualization problems.
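As a toy illustration of the level-of-detail idea only (the level names, triangle counts, and distance thresholds below are made up for the example), a renderer can pick the coarsest precomputed version of a model that still looks acceptable at the viewer's current distance:

```python
# Precomputed versions of one model at decreasing resolution. A real pipeline
# would store actual mesh data rather than these placeholder counts.
LEVELS = [
    {"name": "full",   "triangles": 1_000_000, "max_distance": 10.0},
    {"name": "medium", "triangles":   100_000, "max_distance": 50.0},
    {"name": "coarse", "triangles":    10_000, "max_distance": float("inf")},
]

def select_level(distance_to_viewer):
    # Choose the first (finest) level whose quality is still warranted at this
    # distance, trading triangle count (rendering speed) against image quality.
    for level in LEVELS:
        if distance_to_viewer <= level["max_distance"]:
            return level
    return LEVELS[-1]

for d in (5.0, 30.0, 500.0):
    print(d, "->", select_level(d)["name"])   # full, medium, coarse
```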

Computer Vision

Computer vision deals with the understanding and interpretation of the contents of digital images and videos. At the School of Engineering, we have been developing robust techniques and algorithms for many real-world vision problems. These include digital image forensics for security applications, document image watermarking for document control and security, fingerprint image analysis for biometric applications, analysis of computed tomography (CT) scans for better cancer treatment, robust techniques for image communication over wireless networks and the Internet, feature extraction from images for image retrieval applications, and feature extraction from video for multimedia and surveillance applications.

Image Processing

Image processing is any form of signal processing for which the input is an image, treated as a two-dimensional signal; the output can be either an image or a set of characteristics or parameters describing the image. At the School of Engineering we are active in image processing research, including image and video compression, image/video transport over noisy channels, multimedia signal processing, medical image processing, digital signal processing, and wavelet-based image/video processing.
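As a simple example of treating an image as a two-dimensional signal, the sketch below applies a box-blur filter, replacing each pixel with the mean of its neighborhood; it is illustrative only and ignores the efficiency concerns of real image processing pipelines:

```python
def box_blur(image, radius=1):
    # Replace each pixel with the mean of its (2*radius+1) x (2*radius+1)
    # neighborhood, clipping the window at the image borders.
    rows, cols = len(image), len(image[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total, count = 0.0, 0
            for dr in range(-radius, radius + 1):
                for dc in range(-radius, radius + 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += image[rr][cc]
                        count += 1
            out[r][c] = total / count
    return out

# A tiny 4x4 grayscale image: a bright square on a dark background.
noisy = [[0, 0, 0, 0], [0, 255, 255, 0], [0, 255, 255, 0], [0, 0, 0, 0]]
print(box_blur(noisy))   # edges of the square are smoothed out
```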

Theoretical Computer Science

Faculty: Boris Aronov, Lisa Hellerstein, John Iacono

Theoretical computer science addresses the question of what can and cannot be accomplished using a computer. It includes the development of efficient algorithms for solving specific computational problems. It also addresses the intrinsic difficulty of computational problems, including the effort to determine, for individual problems, the minimum amount of computation required to solve them.

Data Structures

This area studies ways of organizing data so that operations on it, such as searching and updating, can be performed quickly. Faculty research includes data structures for data in main memory, data on disk, and data interacting with the memory hierarchy.
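A small, illustrative example of the trade-offs involved (not drawn from any particular faculty project): keeping items in a sorted array makes membership queries fast but insertions slow, whereas other structures make the opposite trade.

```python
import bisect

class SortedList:
    # Keeps items in sorted order, so membership queries need only
    # O(log n) comparisons, at the cost of O(n) work per insertion
    # to keep the underlying array in order.
    def __init__(self):
        self.items = []

    def insert(self, x):
        bisect.insort(self.items, x)

    def contains(self, x):
        i = bisect.bisect_left(self.items, x)
        return i < len(self.items) and self.items[i] == x

s = SortedList()
for v in (42, 7, 19, 3):
    s.insert(v)
print(s.contains(19), s.contains(20))  # True False
```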

Computational Geometry

Much of the data currently processed by computers is geometric in nature. Applications involving geometric data include computer graphics, path planning in robotics, image analysis, solid modeling, and geographic information systems. Computational geometry studies how to efficiently perform common computations on geometric objects, such as determining where the subway exits are located, given a city block map and a subway map. Computational geometry is also related to combinatorial and discrete geometry. Surprisingly, computational questions about geometric objects are often connected to mathematical questions seemingly unrelated to anything computational. We are active in developing algorithms, proving theorems on computational and combinatorial bounds, and performing experimental work that applies the developed algorithms to real-world geometric computing problems such as those in computer graphics and visualization.
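As a toy version of such a query (with coordinates made up for the example), the sketch below finds the subway exit nearest a given street corner by brute force; the point of computational geometry is to answer many such queries far faster, for instance with k-d trees or Voronoi diagrams built in a preprocessing step:

```python
import math

def nearest_exit(corner, exits):
    # Brute-force nearest-neighbor query: one distance computation per exit,
    # i.e. O(n) work for every query.
    return min(exits, key=lambda e: math.dist(corner, e))

exits = [(0.0, 0.0), (3.0, 4.0), (6.0, 1.0)]
print(nearest_exit((5.0, 2.0), exits))   # (6.0, 1.0)
```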

Computational Learning Theory

A major challenge in artificial intelligence is to develop ways for computers to learn, generalize, and adapt. Computational learning theory is the study of machine (computer) learning from the perspective of theoretical computer science. It asks which types of learning tasks can be performed by efficient algorithms, and which cannot. It includes the development of efficient learning algorithms, the theoretical analysis of techniques used by machine learning practitioners, and the exploration of connections between machine learning problems and problems in other areas of computer science.

Combinatorial Optimization and Approximation Algorithms

Many questions of resource allocation, such as scheduling and routing, amount to optimization problems that are believed to be computationally intractable to solve exactly. The study of approximation algorithms for optimization problems focuses on designing algorithms that run in a reasonable amount of time and deliver solutions that are provably close to optimal. We both prove theorems about our algorithms and carry out experimental work applying them to real-world problems.
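A classical example of this style of result, shown as a sketch below with made-up job lengths, is Graham's list-scheduling rule for assigning jobs to m identical machines: it runs quickly and produces a schedule whose makespan is provably at most (2 - 1/m) times the optimum, even though computing the optimum is NP-hard.

```python
import heapq

def list_schedule(jobs, machines):
    # Graham's list scheduling: assign each job, in the given order, to the
    # machine that currently has the smallest total load.
    loads = [(0, m) for m in range(machines)]
    heapq.heapify(loads)
    assignment = [[] for _ in range(machines)]
    for job in jobs:
        load, m = heapq.heappop(loads)
        assignment[m].append(job)
        heapq.heappush(loads, (load + job, m))
    # The makespan is the load of the most heavily loaded machine.
    return assignment, max(load for load, _ in loads)

plan, makespan = list_schedule([7, 3, 5, 2, 6, 4], machines=2)
print(plan, makespan)   # e.g. [[7, 2, 4], [3, 5, 6]] with makespan 14
```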