About Marios Savvides

Professor Marios Savvides is currently a Research Assistant Professor in the Electrical and Computer Engineering (ECE) Department at Carnegie Mellon University, with a joint appointment at Carnegie Mellon CyLab. Professor Savvides obtained his B.Eng (Hons) degree in Microelectronics Systems Engineering from the University of Manchester Institute of Science and Technology (UMIST) in England in 1997, his M.Sc degree in Robotics in 2000 from the Robotics Institute at Carnegie Mellon University, Pittsburgh, and his Ph.D. in May 2004 from the Electrical and Computer Engineering Department, also at Carnegie Mellon, where he focused on biometric identification technology for his thesis, "Reduced Complexity Face Recognition using Advanced Correlation Filters and Fourier Subspace Methods for Biometric Applications". He has authored and co-authored over 90 conference and journal articles in the biometrics area, has filed two patent applications, and has co-authored over 10 book chapters/articles. He is also the area editor for Springer's Encyclopedia of Biometrics. He founded and directs the Biometrics Lab at Carnegie Mellon CyLab.

CyLab Chronicles

Q&A with Marios Savvides

posted by Richard Power

CyLab Chronicles: What aspect of research would you like to highlight?

SAVVIDES: Following Pradeep's vision, CyLab faculty laser in on key problems that will have the highest impact and make us stand out as leaders in our chosen fields of research. In that spirit, I founded and direct the Biometrics Lab R&D effort in CyLab. Most of the research projects I have selected to work on are high risk but with very large pay-off, and would be extremely useful to the U.S. Government in developing technologies to fight terrorism and enhance our national security. Many projects, like facial pose correction and iris at a distance, are extremely challenging problems that are in high demand but also require a significant amount of basic research to articulate the problem and find a tractable solution that will lead to a robust real-time system.

CyLab Chronicles: What technology are you working on?

SAVVIDES: My group is working on a wide range of biometric technologies. We have focused on two biometric modalities: face and iris. We are working on all aspects of face recognition, from developing the "core" matching technology to more challenging problems, including automatic detection of facial features and facial pose correction. At our lab, my students and I are working on two approaches to tackle the pose correction problem. One uses techniques such as Active Appearance Models (AAMs), which essentially fit a facial model (combined shape and texture); this is used to produce a pose-corrected, neutral-expression face which can be fed into any facial recognition system. The other approach is at an early stage, but we have results that are extremely promising. It works by fitting a 3D morphable model onto a 2D facial image (imagine it came from a passport, a newspaper, or surveillance footage); we can then take that image, generate a 3D model using the technology we are researching, and synthesize a new 2D image at any pose. We are also working on other technologies, e.g., facial tracking, automatic detection of discriminating iris features (graduate student Tahei Munemoto), eye-shape characterization via Active Shape Models (graduate student Sheethal Bhat), and more.
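The AAM idea can be sketched in miniature: a face shape is modeled as a mean plus weighted basis vectors learned from training data, and fitting means finding the weights that best explain an observed face. The following toy single-mode version is purely illustrative (the landmarks, basis vector, and `fit` routine are not from the CyLab system, which also fits texture, not just shape):

```python
# Minimal sketch of the shape half of an Active Appearance Model.
# All data here is illustrative; real AAMs use dozens of landmarks
# and PCA-derived bases for both shape and texture.

# Mean shape: 4 landmark points (x, y) of a toy "face".
MEAN_SHAPE = [(0.0, 0.0), (2.0, 0.0), (1.0, 1.5), (1.0, 3.0)]

# One shape basis vector (e.g. from PCA on training shapes): here it
# widens or narrows the face as its coefficient changes.
BASIS = [(-1.0, 0.0), (1.0, 0.0), (0.0, 0.0), (0.0, 0.0)]

def synthesize(p):
    """Shape instance s = mean + p * basis (single-mode model)."""
    return [(mx + p * bx, my + p * by)
            for (mx, my), (bx, by) in zip(MEAN_SHAPE, BASIS)]

def fit(observed, iters=100, lr=0.1):
    """Recover coefficient p by gradient descent on landmark error."""
    p = 0.0
    for _ in range(iters):
        model = synthesize(p)
        # dE/dp for E = sum ||model_i - observed_i||^2
        grad = sum(2 * ((mx - ox) * bx + (my - oy) * by)
                   for (mx, my), (ox, oy), (bx, by)
                   in zip(model, observed, BASIS))
        p -= lr * grad
    return p

# A face whose "width" coefficient is 0.5 wider than the mean:
target = synthesize(0.5)
p_hat = fit(target)   # converges back to ~0.5
```

Once the coefficients are recovered for a posed face, synthesizing with pose-neutral coefficients yields the corrected image that downstream matchers consume.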

CyLab Chronicles: What are the unique attributes of what you are working on?

SAVVIDES: Using the AAM approach, we have built a real-time system running at 20+ fps to demonstrate the proof of concept, and we are working on improving the fitting accuracy to generalize across a larger population of faces. Using that approach, we have shown that we can build view-specific AAMs, so that we can un-wrap and provide a frontal image even when given a near-complete profile image. These results demonstrate the unique power of our algorithm. These types of technologies are exactly what is needed, and missing, in current automated facial tracking and surveillance systems that are trying to identify criminals and terrorists from a watchlist. In general, my group is working on less-obtrusive biometric surveillance technologies.

CyLab Chronicles: What problem(s) does your work address?

SAVVIDES: The research my group is working on in CyLab is aimed at making facial matching technology a reality. Many matching algorithms have been developed, and you'll find hundreds of face recognition algorithms; however, they are all useless unless the user or subject co-operates to give a good frontal pose with good illumination. And if they claim to work well, then the training set included a good set of possible pose variations of the person's face, which is not a practical scenario of interest in the real world.

In surveillance applications, all bets are off. You have no control over illumination, and bad guys will also want to evade cameras, so you need to have developed technologies that can detect and capture faces very fast in a moving crowd. Even though you may have one hundred frames of a face, there may be only one or two frames that contain a usable face image (one that is not occluded by other faces and is at a reasonable pose angle). This is why we are developing and pushing for extremely fast face detection technology: we want to be able to capture and detect those fractions of a second where you may have a usable face image. I have a PhD student (Hung-Chih Lai) who has ported our algorithm onto an FPGA and can achieve one hundred frames per second (at the moment we can also do about 60 fps on a multi-core PC; PhD student Ramzi Abiantun). He has a proof-of-concept demo working, and once we get a larger-capacity FPGA that can hold the entire algorithm, we'll have a complete demo system ready. Once the FPGA face detection hardware architecture is finalized, it can be ported to an ASIC design with a much smaller form factor that can be integrated into cameras, surveillance cameras, and other platforms (mobile robots, UAVs).
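One reason face detection can reach such frame rates, and maps well to FPGA hardware, is the integral-image trick used by Viola-Jones-style cascade detectors: after one pass over the image, any rectangular feature can be summed in four memory lookups. A minimal sketch of that trick (illustrative only, not the lab's FPGA implementation, which the interview does not detail):

```python
# Integral-image sketch: the constant-time rectangle sums that make
# Viola-Jones-style cascade face detectors fast and hardware-friendly.

def integral_image(img):
    """ii[y][x] = sum of img over the rectangle [0..y) x [0..x)."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row_sum
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of any w*h rectangle in O(1) via four table lookups."""
    return (ii[y + h][x + w] - ii[y][x + w]
            - ii[y + h][x] + ii[y][x])

# Toy 3x3 "image" of pixel intensities:
img = [[1, 2, 3],
       [4, 5, 6],
       [7, 8, 9]]
ii = integral_image(img)
total = rect_sum(ii, 0, 0, 3, 3)    # whole image: 45
center = rect_sum(ii, 1, 1, 2, 2)   # 5 + 6 + 8 + 9 = 28
```

Because every Haar-like feature reduces to a few such rectangle sums, a detector can evaluate thousands of candidate windows per frame, which is what makes scanning every frame of a moving crowd tractable.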

That is just the first component: detecting faces and finding a suitable face to use. Once a face is detected, we need to perform pose normalization; that is where another of my PhD students (Jingu Heo) is working on pose correction using Active Appearance Models (AAMs). Once that research is perfected, it will provide a pose-normalized image that can be fed into a face recognition system to run a match against a database. I have another PhD student (Sung Won Park) who is working on developing pose correction by fitting a 3D morphable model onto a 2D face image. That image can be pose-corrected to run a match, or it can be used as an offline tool to enroll a suspect into a database. Imagine you had a newspaper clip of a terrorist or a snapshot of a real bad guy; of course it's not a perfect enrollment image, it's a non-frontal image. With our research, one would be able to take the face image, run it through our system, generate a 3D model of that face, and then rotate that model to obtain a frontal 2D image which can be enrolled in a watch-list database. While all this might sound simple, there are a lot of complicated and critical steps that require computer vision algorithms to perfectly align and detect facial features from a 2D image, relate them to a set of 3D faces, and morph a model. There is a lot of mathematics and a lot of research to be done to improve the fitting accuracy and the speed of the algorithm. There is also a lot of overlap and integration with the AAM approach for this method.
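The geometric core of that pose-correction idea (fit a 3D model to a non-frontal 2D image, rotate it back, and re-project) can be illustrated with a toy example. The landmark coordinates and the known yaw angle below are made up for illustration; a real morphable model fits thousands of vertices and must estimate the pose itself:

```python
import math

# Toy sketch of pose correction via a 3D face model: once a 3D model of
# the face exists, rotating it back to zero yaw yields a frontal view.
# These four landmarks are illustrative, not a real morphable model.

LANDMARKS_3D = [(-1.0, 0.5, 0.2),   # left eye
                (1.0, 0.5, 0.2),    # right eye
                (0.0, 0.0, 0.8),    # nose tip
                (0.0, -0.8, 0.3)]   # mouth

def rotate_yaw(points, deg):
    """Rotate 3D points about the vertical (y) axis by deg degrees."""
    t = math.radians(deg)
    c, s = math.cos(t), math.sin(t)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points]

def project(points):
    """Orthographic projection: drop the depth coordinate."""
    return [(x, y) for x, y, _ in points]

# Suppose the face was captured at 40 degrees yaw:
observed_yaw = 40.0
posed_3d = rotate_yaw(LANDMARKS_3D, observed_yaw)

# Pose correction: undo the estimated rotation, then re-project frontally.
frontal_2d = project(rotate_yaw(posed_3d, -observed_yaw))
```

The hard research problems sit in the steps this sketch assumes away: detecting the 2D landmarks, fitting the 3D model to them, and estimating the yaw in the first place.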

Iris recognition, and iris biometrics in general, is the other main focus of the lab. Iris has gained fame in Hollywood in many films, as early as the James Bond film Thunderball and as recently as Tom Cruise's Minority Report (I think Mission: Impossible, too?), for access control, but also in the recent National Geographic case where an Afghan girl was identified by the FBI seventeen years later using her iris.

There are two main issues with iris: acquisition, and signal/image processing plus pattern recognition. While we can deal with anything on the signal processing side, we still need relevant state-of-the-art acquisition. That is why, at CyLab, I have obtained Sarnoff's Iris-on-the-Move (IOM) system; we are the first university outside Government research labs to obtain this amazing iris-on-the-move acquisition device! It allows us to capture iris images at 3m stand-off distances as people walk through a portal (~20 people/min), and our job is to process the images captured under such conditions to see how we can improve matching.

A whole new set of research problems arises: how do we enhance iris quality, and how do we ensure iris matching interoperability across sensors? For example, in Iraq right now our soldiers are using SecuriMetrics' PIER 2.3 (Portable Iris Enrollment and Recognition) device, and insurgents are being enrolled. It would be extremely important if we could identify whether any of those captured insurgents pop up on U.S. soil; these are the types of research problems we are trying to address. There are so many more. For example, when you get an iris you also have a face, yet there is little work done on the fusion of face and iris out there. How do we segment non-orthogonal irises (i.e., if you aren't looking straight into the camera)? The list goes on, and these are the problems we are trying to obtain funding to answer. The nice trait of iris is that it is thought to remain stable throughout a person's lifetime, so you can identify an individual across time much more effectively than with, say, face, which changes via make-up, weight loss, and aging. If I had an image of a person's iris and he committed a crime or terrorist act, I should be able to match and find him even ten years later from that one enrollment; that is the power of iris recognition, and it is where the future of biometrics lies.
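The matching side of iris recognition is commonly done Daugman-style: each iris is encoded as a binary "iris code", and two codes are compared by normalized Hamming distance, with unrelated irises clustering near 0.5 and genuine matches falling well below a decision threshold. A small sketch of that comparison step (the bit strings and the exact threshold here are illustrative, not the lab's system):

```python
# Daugman-style iris-code comparison sketch. The encoding step (Gabor
# filtering of the segmented iris texture) is omitted; this shows only
# the Hamming-distance match decision on already-computed codes.

def hamming_distance(code_a, code_b):
    """Fraction of disagreeing bits between two equal-length iris codes."""
    assert len(code_a) == len(code_b)
    diff = sum(a != b for a, b in zip(code_a, code_b))
    return diff / len(code_a)

def same_iris(code_a, code_b, threshold=0.32):
    """Accept a match when the distance is well below the ~0.5
    expected for two unrelated irises (threshold is illustrative)."""
    return hamming_distance(code_a, code_b) < threshold

enrolled = [1, 0, 1, 1, 0, 0, 1, 0] * 4          # 32-bit toy code
probe_same = list(enrolled); probe_same[3] ^= 1  # one bit flipped by noise
probe_diff = [1 - b for b in enrolled]           # maximally different code
```

Because the comparison is a bit-level operation, matching one probe against a large watch-list database is extremely fast, which is part of why iris scales so well for the long-time-horizon identification described above.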

CyLab Chronicles: What are the commercial implications of your work?

SAVVIDES: Many. We have already spun off a startup called BiometriCore to commercialize some of the face and iris matching technology. Even for a great deal of the new research development, it is easy to see that it will be in great demand by major biometric/defense integrators looking to incorporate these modules into their current systems. Modules like pose correction, face detection, facial super-resolution, iris quality, and iris non-orthogonal pose correction (the list goes on) are all stand-alone components that can simply fit into, or replace, another module in the biometric system pipeline. As biometric systems are deployed more widely to ensure security and increase the safety of our nation, the components we are developing are going to be critical to the robustness and success of these systems for catching terrorists and criminals. There are many components we have developed that I can see already benefiting the US-VISIT program. Another example is a problem I heard about in a talk by a captain of the U.S. Coast Guard at an IDGA conference just last week. They are catching illegal immigrants who are trying to enter the U.S. via Puerto Rico from the Dominican Republic. They are currently using just a two-finger fingerprint scanner, which is relayed via satellite to the US-VISIT IAFIS to check whether the people on the boat have a criminal background and whether they are repeat offenders; if so, they are charged and arrested. Biometrics has already served as a deterrent and reduced illegal immigration to Puerto Rico, but there are issues and problems that still need to be fixed. For example, salt water is a significant issue, given the effect it has on the fingers and the prints imaged; ideally I would like to see iris deployed in these cases, since it can be acquired with a lot less co-operation from potentially hostile fugitives. There is a great deal of what we do here that can help secure our borders and the national security of our nation.

