May 7-9, 2008, Klagenfurt, Austria
Invited Talks

WIAMIS 2008 is pleased to announce the following keynote speakers and the tentative titles of their talks:

Keynote Speakers

Horst Bischof received his M.S. and Ph.D. degrees in computer science from the Vienna University of Technology in 1990 and 1993, respectively. In 1998 he received his Habilitation (venia docendi) in applied computer science. He is currently Professor at the Institute for Computer Graphics and Vision at the Technical University Graz, Austria. He is also a senior researcher at the K+ Competence Center "Advanced Computer Vision", where he is responsible for research projects in the area of classification, and a member of the scientific boards of the K+ centers VrVis and KNOW. His research interests include object recognition, visual learning, medical computer vision, neural networks and adaptive methods for computer vision; he has published more than 330 scientific papers in these areas.

Horst Bischof was co-chairman of international conferences (ICANN, DAGM) and local organizer of ICPR'96. He was Program Co-chair of ECCV 2006 and Area Chair of CVPR 2007 and ECCV 2008. He is currently Associate Editor for Pattern Recognition, Computer and Informatics, and the Journal of Universal Computer Science.

Horst Bischof received the 29th Pattern Recognition award in 2002, the main prize of the German Association for Pattern Recognition (DAGM) in 2007, and the best scientific paper award at BMCV 2007.

Robust Person Detection for Surveillance using Online Learning

Recently, there has been a considerable amount of research on methods for person detection. This talk will focus on methods for person detection in a surveillance setting (known environment). We will demonstrate that in this setting one can build robust and highly reliable person detectors by using on-line learning methods. In particular, I will first discuss "conservative learning", which is able to learn a person detector without any hand-labelling effort. As a second example, I will discuss a recently developed grid-based person detector.

The basic idea is to considerably simplify the detection problem by considering individual image locations separately. This allows us to use simple adaptive classifiers that are trained on-line. Due to the reduced complexity, we can use a simple update strategy that requires only a few positive samples and is stable by design. This is an essential property for real-world applications, which require operation 24 hours a day, 7 days a week. During the talk we will illustrate our results on video sequences and standard benchmark databases.
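To make the grid-based idea more concrete, here is a minimal sketch in Python of one possible reading: one simple on-line classifier per image location, updated with individual labelled samples. It assumes scikit-learn's SGDClassifier as a stand-in for the on-line learner, and the grid and feature sizes are invented for illustration; it is not the speaker's actual system.

    # Illustrative sketch only: one independent on-line classifier per grid cell
    # (image location). SGDClassifier is used here as a generic on-line learner;
    # grid shape and feature dimension are hypothetical.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    class GridDetector:
        def __init__(self, grid_shape):
            # One classifier per image location (grid cell).
            self.cells = {(r, c): SGDClassifier()
                          for r in range(grid_shape[0])
                          for c in range(grid_shape[1])}

        def update(self, cell, features, label):
            # On-line update with a single labelled sample (1 = person, 0 = background).
            self.cells[cell].partial_fit(features.reshape(1, -1), [label], classes=[0, 1])

        def detect(self, cell, features):
            # The cell-specific classifier decides whether a person is present there.
            return bool(self.cells[cell].predict(features.reshape(1, -1))[0])

    # Hypothetical usage: update cell (3, 5) with one positive sample, then query it.
    detector = GridDetector(grid_shape=(10, 12))
    detector.update((3, 5), np.random.rand(64), label=1)
    print(detector.detect((3, 5), np.random.rand(64)))

Because each location has its own classifier, the per-cell decision problem stays small, which is what makes such a simple on-line update strategy plausible in the first place.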

John R. Smith is Senior Manager of the Intelligent Information Management Department at IBM T. J. Watson Research Center. He leads a research department addressing technical challenges in database systems and information management. His team includes the Database Research Group and Intelligent Information Analysis Group. In addition to his managerial responsibilities, Dr. Smith currently serves as Chair, Data Management research area at Watson and IBM Research Campus Relationship Manager for Columbia University.

From 2001 to 2004, Dr. Smith served as Chair of the ISO/IEC JTC1/SC29 WG11 Moving Picture Experts Group (MPEG) Multimedia Description Schemes group, with responsibilities in the development of the MPEG-7 and MPEG-21 standards. Dr. Smith also served as co-Project Editor for the following parts of the MPEG-7 standard: "MPEG-7 Multimedia Description Schemes," "MPEG-7 Conformance," "MPEG-7 Extraction and Use" and "MPEG-7 Schema Definition." He currently leads IBM’s multimedia analysis and retrieval research area. Dr. Smith is an IEEE Fellow.

Unleashing Video Search

Video is rapidly becoming a regular part of our digital lives. However, its tremendous growth is raising users’ expectations that video will be as easy to search as text. Unfortunately, users still find it difficult to locate relevant content, and today’s solutions are not keeping pace on problems ranging from video search to content classification to automatic filtering. In this talk we describe recent techniques that leverage the computer’s ability to analyze visual features of video effectively and apply statistical machine learning techniques to classify video scenes automatically. We examine related efforts on the modeling of large video semantic spaces and review public evaluations such as TRECVID, which are greatly facilitating research and development on video retrieval. We discuss the role of MPEG-7 as a way to store metadata generated for video in a fully standards-based, searchable representation. Overall, we show how these approaches together go a long way towards truly unleashing video search.
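As a rough, generic illustration of the pipeline sketched above (visual features plus a statistical classifier, not IBM's actual system), the Python fragment below computes a simple colour-histogram feature for sampled video frames and hands it to a scikit-learn classifier; OpenCV is assumed for frame access, and the training data, labels and file name are hypothetical.

    # Generic sketch: simple visual features per sampled frame + a statistical
    # classifier for scene labels. Not a description of any particular system.
    import cv2
    import numpy as np
    from sklearn.svm import SVC

    def frame_feature(frame, bins=8):
        # Crude visual feature: a normalised 3-D colour histogram of the frame.
        hist = cv2.calcHist([frame], [0, 1, 2], None, [bins] * 3,
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def video_features(path, step=30):
        # Sample every `step`-th frame of the video and collect its feature vector.
        cap = cv2.VideoCapture(path)
        feats, idx = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if idx % step == 0:
                feats.append(frame_feature(frame))
            idx += 1
        cap.release()
        return np.array(feats)

    # Hypothetical training and use: feature vectors with known scene labels
    # (e.g. "sports", "news"), then per-frame scores for a new clip.
    # X_train, y_train = ..., ...
    # classifier = SVC(probability=True).fit(X_train, y_train)
    # scene_scores = classifier.predict_proba(video_features("clip.mp4"))

In practice the features and models discussed in the talk are far richer, but the division of labour is the same: extract visual descriptors, then let a learned model assign semantic labels that a search engine can index.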


Jens-Rainer Ohm received the Dipl.-Ing. degree in 1985, the Dr.-Ing. degree in 1990, and the habil. degree in 1997, all from the Technical University of Berlin (TUB), Germany. From 1985 to 1990, he was a research and teaching assistant with the Institute of Telecommunications at TUB.

From 1990 to 1995, he worked within research projects on image and video coding at the same institution. Between 1992 and 2000, he also served as a lecturer on topics of digital image processing, coding and transmission at TUB. From 1996 to 2000, he was project manager/coordinator at the Image Processing Department of the Heinrich Hertz Institute (HHI) in Berlin, where he was involved in research projects on motion-compensated, stereoscopic and 3-D image processing, image/video coding, and content description for image/video database retrieval. Since 1998, he has participated in the work of the Moving Picture Experts Group (MPEG), where he has been active in the development of the MPEG-4 and MPEG-7 standards.

In 2000, he was appointed full professor and has since held the chair of the Institute of Communication Engineering at RWTH Aachen University, Germany. His present research and teaching activities are in the areas of multimedia communication, multimedia signal processing/coding and services for mobile networks, with emphasis on video signals, and also cover the fundamentals of digital communication systems.

He has served as chair of the MPEG Video Subgroup since May 2002 and, since January 2005, has also co-chaired the Joint Video Team (JVT). Prof. Ohm has authored textbooks on multimedia signal processing, analysis and coding, and on communications engineering and signal transmission, as well as numerous papers in the fields mentioned above.

Recent, Current and Future Developments in Video Coding

Most of the recent attention in the development of video coding algorithms has been devoted to the ITU-T Rec. H.264 | ISO/IEC 14496-10 Advanced Video Coding standard. Recent and current extensions to this standard include developments for professional applications, highly efficient scalable video coding and multi-view video coding. Meanwhile, digital video over various networks, at ever higher resolutions, is becoming a reality.

While this technology is progressing and further optimizations are sought, new challenges appear on the horizon. New types of displays include 3D capabilities, requiring the generation of additional view perspectives beyond the available camera positions. Cameras and displays are appearing with ever-increasing frame rates and picture sizes. The tremendous number of different applications for digital video requires additional flexibility and reconfigurability of devices. And last but not least, increased compression efficiency (i.e., rate reduction weighed against processing cost) is again becoming more important with the ever-increasing number of pixels to be transmitted. The talk will focus on possible solutions to these challenges and discuss how mature they currently are.