Program Schedule

InterMedia Presentation

Nadia Magnenat-Thalmann (The coordinator of the InterMedia project, MIRALab-University of Geneva, Switzerland)
14:00 ~ 14:30, Thursday (27 May, 2010)

Topic: The InterMedia Project: Towards truly user-centric multimedia convergence
About Nadia Magnenat-Thalmann
Prof. Nadia Magnenat-Thalmann has pioneered research into Virtual Humans over the last 25 years. She obtained several Bachelor's and Master's degrees in various disciplines (Psychology, Biology and Chemistry) and a PhD in Quantum Physics from the University of Geneva in 1977. From 1977 to 1988, she was a Professor at the University of Montreal, where she founded the research lab MIRALab. She was elected Woman of the Year in Montreal in 1987 for her pioneering work on Virtual Marilyn, work shown at the Museum of Modern Art in New York in 1988.
She moved to the University of Geneva in 1989, where she founded the Swiss MIRALab, an international, interdisciplinary lab of about 25 researchers. She has received numerous scientific and artistic awards for the films and the work she has directed. More recently, she has been elected to the Swiss Academy of Engineering Sciences, selected as a pioneer of Information Technology on the Heinz Nixdorf Museum’s Electronic Wall of Fame in Germany (www.hnf.de), and has received the CGI’2007 award in Petropolis, Brazil, and the SPACE’2007 award in Sofia for the film High Fashion in Equations, which was also selected for the Electronic Theater at SIGGRAPH’2007. This year, together with her PhD students and colleagues from the University Hospital of Geneva, she was awarded the First Medical Prize at Eurographics 2009. She is presently taking part in more than a dozen European and Swiss national research projects and is the coordinator of the Network of Excellence (NoE) InterMedia and of the Marie Curie European Research Training Network 3D Anatomical Human. She has contributed to more than 450 research publications. She is editor-in-chief of The Visual Computer journal published by Springer and co-editor-in-chief of the Computer Animation and Virtual Worlds journal published by Wiley. She is also an editor of several other journals, among them IEEE Transactions on Multimedia.

Keynote Speech

1. Nassir Navab (Technische Universität München, Germany)
17:30 ~ 18:30, Thursday (27 May, 2010)
 
Topic: Advanced Imaging and Visualization for Computer Assisted Interventions: motivation, state of the art and future challenges
Abstract
In this talk, I will focus on the design and development of advanced imaging and visualization solutions for computer-assisted interventions. One major scientific challenge is the recovery and modeling of surgical workflow. The second is the analysis of large amounts of heterogeneous data and their intelligent real-time fusion. The third is the advanced visualization of such data during focused, high-intensity surgical procedures. In particular, I review the state of the art in Medical Augmented Reality and discuss the challenges facing the scientific community in the upcoming years. Throughout this presentation, I use clinical applications and our recent results, obtained in our real-world laboratories within several clinics in Munich, to demonstrate the issues and to provide exemplary paths towards possible solutions. Such examples include real-time Ultrasound/CT registration, Free-Hand SPECT reconstruction, the Camera-Augmented Mobile C-arm (CAMC) and HMD-based AR for intra-operative visualization and medical training.
About Nassir Navab
Nassir Navab is a full professor and director of the Institute for Computer Aided Medical Procedures (CAMP: http://campar.in.tum.de) at the Technical University of Munich (TUM), with a secondary faculty appointment at its Medical School. In 2001, while a distinguished member of technical staff at Siemens Corporate Research (SCR) in Princeton, he received the prestigious Siemens Inventor of the Year Award for the body of his work in interventional imaging. He received his PhD from INRIA and the University of Paris XI and held a two-year postdoctoral fellowship at the MIT Media Laboratory before joining SCR in 1994. In November 2006, he was elected to the board of directors of the MICCAI Society. He has been serving on the Steering Committee of the IEEE Symposium on Mixed and Augmented Reality since 2001. He is the author of hundreds of peer-reviewed scientific papers and over 40 US and international patents. He is currently serving as Program Chair for MICCAI 2010 and as Area Chair for ECCV and ACCV 2010. He is on the editorial boards of many international journals, including IEEE TMI, MedIA and Medical Physics. Nassir is also the co-founder and Chief Scientific Officer of SurgicEye (http://www.surgiceye.com). He is proud of his PhD students, who have received many prestigious awards, including MICCAI Young Investigator Awards in 2007 and 2009, the best paper award at IEEE ISMAR 2005, the IBM best paper award at VOEC-ICCV 2009, and the IPMI Erbsmann award in 2007.

Invited Talks

1. Emmanouela Vogiatzaki Krukowski (University of Peloponnese, Greece)
14:30 ~ 15:15, Thursday (27 May, 2010)

Topic: Multimedia and Human Machine Interfaces in Performance Arts
Abstract
Observing how modern technologies are interpreted by performance artists, and how Art applies technology in the performing arts, is fascinating and motivating at the same time. Through examples of representative artists such as Stelarc and Marcel-Li Antunez Roca, we will demonstrate how a human body can be transformed into a machine and how a virtual environment may absorb the performer and integrate the spectator into it. Mixed-media performances become installations, places that may host many instruments, one of them being a human body. There is a fusion of organic and inorganic elements on the stage. The post-human, half machine and half body, who is neither male nor female, augments his existence in space not only physically but also mentally: physically by adding or attaching technical parts (prosthetics) onto his body, and mentally by enhancing his illusions through visual effects, projections, etc. Cyborg Theatre does not only create new qualities and perspectives in performance art; it also creates new audiences. The spectator’s point of view has in many cases been unavoidably identified with the “user’s” point of view. The viewer’s relationship with the character/performer becomes similar to that between the puppeteer and his marionette. We may even wonder whether future computer games could evolve to become similar to Cyborg performances, where by moving the mouse or your hand you would make a human body respond to your commands.
About Emmanouela Vogiatzaki Krukowski
Emmanouela Vogiatzaki-Krukowski, scenographer, visual artist and theater playwright, holds an MA in Set and Costume Design from Central Saint Martin’s College of Art and Design and an MA in Audio-Visual Production from London Metropolitan University. She is pursuing her PhD research at Panteion University of Athens in the area of “Modern Technologies and their Impact on the Performing Arts”. She has been with the Department of Theatre Studies, University of Peloponnese, since 2004. Earlier she worked at BBC News Resources in London on live News-24 broadcasts. She has participated in a number of projects in England, the Netherlands and Greece, including 25 theatre productions and 21 feature films and shorts, screened at the Curzon Cinema in London, the British Film Institute, etc. She has shown her work at two photography exhibitions and one short-film festival. She has authored two theater plays and various conference publications, and has given a number of invited lectures. She frequently serves as a Technical Committee member at prestigious national and international conferences, festivals and other events.

2. Sofia Tsekeridou (Athens Information Technology, Greece), Eri Giannaka and Menelaos Bakopoulos
09:00 ~ 09:45, Friday (28 May, 2010)
    
Topic: Enriched Media Authoring and Video Annotation on Mobile Devices for Enhanced Communication and Collaboration in Emergency Situations
Abstract
Rich media authoring and annotation have recently been applied in domains such as crisis and emergency management; there, however, the focus is mainly on advanced geographical information systems, and their use is generally confined to single users operating within a single agency. The purpose of this talk is to elaborate on the crucial role of enriched media authoring and video annotation services in the efficient management of emergency situations, by providing the means for innovative forms of visual communication and collaboration between First Responder (FR) teams and their Command Post (CP). Enriched media and annotation serve as a non-verbal communication aid that helps FRs better understand their environment, the situational context of the emergency and the instructions received from the CP. FRs are provided with a ruggedized PDA device, while at the CP site, merging, visualization and processing of annotated media are performed on powerful computer systems. Given the critical nature and urgency of emergency situations, the need for timely responses, coupled with FR limitations in performing complex interactive tasks (e.g. using a keyboard to write text messages), is effectively addressed by video annotation tools that let FRs annotate video using pre-defined templates and overlays, simplifying and speeding up the annotation process. For instance, different rich media templates, including graphic and textual placeholders and predefined textual alerts, are defined for different emergency situations. Rich media templates and associated alert messages, such as injured person, gas leak and exit, are overlaid on captured images or video content to highlight spatial or spatiotemporal areas, quickly draw attention and convey the alert message to the recipient. Information sent to the CP from various sources (PDAs, sensors, location, etc.) is merged and optimally visualized in the CP application, providing the responsible personnel with instant media alerts from the FRs for appropriate reaction and placing them in better control of events around the critical site and of the FR teams there. It further allows them to instantly communicate with FR members at critical locations using media alerts, warning them of situations they may not be aware of (enhanced location-based situational awareness, e.g. a gas leak detected in their area).
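The template-based annotation workflow described in the abstract can be pictured with a minimal sketch. This is a hypothetical illustration, not the project's actual data format: the alert keys, fields and function names below are all assumptions. It only shows the idea that an FR picks a predefined alert and a screen region instead of typing free text.

```python
# Hypothetical sketch of predefined rich-media alert templates: an FR taps a
# template ("gas leak", "injured person", "exit") and a region of the captured
# frame, and the device builds the annotation record instead of requiring text
# entry. Keys, fields and names are illustrative assumptions.

PREDEFINED_ALERTS = {
    "gas_leak":       {"icon": "gas.png",    "text": "GAS LEAK",       "priority": 1},
    "injured_person": {"icon": "injury.png", "text": "INJURED PERSON", "priority": 1},
    "exit":           {"icon": "exit.png",   "text": "EXIT",           "priority": 2},
}

def annotate(frame_id, alert_key, region):
    """Build the annotation record sent from the FR's PDA to the Command Post."""
    template = PREDEFINED_ALERTS[alert_key]
    return {
        "frame": frame_id,             # captured image or video frame identifier
        "region": region,              # (x, y, w, h) area highlighted on the frame
        "overlay": template["icon"],   # graphic overlay placed on the region
        "message": template["text"],   # predefined textual alert
        "priority": template["priority"],
    }

note = annotate("frame-0042", "gas_leak", (120, 80, 60, 60))
print(note["message"])   # GAS LEAK
```

At the CP side, records like `note` from many PDAs would be merged and visualized, which is the "information merging" step the abstract describes.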
About Sofia Tsekeridou
Sofia Tsekeridou is an Assistant Professor at Athens Information Technology (AIT), heading the Multimedia, Knowledge and Web Technologies research group. She has participated in and coordinated numerous national, industrial and EU-funded research projects in the areas of multimedia processing and analysis, networked media, e-learning, gaming, data mining, information retrieval and knowledge engineering. She has published several papers in international scientific journals and conferences and has contributed to the TV-Anytime and W3C standardization bodies. She has served as a reviewer for many international scientific conferences and journals. Dr. Tsekeridou has been the AC Representative of AIT in W3C, a member of the IASTED Technical Committee on The Web, Internet and Multimedia, and a member of IEEE and of the Technical Chamber of Greece. She was one of the General Conference Co-chairs of the 3rd ACM International Conference on Digital Interactive Media in Entertainment and Arts (DIMEA 2008), September 2008, AIT, Athens, Greece.
About Eri Giannaka
Eri Giannaka is a Researcher at the Multimedia, Knowledge and Web Technologies research group of AIT. She has participated in and coordinated national and EU-funded research projects in the areas of learning and training, virtual environments and networked media. Her research focuses on Virtual Reality technologies and applications, algorithms and techniques for performance optimization of large-scale distributed systems, as well as methodologies, approaches, tools and services for eLearning and training environments. Part of her work also concerns policies, strategies, technologies and business models for broadband development and growth. In both research directions she has published papers in international journals and well-known refereed conferences, articles in encyclopedias, and book chapters; she is the co-author of one book and has served since 2003 as a reviewer for well-known journals and conferences. She has also taught at the Department of Telecommunication Systems and Networking and the Department of Applied Informatics in Management & Finance at the Technical University of Messolonghi.

Invited Tutorial

1. Christophe De Vleeschouwer (UCL, Belgium)
10:15 ~ 12:15, Friday (28 May, 2010)


Abstract
Today’s media consumption is evolving towards increased user-centric adaptation of content, to meet the requirements of users with different expectations in terms of storytelling and heterogeneous constraints in terms of access devices. Individuals and organizations want to access dedicated content through a personalized service that provides what they are interested in, when they want it, and through the distribution channel of their choice. In this talk, we explain how this challenge can be addressed by merging computer vision tools and networking technologies to automate the collection and adaptation of content, so as to personalize its distribution through interactive services.
From the network perspective, our approach builds on an interactive streaming architecture that supports both user feedback interpretation, and temporal juxtaposition of multiple video bitstreams in a single streaming session. An instance of this architecture has been implemented by extending the liveMedia streaming library and using the H.264/AVC standard. In this framework, the initial video content is split into segments that are encoded independently and potentially with distinct parameters. The server can then decide on the fly which segment to send as a function of how it matches the preferences expressed by the user or the network constraints. 
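The server-side decision described above, choosing on the fly which pre-encoded segment version to send, can be sketched in a few lines. This is a minimal, hypothetical illustration: the class, function names and the relevance/bitrate heuristic are assumptions for exposition, not the actual liveMedia/H.264 implementation.

```python
# Hypothetical sketch of on-the-fly segment selection: each time segment of the
# video exists in several independently encoded versions (e.g. cropped or
# sub-sampled), and the server picks the version that best matches the user's
# expressed preference while fitting the current bandwidth constraint.

from dataclasses import dataclass

@dataclass
class Version:
    label: str         # e.g. "wide", "crop-player", "low-res" (illustrative)
    bitrate_kbps: int  # average bitrate of this pre-encoded version
    relevance: float   # 0..1 match with the user's expressed preference

def select_version(versions, available_kbps):
    """Pick the most relevant version that fits the bandwidth budget."""
    feasible = [v for v in versions if v.bitrate_kbps <= available_kbps]
    if not feasible:  # degrade gracefully: fall back to the cheapest stream
        return min(versions, key=lambda v: v.bitrate_kbps)
    return max(feasible, key=lambda v: v.relevance)

# One segment, three independently encoded versions:
segment = [Version("wide", 1200, 0.4),
           Version("crop-player", 800, 0.9),
           Version("low-res", 300, 0.4)]

print(select_version(segment, 1000).label)  # crop-player
print(select_version(segment, 200).label)   # low-res (bandwidth fallback)
```

Because the segments are encoded independently, such a choice can be made per segment within a single streaming session, which is what enables the interactive version switching described next.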
We explain how these functionalities can be exploited to offer an improved viewing experience when accessing high-resolution or multi-view video content through individual and potentially bandwidth-constrained connections, as typically encountered on mobile networks. Two application scenarios are considered.
In the first, considered by the Walloon Region WALCOMO project, we split the initial content into non-overlapping segments and generate multiple cropped or sub-sampled versions of each segment. The user then has the opportunity to interactively select a preferred version among the multiple streams offered to render the scene. To demonstrate our system, automatic methods have been designed and implemented for segmenting and versioning the input video content in a semantically meaningful way, in both surveillance and soccer-game contexts.
In the second scenario, considered by the European FP7 project, the content is captured and produced automatically through a distributed network of cameras. Personalized summaries are then built as a function of individual user preferences, based on the selection and concatenation of the automatically produced and pre-encoded segments. The process involves numerous integrated technologies and methodologies, including but not limited to automatic scene analysis, camera viewpoint selection and control, and generation of summaries through automatic organization of stories. Finally, multi-camera autonomous production and summarization can provide practical solutions to a wide range of applications, such as personalized access to local sports events through a web portal or a mobile handset, and cost-effective, fully automated production of content dedicated to small audiences, e.g. souvenir DVDs, university lectures, conferences, etc.
About Christophe De Vleeschouwer
Christophe De Vleeschouwer is a permanent Research Associate of the Belgian NSF and an Assistant Professor at UCL. He was a senior research engineer with the IMEC Multimedia Information Compression Systems group (1999-2000), and contributed to projects with ERICSSON. He was also a post-doctoral Research Fellow at UC Berkeley (2001-2002) and EPFL (2004). His main interests concern video and image processing for communication and networking applications, including content management and security issues. He is also enthusiastic about non-linear signal expansion techniques, and their use for signal analysis and signal interpretation. He is the co-author of more than 20 journal papers or book chapters, and holds two patents. He serves as an Associate Editor for IEEE Transactions on Multimedia, has been a reviewer for most IEEE Transactions journals related to media and image processing, and has been a member of the (technical) program committee for several conferences, including ICIP, EUSIPCO, ICME, ICASSP, PacketVideo, ECCV, GLOBECOM, and ICC. He is the leading guest editor for the special issue on ‘Multi-camera information processing: acquisition, collaboration, interpretation and production’, for the EURASIP Journal on Image and Video Processing. He contributed to MPEG bodies, and several European projects. He now coordinates the FP7-216023 APIDIS European project (www.apidis.org), and several Walloon region projects, respectively dedicated to video analysis for autonomous content production, and to personalized and interactive mobile video streaming.

