
Call for Papers: 
Immersive Video Coding and Transmission
Manuscript submissions due 16 July, 2018.
Submission instructions are provided below.

Scope and purpose

Immersive media are gaining in popularity, and significant efforts are being undertaken in academia and industry to explore their inherent new scientific and technological challenges. There are significant activities in industry and standardization to provide enablers for the production, coding, transmission, and consumption of this type of media and the new user experiences it enables. In terms of standardization, the topic has triggered multiple activities in the areas of systems, 3D graphics, audio, image, and video. The technological roadmap foresees an evolution from consumption of visual media with three degrees of freedom (so-called “3DoF”: the ability to look around at a fixed viewing position in an observed scene, i.e. 360° video) to 3DoF+ (enabling limited modifications of the viewing position) and on to different variants of 6DoF (six degrees of freedom, allowing the user not only to look around but also to move around in the observed scene). Different terminology is used in the various communities, referring to immersive or omnidirectional media, virtual reality (VR), or, specifically, 360° video.

While the coded representation of audio-visual media for 6DoF is a field of very active research, 3DoF technologies have sufficient maturity to progress towards specification in near-term standards and recommendations. At the video codec level, this includes coding of 2D and 3D virtual reality (VR) / 360° content using the HEVC standard (Rec. ITU-T H.265 | ISO/IEC 23008-2) as the initial step, with new Supplemental Enhancement Information (SEI) messages for omnidirectional video. For audio, as an example, the already published MPEG-H 3D Audio standard (ISO/IEC 23008-3) provides all enablers for 3D audio, including channel-, object- and scene-based representations as well as 3D rendering. For storage and delivery, the Omnidirectional Media Format (OMAF, expected to be published by early 2018 as ISO/IEC 23090-2) provides a set of consistent enablers for download and OTT streaming of 3DoF content. At the same time, immersive still image coding formats are currently being developed in JPEG Pleno.
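
As background to the 3DoF notion used above (an illustration only, not part of the call): in the widely used equirectangular projection (ERP), looking around from a fixed position corresponds to sampling different regions of a single panoramic picture. The minimal Python sketch below maps a viewing direction to ERP pixel coordinates; the function name and the 4K picture size are hypothetical choices made only for this example.

def erp_pixel(yaw_deg, pitch_deg, width=3840, height=1920):
    """Map a viewing direction to pixel coordinates of an equirectangular (ERP) picture.

    yaw_deg   : rotation around the vertical axis, in [-180, 180)
    pitch_deg : elevation angle, in [-90, 90]
    width, height : ERP picture size (typically width = 2 * height); hypothetical values
    """
    # Longitude (yaw) maps linearly to the horizontal axis,
    # latitude (pitch) maps linearly to the vertical axis.
    u = (yaw_deg + 180.0) / 360.0        # normalized horizontal position in [0, 1)
    v = (90.0 - pitch_deg) / 180.0       # normalized vertical position in [0, 1]
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

# Looking straight ahead (yaw = 0, pitch = 0) hits the centre of the picture.
print(erp_pixel(0.0, 0.0))               # -> (1920, 960)

Changing the viewing position rather than the viewing direction (3DoF+ and 6DoF) goes beyond what such a single-picture mapping can represent, which motivates the richer representation formats discussed below.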

With respect to video, ITU-T VCEG (Q6/16) and ISO/IEC MPEG (JTC 1/SC 29/WG 11) are currently preparing for standardization of video coding technology with a compression capability that significantly exceeds that of the HEVC standard and its current extensions. Such future standardization could take the form of additional extension(s) of HEVC or an entirely new standard. The scope of this joint activity includes consideration of a variety of video sources and video applications, including camera-view content, screen content, consumer-generated content, high dynamic range content, and also explicitly virtual reality/360° content. A Joint Call for Proposals on Video Compression with Capability beyond HEVC is to be published by ITU-T and MPEG in October 2017, with responses to be evaluated in April 2018. The standardization timeline foresees the finalization of the specification by the end of 2020.
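
For context on how a compression capability "significantly exceeding" that of HEVC is typically quantified (again an illustration only, not part of the call): such comparisons are commonly reported as a Bjøntegaard delta rate (BD-rate), i.e. the average bit-rate difference between two codecs at equal objective quality. The sketch below follows the usual approach of fitting the rate-distortion points in the log-rate domain and integrating the difference; the rate/PSNR numbers are made up.

import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard delta rate (%) of a test codec versus an anchor codec.

    Each argument is a list of rate (kbps) / PSNR (dB) measurement points.
    A negative result means an average bit-rate saving for the test codec.
    """
    log_r_anchor = np.log10(rate_anchor)
    log_r_test = np.log10(rate_test)
    # Fit log-rate as a cubic polynomial of PSNR for both rate-distortion curves.
    p_anchor = np.polyfit(psnr_anchor, log_r_anchor, 3)
    p_test = np.polyfit(psnr_test, log_r_test, 3)
    # Integrate both fits over the overlapping PSNR range.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_anchor = np.polyval(np.polyint(p_anchor), hi) - np.polyval(np.polyint(p_anchor), lo)
    int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
    avg_log_diff = (int_test - int_anchor) / (hi - lo)
    return (10.0 ** avg_log_diff - 1.0) * 100.0

# Hypothetical numbers: the test codec reaches the same PSNR at 30% lower rate.
anchor_rate, anchor_psnr = [1000, 2000, 4000, 8000], [34.0, 36.5, 38.5, 40.0]
test_rate, test_psnr = [700, 1400, 2800, 5600], [34.0, 36.5, 38.5, 40.0]
print(f"BD-rate: {bd_rate(anchor_rate, anchor_psnr, test_rate, test_psnr):.1f} %")

With these made-up numbers the sketch prints a BD-rate of about -30 %, i.e. the test codec spends on average 30 % less rate at the same PSNR.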

On the systems side, and beyond the initial 3DoF activities, the new MPEG project on immersive media (referred to as MPEG-I) aims to provide a more consistent view on immersive media and to support new experiences in the mid and long term. Based on use cases beyond 3DoF and on architectural considerations, audio, video, and 3D graphics aspects are evaluated, including new representation formats such as point clouds and light fields. Orthogonal aspects such as improved delivery, consistently reported metrics, and the quality evaluation of immersive media are also in scope.

Also outside of MPEG, this first set of enabling specifications for immersive media with 3DoF forms a cornerstone for evolving systems and for related standardization and interoperability activities. Among others, 3GPP recognizes the value of the MPEG 3DoF technologies for its work on 5G VR streaming, which is in progress and expected to be finalized by mid-2018. The VR Industry Forum promotes the MPEG enablers for full end-to-end interoperability, combining them with production-, security-, distribution- and rendering-centric activities. On the latter point, the work of the Khronos OpenXR and W3C WebVR groups in particular targets interoperability at the level of platform APIs.
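
A recurring systems idea behind the streaming activities mentioned above is viewport-dependent delivery: only the tiles of the 360° picture that cover the current viewing direction are fetched at high quality, while the remainder is served from a low-quality fallback representation. The rough Python sketch below selects ERP tiles for a given viewing direction; the 8x4 tiling, the field of view, and the simple tile-centre test are assumptions made only for this illustration, and OMAF/DASH specify the actual mechanisms far more completely.

def tiles_in_viewport(yaw_deg, pitch_deg, cols=8, rows=4, h_fov=110.0, v_fov=90.0):
    """Return (col, row) indices of ERP tiles whose centre falls inside the viewport.

    The ERP picture is split into cols x rows tiles; a tile is selected if its
    centre lies within half the field of view of the viewing direction.
    """
    selected = []
    for row in range(rows):
        for col in range(cols):
            tile_yaw = -180.0 + (col + 0.5) * 360.0 / cols
            tile_pitch = 90.0 - (row + 0.5) * 180.0 / rows
            d_yaw = (tile_yaw - yaw_deg + 180.0) % 360.0 - 180.0   # handle wrap-around
            d_pitch = tile_pitch - pitch_deg
            if abs(d_yaw) <= h_fov / 2.0 and abs(d_pitch) <= v_fov / 2.0:
                selected.append((col, row))
    return selected

# Looking straight ahead: request these tiles in high quality, the rest in low quality.
print(tiles_in_viewport(0.0, 0.0))       # -> [(3, 1), (4, 1), (3, 2), (4, 2)]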

This Special Issue aims to capture the status of this emerging technology with respect to the latest scientific progress, the corresponding standardization efforts, the subjective assessment of immersive media, and the impact of this technology on regular users. Original and unpublished research results on any of the following topics, or beyond, are hereby solicited.

Topics of interest
  • Compression algorithms for immersive video
  • Projection formats for 360° video
  • VR streaming architecture and systems design
  • Standardization in immersive video coding and transmission
  • Over-the-top streaming of 360° video
  • 3D immersive video
  • Mobile architectures and transmission of immersive media for VR applications
  • File formats for 3DoF video
  • QoE assessment of 360° video, images, light fields, and point clouds
  • Beyond 3DoF experiences: 3DoF+, 6DoF
  • Human Factors
  • End-to-end aspects of immersive media systems, including production, security and rendering
  • Point cloud and light field representation and compression
  • Circuits and systems for immersive video systems
  • Real-time implementation of immersive video systems
  • Implementation challenges (e.g. on VLSI/ASIC, CPU, GPU, FPGA) related to immersive video coding and transmission

Submission procedure

Prospective authors are invited to submit their papers following the instructions provided on the JETCAS website: http://ieee-cas.org/pubs/jetcas/submit-manuscript. Submitted manuscripts must not have been previously published, nor should they be under consideration for publication elsewhere. Note that the relationship to immersive video technologies should be explained clearly in the submission.

Important dates

Manuscript submissions due: 16 July, 2018
First round of reviews completed: 10 September, 2018
Revised manuscripts due: 22 October, 2018
Second round of reviews completed: 12 November, 2018
Final manuscripts due: 26 November, 2018

Request for information

Corresponding Guest Editor: Mathias Wien
RWTH Aachen University, wien@ient.rwth-aachen.de

Guest editors

Mathias Wien, RWTH Aachen University, Germany, wien@ient.rwth-aachen.de
Jill Boyce, Intel Corp., USA, jill.boyce@intel.com
Thomas Stockhammer, Qualcomm Incorporated, USA, tsto@qti.qualcomm.com
Wen-Hsiao Peng, National Chiao Tung University, Taiwan, wpeng@cs.nctu.edu.tw