IVPL - About

The Intelligent Vision Processing Lab (IVPL) is a newly started research group at Sookmyung Women's University in the Republic of Korea, which grew out of the Multimedia Processing Communications Lab (MPCL), founded in 2009. The lab was established in 2016 by Prof. Byung-Gyu Kim and is located in the IT Engineering department, with world-class research facilities and a young, dynamic research environment. Within the next few years we aim to achieve significant milestones in multimedia and intelligent vision processing. Our group plays a leading role in intelligence-driven video and image processing, next-generation video coding standards, and convergence IT. We collaborate closely with industry and with research labs around the world, including IIT Bhubaneswar, the Indian Institute of Information Technology (Kalyani), the University of Akron, La Trobe University (Melbourne, Victoria, Australia), the University of Sheffield, and the University of Macedonia.


Research Area:
  Our research can be classified into two categories: video coding standards (e.g., HEVC, H.264/AVC, and SHVC) and non-standard video/image processing based on software technology (e.g., image segmentation and pattern recognition). A brief description of each area is given below.


  • Video Coding Standards
  • Advanced Algorithms for Various Video Coding Standards
    - H.264/AVC and HEVC related -
    Recently, ISO-IEC/MPEG and ITU-T/VCEG formed the Joint Collaborative Team on Video Coding (JCT-VC) to develop the next-generation video coding standards: High Efficiency Video Coding (HEVC) and Scalable High Efficiency Video Coding (SHVC). With a flexible coding architecture and tool extensions over the previous standards, H.264/AVC and Scalable Video Coding (SVC), promising compression performance can be expected. The major goal of the HEVC standard is to achieve significant improvements in coding efficiency compared to H.264/AVC, especially for high-resolution video content. The complexity of the HEVC standard is also carefully considered in the development process, so that high-resolution, high-quality video applications remain feasible on resource-constrained devices such as tablets and mobile phones.
    In this respect, we are investigating fast algorithms for complexity reduction and quality improvement that do not violate the standards, including H.264/AVC, SVC, and HEVC.
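    To illustrate the kind of complexity-reduction idea mentioned above, the sketch below shows early termination in a quadtree coding-unit (CU) split decision. It is not standard-compliant encoder code: the `rd_cost` function is a hypothetical stand-in for a real rate-distortion computation, and the threshold is illustrative.

```python
# Illustrative sketch of early termination in a recursive CU split decision.
# If the un-split block is already cheap enough, the four sub-blocks are
# never evaluated, which is where the encoding-time saving comes from.
# The cost model (rd_cost) is an assumed placeholder, not real HEVC RD cost.

def decide_split(block, rd_cost, min_size=8, early_stop_threshold=100.0):
    """Recursively decide whether to split a square block (list of rows).

    rd_cost: function mapping a block to a rate-distortion-like cost.
    Returns a nested dict describing the chosen partition.
    """
    size = len(block)
    cost_whole = rd_cost(block)

    # Early termination: skip the recursive search for cheap or minimal blocks.
    if size <= min_size or cost_whole < early_stop_threshold:
        return {"size": size, "split": False, "cost": cost_whole}

    half = size // 2
    children = [
        decide_split([row[c:c + half] for row in block[r:r + half]],
                     rd_cost, min_size, early_stop_threshold)
        for r in (0, half) for c in (0, half)
    ]
    cost_split = sum(ch["cost"] for ch in children)

    if cost_split < cost_whole:
        return {"size": size, "split": True, "cost": cost_split,
                "children": children}
    return {"size": size, "split": False, "cost": cost_whole}
```

    With a variance-like cost, a flat block terminates immediately without splitting, while a block made of four dissimilar flat quadrants is split once and then stops, mirroring how early termination prunes the encoder's search.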
    - 3D Video Coding Algorithms - Recently, MPEG has developed a suite of international standards to support 3D services and devices, and has now initiated a new phase of standardization to be completed within the next two years. The target is a new generation of 3D Video Coding (3DVC) technology that goes beyond the capabilities of existing standards to enable both advanced stereoscopic display processing and improved support for auto-stereoscopic multi-view displays. The primary goal is to define a data format and associated compression technology that enable high-quality reconstruction of synthesized views for 3D displays. It is recognized that the technology for depth estimation and view synthesis, as well as the data format itself, has a significant impact on the reconstruction capability and the quality of reconstructed views. In July 2012, MPEG and VCEG formed a joint group, called JCT-3V, for this collaboration.
  • Image/Video processing

    - Computer Vision Algorithms for ADAS - Advanced Driver Assistance Systems (ADAS) help drivers operate vehicles in a convenient and safe way. While current systems mostly issue warnings, some systems on the market already initiate evasive actions. At the application level, we focus on vision-based ADAS. So far, many camera-based ADAS rely heavily on data fusion with other sensor information, gathered mostly by active sensors such as radar and lidar. The latter are a major cost factor, so improving vision systems helps reduce overall vehicle costs. Human drivers rely mostly on the visual channel while operating a vehicle, so improvements in image understanding are expected to lead to significantly better ADAS performance. Outdoor camera systems have to cope with a highly unstructured and changing environment (e.g., a variety of objects and traffic infrastructure, intrinsic ambiguities, weather, time of day), which limits the range of application. As weather and daytime conditions change, the specific requirements for vision tasks change as well. One of our challenges is to manage and exploit sunlight effects and the shadows cast directly by vehicles for automotive vision systems.

    • Lane markings: the difficulty with this element is that markings may appear in different configurations (single line, discontinuous, narrow, wide, etc.) and may be partially or completely occluded by vehicles.

    • Variable road color: the color of the road surface may vary, i.e., the pavement could be new (usually darker), old (lighter), or changing.
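
    As a toy illustration of these two challenges, the sketch below looks for lane-marking candidates in a single grayscale scanline. Markings are typically brighter than the surrounding pavement, so each row is thresholded relative to its own median intensity; the relative threshold is what makes the idea tolerant of variable road color (dark new pavement vs. lighter old pavement). The function name and the contrast value are illustrative choices, not part of any deployed system.

```python
# Toy lane-marking cue: find pixel runs in one grayscale scanline that are
# clearly brighter than the row's median intensity (taken as the road level).
# Using a *relative* threshold makes the test independent of absolute
# pavement brightness, addressing the variable-road-color problem above.

from statistics import median

def find_marking_runs(row, contrast=40):
    """Return (start, end) index ranges of pixels at least `contrast`
    gray levels brighter than the row's median."""
    road_level = median(row)
    runs, start = [], None
    for i, px in enumerate(row):
        bright = px - road_level >= contrast
        if bright and start is None:
            start = i                      # run begins
        elif not bright and start is not None:
            runs.append((start, i))        # run ends
            start = None
    if start is not None:                  # run reaches end of row
        runs.append((start, len(row)))
    return runs
```

    A real detector would of course aggregate such per-row evidence across the image and fit lane geometry; this only shows the photometric cue.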

    - Depth Estimation for 3D Video - Depth estimation or extraction refers to the set of techniques and algorithms that aim to obtain a representation of the spatial structure of a scene, in other words, a measure of the distance of, ideally, every point in the scene. Many methods have been reported in the literature to achieve good results. Our group has a focus group developing advanced methods that deliver improved performance.
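    One classical route to such a depth representation is stereo block matching, sketched minimally below for a single pair of scanlines: for each patch in the left line we search the right line for the best match under a sum-of-absolute-differences (SAD) cost, and the horizontal shift (disparity) of the best match is inversely proportional to depth. Real systems use 2-D windows, regularization, and sub-pixel refinement; this 1-D version, with illustrative parameter values, only shows the core idea.

```python
# Minimal 1-D stereo block matching: per-pixel disparity between two
# rectified grayscale scanlines via a sum-of-absolute-differences search.
# Depth is proportional to (focal_length * baseline) / disparity.

def disparity_1d(left, right, patch=3, max_disp=8):
    """Return a per-pixel disparity estimate for a pair of scanlines."""
    n = len(left)
    half = patch // 2
    disp = [0] * n
    for x in range(half, n - half):
        ref = left[x - half:x + half + 1]          # patch in the left line
        best_cost, best_d = float("inf"), 0
        # Search candidate shifts, staying inside the right line.
        for d in range(0, min(max_disp, x - half) + 1):
            cand = right[x - d - half:x - d + half + 1]
            cost = sum(abs(a - b) for a, b in zip(ref, cand))
            if cost < best_cost:
                best_cost, best_d = cost, d
        disp[x] = best_d
    return disp
```

    For a synthetic pair where the left line is the right line shifted by three pixels, the estimate recovers a disparity of 3 at the feature; textureless regions are ambiguous, which is exactly why practical methods add smoothness constraints.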



    Total: 5 members in 2015 (Graduate: 5, Undergraduate: 0)

    • Ph.D. Course: 3 (Full time)
    • M.S. Course: 2 (Full time)
    • Undergraduate Students: 0


    Awards:

    1. Certificate of Appreciation Award (by SPIE Optical Engineering)
    2. Special Merit Award for Outstanding Paper - IEEE Int. Conf. on Consumer Electronics (ICCE) 2012, Las Vegas.
    3. Best Paper Award - ETRI, 2007.