Underwater Vision Workshop ICCV 2013

Important Dates

Submission: 15th September (extended from 7th September)
Notification: 7th October
Workshop Date: 8th December

Best Paper Award

We are pleased to announce that NICTA has offered to be the Best Paper Award Sponsor.

And the winner was .... Kawahara et al., A Pixel-wise Varifocal Camera Model for Efficient Forward Projection and Linear Extrinsic Calibration of Underwater Cameras with Flat Housings. Congratulations to the authors!!!

updated: 08 Dec 2013

FINAL Workshop program available

The final workshop program is now available.

updated: 25 Nov 2013

DRAFT Workshop program available

The papers have been reviewed, the authors have submitted their camera-ready versions, and the workshop program and invited speakers are now available. We will finalize the program soon.

updated: 23 Oct 2013

Submission deadline extended

Due to multiple requests, the paper submission deadline has been extended to the 15th of September. This is the absolute final date. We hope this gives you sufficient time to submit your paper.

updated: 7 Sep 2013

The goal of this workshop is to bring together practitioners from the computer vision and robotics communities with those working on underwater imaging and in targeted application domains to consider the challenges of underwater image processing and to identify opportunities for collaborative research in this area.

There are many challenges associated with processing images captured underwater. Natural scene illumination may be very poor, and there is often little regular structure with which to delineate objects. Additional challenges are introduced by strong, wavelength-dependent attenuation, which limits the effective range of optical imaging in realistic settings to a few meters and is the dominant cause of the colour imbalance often visible in underwater images. In practice, imaging areas of interest underwater often requires collecting thousands of images to achieve adequate resolution and quality. In shallow waters, the refraction of sunlight on surface waves and ripples can be problematic, while in deep waters the imaging system must carry its own moving light sources, resulting in illumination that changes across the scene. State-of-the-art camera calibration methods are complex, and most practitioners use calibration and distortion-compensation methods that do not fully account for the refraction of light through the air-viewport-water interface. All of these effects present unique difficulties when working with underwater imagery. Approaches that integrate or couple geometric, radiometric and semantic understanding are likely to be necessary for robust algorithms that enable a broad range of real-world applications.
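As a concrete illustration of two of the effects described above, the following minimal Python sketch (not part of any workshop or submission material) applies a Beer-Lambert-style per-channel attenuation along a water path and computes Snell's-law refraction of a camera ray through a flat port. The attenuation coefficients and refractive indices are assumed placeholder values, not measured data.

    # Illustrative sketch only: per-channel attenuation and flat-port refraction.
    # All numeric constants below are assumed placeholder values.
    import numpy as np

    # Hypothetical attenuation coefficients (1/m) for the R, G, B channels;
    # red light is absorbed far more strongly than green or blue in ocean water.
    BETA_RGB = np.array([0.60, 0.12, 0.08])

    def attenuate(rgb, distance_m, beta=BETA_RGB):
        """Beer-Lambert-style direct attenuation of a colour over a water path."""
        return rgb * np.exp(-beta * distance_m)

    def refract(direction, normal, n1, n2):
        """Snell's-law refraction of a ray at a planar interface.
        'normal' must point towards the incoming ray; returns None on
        total internal reflection."""
        d = direction / np.linalg.norm(direction)
        n = normal / np.linalg.norm(normal)
        cos_i = -np.dot(n, d)
        eta = n1 / n2
        k = 1.0 - eta**2 * (1.0 - cos_i**2)
        if k < 0.0:
            return None
        return eta * d + (eta * cos_i - np.sqrt(k)) * n

    if __name__ == "__main__":
        white = np.array([1.0, 1.0, 1.0])
        print("white point after 3 m of water:", attenuate(white, 3.0))

        # A camera ray is bent twice by a flat housing: air -> glass -> water.
        ray = np.array([0.3, 0.0, 1.0])           # ray leaving the lens
        port_normal = np.array([0.0, 0.0, -1.0])  # flat port facing the camera
        in_glass = refract(ray, port_normal, 1.00, 1.49)
        in_water = refract(in_glass, port_normal, 1.49, 1.33)
        print("ray direction in water:", in_water)

In this toy model the red channel decays far faster than green and blue, reproducing the colour imbalance described above, and the ray bends towards the port normal at each interface, which is exactly the refraction that a standard pinhole calibration does not model.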

Despite these challenges, there have been many recent advances in the processing of imagery from underwater scenes. These advances have implications for a diverse range of application areas, including marine ecology, archaeology and geology, as well as industrial and defence applications. This workshop will focus on aspects of underwater-related vision processing, including but not restricted to:

  • physics based vision, hyperspectral imaging, and plenoptics,
  • underwater camera calibration, refractive concerns and lighting/camera configuration,
  • color correction and illumination compensation,
  • multi-view geometry,
  • mapping, localization and SLAM,
  • structure-from-motion and visual odometry,
  • acoustic imaging and processing techniques,
  • segmentation,
  • image classification, object discovery and recognition, scene understanding, and semi-supervised and unsupervised learning for sparsely-labelled or unlabelled data,
  • methods that are robust to heavy-tailed class distributions, and label/annotation inconsistencies.

Submission Format

All submissions must use the standard ICCV format, with a maximum length of 6 pages. All papers will undergo double-blind review, and papers can be submitted here. Accepted submissions will be given an oral presentation at the workshop.

Time Event
08:00-08:30 Arrival
08:30-09:00 Introduction to the ACFR and the Marine Robotics Group
09:00-09:40 Keynote Elizabeth Clarke (NOAA): The Collection of Marine Ecosystem and Fisheries Science and the Need for Computer Vision Solutions
09:40-10:20 Keynote David Kriegman (UCSD): Computer Vision for Coral Ecology.
10:20-10:40 Oral Presentation: Nourani-Vatani et al., An analysis of monochrome conversions and normalizations on the Local Binary Patterns texture descriptors
10:40-11:00 Morning Tea
11:00-11:20 Oral Presentation: Kawahara et al., A Pixel-wise Varifocal Camera Model for Efficient Forward Projection and Linear Extrinsic Calibration of Underwater Cameras with Flat Housings. Awarded best paper. Sponsored by NICTA.
11:20-11:40 Oral Presentation: Drews-Jr et al., Transmission Estimation in Underwater Single Images
11:40-12:00 Oral Presentation: Hu et al., Categorization of Underwater Habitats Using Dynamic Video Textures
12:00-12:40 Keynote Yoav Schechner (Technion): A View Through the Waves.
12:40-14:00 Lunch + travel to ACFR (Sydney Uni)
14:00-14:30 Lab tour @ the ACFR
14:30-16:00 Discussions @ The Rose Pub
16:00 End of workshop


  • Oral presentations are 15min + 5min question time.
  • Keynotes are 30min + 10min question time.

Here are some links to underwater datasets you may like to try out:

  • Tasmania Coral Point Count (ACFR/UTas)

    A data set of 1258 stereo pairs of the benthos captured by an AUV. Each image has geo-tags from a SLAM solution and 50 expert annotations.

  • Underwater Caustics (TECHNION)

    Three data sets of a totally submerged stereo rig strongly affected by natural flickering illumination. The data sets are from the Red Sea, the Mediterranean and from a pool experiment.

  • Virtual Periscope (TECHNION)

    Submerged mono/stereo camera videos of the outside world, captured through the wavy water surface. The scenes contain moving people and static backgrounds whose appearance changes constantly due to the waves.

  • Moorea Labeled Corals (UCSD)

    Contains over 400,000 expert annotations on 2055 coral reef images from the island of Moorea in French Polynesia. Each image has 200 random-point annotations by experts, indicating the substrate underneath each point.

  • Scott reef 25 (ACFR)

    9800 stereo image pairs captured by the Sirius AUV, densely covering an area of 75 m x 50 m. This data set has many small loop closures.

  • Tasmania O'Hara 7 (ACFR)

    11200 stereo image pairs captured by the Sirius AUV traversing a transect of >4km. This data set has few but very long loop closures.