Contrast Detection Probability - Implementation and use cases
Uwe Artmann, Image Engineering GmbH & Co KG; Kerpen, Germany
Marc Geese, Robert Bosch GmbH, Leonberg, Germany
Max Gäde, Image Engineering GmbH & Co KG; Kerpen, Germany
Abstract
The automotive industry formed the IEEE-P2020 initiative to jointly work on key performance indicators (KPIs) that can be used to predict how well a camera system suits a given use case. A very fundamental application of cameras is to detect object contrasts for object recognition or stereo vision object matching. The most important KPI the group is working on is the contrast detection probability (CDP), a metric that describes the performance of components and systems and is independent of any assumptions about the camera model or other properties. While the theory behind CDP is already well established, we present actual measurement results and the implementation for camera tests. We also show how CDP can be used to improve low light sensitivity and dynamic range measurements.
Introduction
The idea of Contrast Detection Probability (CDP) was first presented by Geese et al. [1] in 2018. It was derived from the need for a KPI that is independent of the system under test and also of which components are tested, so that the same KPI can be applied to describe the performance of, for example, a windshield or a lens.
As shown in the examples in Figure 1, the causes of contrast loss are manifold and not only related to the camera system itself. CDP was designed to describe the performance of a camera system in reproducing contrasts, the core functionality needed for advanced algorithms in machine vision.
[Figure 1 panels: examples for low object contrast; contrast reduction on the input, e.g. fog in the scene or dust on the windshield.]
Figure 1. Different aspects in imaging that can lead to a contrast loss on the input side of a camera. In these examples, this is fog in the scene or pollen dust on a windshield.
Another important new aspect in automotive imaging is high dynamic range (HDR) and its impact on system performance. As described in the IEEE-P2020 white paper [2] and shown in Figure 2, the HDR rendering process can lead to so-called SNR drops. This is an effect of combining, e.g., the dark part of one image with the bright part of another. The resulting SNR curve, a plot of SNR vs. light intensity, will show drops somewhere between the maximum and minimum light intensity. An example is shown in Figure 3. The open question is how much impact this has on system performance. Even though SNR is a well-established metric, it is very hard to derive precise system performance predictions from an SNR value. CDP offers this possibility, as it is directly related to system performance.
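The mechanism behind such SNR drops can be reproduced with a small simulation. The sketch below models a multi-exposure pipeline that, for each scene intensity, picks the longest exposure that does not saturate the pixel; all sensor parameters (full-well capacity, read noise, sensitivity) are illustrative assumptions, not values from any measurement in this paper.

```python
import numpy as np

def snr_multi_exposure(lux, exposures, full_well=10000.0, read_noise=5.0, k=100.0):
    """Shot-noise-limited SNR of a simple multi-exposure HDR pipeline.

    For each scene intensity, the longest exposure that does not
    saturate the pixel is selected; collected electrons = k * lux * t.
    All parameter values are illustrative assumptions.
    """
    snr = []
    for intensity in lux:
        electrons = 0.0
        for t in sorted(exposures, reverse=True):  # prefer the long exposure
            electrons = k * intensity * t
            if electrons <= full_well:
                break
        electrons = min(electrons, full_well)  # clip if even the shortest saturates
        snr.append(electrons / np.sqrt(electrons + read_noise**2))
    return np.array(snr)

lux = np.logspace(-1, 3, 500)  # scene intensity, arbitrary units
snr = snr_multi_exposure(lux, exposures=[1.0, 1.0 / 16, 1.0 / 256])
# Within each exposure the SNR rises with intensity; at every hand-over
# to a shorter exposure the collected charge, and hence the SNR, drops.
```

Plotting `snr` over `lux` on logarithmic axes yields the characteristic sawtooth shape of Figure 2: the SNR just above each exposure hand-over is markedly lower than just below it, even though the scene is brighter there.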
HDR—High dynamic range imagers are often combined with local tone mapping image processing. This creates challenges of texture and local contrast preservation, color fidelity/stability, SNR stability (see Figure 8), and motion artefacts.
Multi-cam—In applications such as SVS, image captures originating from multiple cameras with overlapping fields of view are combined or "stitched" together. Evaluation of the created virtual image is problematic due to the individual characteristics of each camera and captured portion of the scene, i.e., different fields of view, local processing, and different and mixed camera illumination.
Distributed—Distributed systems perform some local image processing close to the imager and some centralized processing in the ECU. Local processing (e.g., tone mapping) does not preserve the original information at the camera and is therefore not invertible to be recovered in the central ECU (e.g., lossy compression/quantization).
Dual purpose—The same camera feed may have to serve both viewing and computer vision needs.
Extrinsic components—System level image quality is affected by additional components of the vehicle (lights, windshield, protection cover window, etc.).
Video—Automotive systems use video imagery. Many current imaging standards, however, were originally targeted at still image applications and typically do not cover motion video image quality.
Illumination—The huge variety of scene illumination in automotive use cases imposes additional challenges for testing (e.g., xenon light, D65 light, sunlight, various LED street lamps).
Another issue is that existing standards do not necessarily cover the specific challenges of the uncontrolled use environments in which automotive camera applications need to operate.
Figure 8 shows a typical SNR versus illumination curve of a camera using a multi-exposure type of HDR operation. When a high dynamic range scene (e.g., tunnel entrance/exit) is captured, a counterintuitive phenomenon may occur in regions of the image above the intermediate SNR drop point: brighter regions above those drops will exhibit higher noise than regions with a lower brightness. This means that there is more noise in the intermediate bright regions than in the dark ones. Where an application requires a certain minimum level of SNR, these intermediate drops become an issue, because existing standards on HDR do not consider such intermediate SNR drops. Figure 8 also illustrates an example SNR curve of a sensor operated in an optimized configuration to achieve improved SNR at these drop points. This consequently leads to a reduced dynamic range, from 144 dB down to 120 dB, according to the operation adjustment required to achieve an improved overall SNR level.
Figure 8 SNR vs. illumination for multi exposure HDR imagers
Figure 2. A typical SNR curve for HDR rendering. Depending on different
sensor settings, more or less significant SNR drops occur. Source: IEEE-
P2020
Figure 3. An example image (detail) of the effect known as SNR drop. The noise is more dominant in the bright parts of the image than in the dark parts. The bright part is rendered from a different image than the dark part.
Concept of Contrast Detection Probability
A single CDP value describes the probability that two system outputs (e.g. two digital values) create a contrast from two system inputs (e.g. two luminance inputs). It is calculated from two random variables (here A and B). These random variables
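As a first intuition for how such a probability can be estimated from samples of the two random variables, the following Monte Carlo sketch counts the fraction of output sample pairs whose Michelson contrast falls within a tolerance band of ±ε around the known input contrast. Both the choice of Michelson contrast and the tolerance band are simplifying assumptions of this sketch; the formal definition is given by Geese et al. [1].

```python
import numpy as np

def cdp_estimate(samples_a, samples_b, contrast_in, eps=0.5):
    """Monte Carlo estimate of a contrast detection probability.

    samples_a, samples_b: system outputs (e.g. digital values) observed
    for the two patches A and B; contrast_in: known input contrast.
    Michelson contrast and the +/- eps tolerance band are assumptions
    of this sketch, not the formal definition from Geese et al. [1].
    """
    n = min(len(samples_a), len(samples_b))
    a, b = samples_a[:n], samples_b[:n]
    c_out = (a - b) / (a + b)  # per-pair output (Michelson) contrast
    lo, hi = contrast_in * (1 - eps), contrast_in * (1 + eps)
    return float(np.mean((c_out >= lo) & (c_out <= hi)))

# Two noisy patches whose mean levels encode a 20 % input contrast:
rng = np.random.default_rng(0)
bright = rng.normal(120.0, 5.0, 10_000)
dark = rng.normal(80.0, 5.0, 10_000)
c_in = (120.0 - 80.0) / (120.0 + 80.0)  # Michelson input contrast = 0.2
p = cdp_estimate(bright, dark, c_in)    # close to 1 for this low-noise case
```

Increasing the noise of either patch widens the distribution of the output contrast, so fewer pairs fall inside the tolerance band and the estimated probability decreases.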
IS&T International Symposium on Electronic Imaging 2019
Autonomous Vehicles and Machines Conference 2019 030-1
https://doi.org/10.2352/ISSN.2470-1173.2019.15.AVM-030
© 2019, Society for Imaging Science and Technology