3. User Interface Introduction

The User Interface (UI) is divided into two views: the Augmented Reality View (AR View) and the Map View. The application starts in the Dashboard View, where the screen is split into two sections so that both the AR View and the Map View are shown at the same time. Both views can also be used in full-screen mode.

The AR View shows the real-time video stream from the camera system. Indicators and bounding boxes for objects in the field of view are overlaid on top of the video, and detections can be selected to see more detailed information. The AR View also has a setting that blends thermal camera video with visual camera video so that, for example, in dark conditions the user can see another vessel's lights (from the visual camera) together with its shape (from the thermal camera).
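As an illustration of the blending concept only, the sketch below mixes a thermal frame with a visual frame using a weighted sum. The function name, the OpenCV-based approach, and the 50/50 blend ratio are assumptions made for this example; they do not describe the product's actual video pipeline.

```python
# Minimal sketch of blending a thermal frame with a visual frame.
# The frame sources, sizes, and the 0.5 blend ratio are illustrative
# assumptions, not the product's actual processing.
import cv2
import numpy as np

def blend_frames(visual_bgr: np.ndarray, thermal_gray: np.ndarray,
                 thermal_weight: float = 0.5) -> np.ndarray:
    """Overlay a thermal image on a visual image of the same scene."""
    # Resize the thermal frame to match the visual frame and give it 3 channels.
    thermal_resized = cv2.resize(thermal_gray,
                                 (visual_bgr.shape[1], visual_bgr.shape[0]))
    thermal_bgr = cv2.cvtColor(thermal_resized, cv2.COLOR_GRAY2BGR)
    # Weighted sum: keeps bright lights from the visual camera while the
    # thermal channel reveals the vessel shape in darkness.
    return cv2.addWeighted(visual_bgr, 1.0 - thermal_weight,
                           thermal_bgr, thermal_weight, 0)
```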

The Map View visualizes the own vessel's location and the detected surrounding objects. Tapping an object on the map opens a detailed info card for that object. The map can be viewed in North Up or Head Up orientation. The user can also pan freely around the map, zoom in and out, and at any time easily return to the view where the own vessel is centered.
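To illustrate the difference between the two orientations, the following sketch rotates a contact's north/east offset into screen axes for Head Up mode. The function, parameter names, and heading convention (degrees clockwise from north) are hypothetical and serve only to demonstrate the concept.

```python
# Illustrative conversion of a contact's offset (metres north/east of own
# vessel) into screen axes for North Up vs Head Up display. All names and
# conventions here are assumptions for the sake of the example.
import math

def to_screen_axes(north_m: float, east_m: float,
                   heading_deg: float, head_up: bool) -> tuple[float, float]:
    """Return (right, up) components for drawing the contact on the map."""
    if not head_up:
        # North Up: screen "up" is true north, no rotation needed.
        return east_m, north_m
    # Head Up: rotate the world frame by the own heading so the bow points up.
    h = math.radians(heading_deg)
    right = east_m * math.cos(h) - north_m * math.sin(h)
    up = east_m * math.sin(h) + north_m * math.cos(h)
    return right, up
```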

Detections shown in the UI are based on received AIS signals, camera-based detections, or radar ARPA detections (if in use). All detections are visualized according to their detection source.

Important Note 3: The distance estimate for objects detected only by camera may vary due to sensor limitations. The inaccuracy grows with distance.
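As a rough illustration of why camera-only range estimates degrade with distance, the sketch below applies the pinhole-camera relation d = f·H/h with assumed values for the focal length and target height. It shows that a single pixel of measurement error shifts the estimate only slightly at short range but by tens of metres at long range; the numbers are illustrative, not the system's calibration.

```python
# Illustrative pinhole-camera range estimate: d = f * H / h.
# Focal length, target height, and pixel error are assumed values.
FOCAL_LENGTH_PX = 1500.0   # assumed focal length in pixels
TARGET_HEIGHT_M = 3.0      # assumed height of the observed vessel (m)

def estimate_distance(pixel_height: float) -> float:
    """Range estimate from the apparent pixel height of the target."""
    return FOCAL_LENGTH_PX * TARGET_HEIGHT_M / pixel_height

for pixel_height in (90.0, 30.0, 9.0):
    d = estimate_distance(pixel_height)
    # Error caused by a one-pixel mistake in the measured height:
    error = abs(estimate_distance(pixel_height - 1.0) - d)
    print(f"h = {pixel_height:5.1f} px  ->  d = {d:6.1f} m, "
          f"+/- {error:5.1f} m per pixel of error")
```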

Important Note 4: Visual detections are produced by machine learning algorithms, which are non-deterministic. In some situations an object may not be detected by the visual detector (a false negative), and in other cases the detector may report an object that does not exist (a false positive).

When the same object is detected by multiple sensors, the sensor information is fused so that only one detection is shown, with higher accuracy than any single source of information provides.
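As a conceptual sketch of the fusion idea, the example below associates detections from different sources by proximity and merges reports that fall within a gating distance. The data structure, the 50 m gate, and the simple position averaging are illustrative assumptions; this manual does not describe the actual fusion algorithm.

```python
# Conceptual sketch of associating detections from different sensors by
# proximity. The Detection fields, the 50 m gate, and the position averaging
# are illustrative assumptions, not the product's fusion logic.
import math
from dataclasses import dataclass

@dataclass
class Detection:
    source: str          # "AIS", "CAMERA" or "RADAR_ARPA"
    east_m: float        # position east of own vessel (m)
    north_m: float       # position north of own vessel (m)

GATE_M = 50.0            # assumed association distance

def fuse(detections: list[Detection]) -> list[Detection]:
    """Merge detections that lie within the gating distance of each other."""
    fused: list[Detection] = []
    for det in detections:
        for i, track in enumerate(fused):
            if math.hypot(det.east_m - track.east_m,
                          det.north_m - track.north_m) < GATE_M:
                # Same physical object seen by several sensors: average the
                # positions and record both sources.
                fused[i] = Detection(
                    source=f"{track.source}+{det.source}",
                    east_m=(track.east_m + det.east_m) / 2,
                    north_m=(track.north_m + det.north_m) / 2,
                )
                break
        else:
            fused.append(det)
    return fused
```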

Important Note 5: In some cases detection inaccuracy may limit the performance of sensor fusion. This can lead to a) one object being shown as two different detections from different sensors, or b) two separate nearby objects being shown as one fused object.
