PURPOSE:
Optoacoustic imaging offers high spatial resolution and the ability to image specific functional parameters in real time, making it a promising modality for a variety of applications. Despite these advantages, however, the applicability of real-time optoacoustic imaging is generally limited by a relatively small field of view.

METHODS:
In this work, we present a path towards panoramic optoacoustic tomographic imaging that does not require additional sensors or position trackers. We propose a two-step seamless stitching method for compounding multiple datasets acquired with a real-time 3D optoacoustic imaging system during a panoramic scan. The workflow is specifically tailored to the properties of the image data and the challenges they pose.
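
The abstract does not detail the two steps; the sketch below only illustrates the general shape of such a pipeline, assuming rigid translations between neighboring, equally sized volumes estimated with phase correlation (here via scikit-image) and a simple maximum rule for compounding. The function name stitch_panorama and all parameter choices are hypothetical and do not reflect the published implementation.

```python
import numpy as np
from skimage.registration import phase_cross_correlation

def stitch_panorama(volumes):
    """Illustrative two-step stitching: (1) chain pairwise translation
    estimates into global positions, (2) compound all volumes onto one
    panoramic canvas. Assumes equally sized, overlapping 3D volumes."""
    # Step 1: pairwise alignment. phase_cross_correlation returns the
    # shift that registers the moving volume to its predecessor; under
    # this convention the new volume's origin sits at the previous
    # origin plus that shift.
    positions = [np.zeros(3)]
    for ref, mov in zip(volumes[:-1], volumes[1:]):
        shift, _, _ = phase_cross_correlation(ref, mov)
        positions.append(positions[-1] + shift)
    positions = np.round(positions).astype(int)

    # Step 2: compounding. Place every volume into a common canvas,
    # here with a simple maximum rule as a stand-in for the paper's
    # resolution-aware blending.
    origin = positions.min(axis=0)
    extent = (positions - origin).max(axis=0) + np.array(volumes[0].shape)
    canvas = np.zeros(tuple(extent), dtype=np.float32)
    for vol, pos in zip(volumes, positions - origin):
        sl = tuple(slice(p, p + s) for p, s in zip(pos, vol.shape))
        canvas[sl] = np.maximum(canvas[sl], vol)
    return canvas
```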

RESULTS:
Evaluated on in vivo data, the presented alignment shows a mean error of [Formula: see text] with respect to ground-truth tracking data. The presented compounding scheme incorporates the physical resolution of the optoacoustic data and can therefore provide improved contrast compared with compounding approaches based on simple addition or averaging.
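
As a point of reference for the contrast claim, the following minimal sketch shows resolution-weighted blending of co-registered volumes, with per-voxel weights standing in for whatever physical resolution model the paper actually employs; the function weighted_compound and its weighting scheme are assumptions for illustration only.

```python
import numpy as np

def weighted_compound(volumes, weights):
    """Blend co-registered overlapping volumes with per-voxel weights
    (e.g. derived from a local resolution or sensitivity map) instead
    of plain addition or averaging. volumes and weights are stacks of
    shape (N, X, Y, Z)."""
    volumes = np.asarray(volumes, dtype=np.float32)
    weights = np.asarray(weights, dtype=np.float32)
    weight_sum = weights.sum(axis=0)
    # Normalize so regions covered by a single volume keep their values
    # and overlaps favor the better-resolved contribution.
    return (weights * volumes).sum(axis=0) / np.maximum(weight_sum, 1e-6)
```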

CONCLUSION:
The proposed method can produce optoacoustic volumes with an enlarged field of view and improved quality compared to current approaches in optoacoustic imaging. However, our study also reveals challenges specific to panoramic scans. In this light, we discuss relevant properties, challenges, and opportunities, and evaluate the performance of the presented approach on different input data.