Iterative model-based algorithms are known to enable more accurate and quantitative optoacoustic (photoacoustic) tomographic reconstructions than standard back-projection methods. However, three-dimensional (3D) model-based inversion is often hampered by high computational complexity and memory overhead. Parallel implementations on a graphics processing unit (GPU) have been shown to efficiently reduce the memory requirements by computing the action of the optoacoustic model matrix on the fly, but the high computational complexity still makes these approaches impractical for large 3D optoacoustic datasets. Herein, we show that the computational complexity of 3D model-based iterative inversion can be significantly reduced by splitting the model matrix into two parts: a maximally sparse matrix containing only one entry per voxel-transducer pair, and a second matrix corresponding to a cyclic convolution. We further suggest reconstructing the images by multiplying the transpose of the model matrix calculated in this manner with the acquired signals, which is equivalent to using a very large regularization parameter in the iterative inversion method. The performance of these two approaches is compared to that of standard back-projection and a recently introduced GPU-based model-based method using datasets from in vivo experiments. Reconstruction with the new iterative method was accelerated by approximately an order of magnitude, while multiplication with the transpose of the matrix is shown to be as fast as standard back-projection.
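The matrix splitting described above can be illustrated with a toy sketch. The following Python snippet is not the authors' implementation: the problem sizes, time-of-flight indices, weights, and the N-shaped convolution kernel are all hypothetical placeholders. It factors the forward model as A = C·S, where S is maximally sparse (one entry per voxel-transducer pair, scattering each voxel value into the time bin of its time of flight) and C applies a cyclic convolution per detector channel via the FFT; the transpose A^T = S^T·C^T, used here for the back-projection-like reconstruction, then costs no more than one FFT-based correlation plus a sparse gather.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)
n_vox, n_det, n_t = 60, 4, 128  # hypothetical toy sizes

# Hypothetical time-of-flight sample indices and amplitude weights
# (e.g. 1/r factors) for every voxel-transducer pair.
tof = rng.integers(10, n_t - 10, size=(n_det, n_vox))
w = rng.random((n_det, n_vox))

# Sparse part S: exactly one entry per voxel-transducer pair.
rows = (np.arange(n_det)[:, None] * n_t + tof).ravel()
cols = np.tile(np.arange(n_vox), n_det)
S = csr_matrix((w.ravel(), (rows, cols)), shape=(n_det * n_t, n_vox))

# Cyclic-convolution part C: a shared kernel applied per detector via
# the FFT (a crude N-shaped kernel, purely for illustration).
kernel = np.zeros(n_t)
kernel[:3] = [1.0, 0.0, -1.0]
K = np.fft.rfft(kernel)

def forward(x):
    """A x = C (S x): sparse scatter, then cyclic convolution."""
    spikes = (S @ x).reshape(n_det, n_t)
    return np.fft.irfft(np.fft.rfft(spikes, axis=1) * K, n=n_t, axis=1).ravel()

def transpose(y):
    """A^T y = S^T (C^T y): cyclic correlation, then sparse gather."""
    sig = y.reshape(n_det, n_t)
    corr = np.fft.irfft(np.fft.rfft(sig, axis=1) * np.conj(K), n=n_t, axis=1)
    return S.T @ corr.ravel()

# Sanity check: <A x, y> == <x, A^T y>, so transpose() is the true adjoint.
x, y = rng.random(n_vox), rng.random(n_det * n_t)
assert np.isclose(forward(x) @ y, x @ transpose(y))
```

Because C is circulant, its transpose is a cyclic correlation, implemented above by multiplying with the conjugate kernel spectrum; both `forward` and `transpose` therefore cost one FFT pass plus one sparse matrix-vector product rather than a dense matrix multiplication.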