The 360 Video Challenge
360-degree videos are currently recorded using anywhere from 2 to 32 individual cameras, each capturing up to 4K resolution. The more cameras used, the higher the resolution, and therefore the higher the quality of the final 360-degree experience.
However, combining the output of multiple camera views into a single seamless equirectangular projection is incredibly complex. The output of all these cameras must be stitched together in software before it can be edited or viewed, and the more cameras used, the more seams that need to be stitched.
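The equirectangular projection itself is simple to express: every output pixel corresponds to a (longitude, latitude) pair on the viewing sphere, and every camera ray maps back to those two angles. A minimal Python sketch of that mapping (the function name is illustrative, not part of any Radeon Loom API; the hard part in practice is everything around it):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a 3D unit view direction to equirectangular pixel coordinates.

    Longitude spans [-pi, pi] across the image width;
    latitude spans [-pi/2, pi/2] across the image height.
    """
    lon = math.atan2(x, z)                    # horizontal angle
    lat = math.asin(max(-1.0, min(1.0, y)))   # vertical angle, clamped
    u = (lon / math.pi + 1.0) * 0.5 * (width - 1)
    v = (lat / (math.pi / 2) + 1.0) * 0.5 * (height - 1)
    return u, v

# A ray pointing straight ahead (+z) lands at the center of the image.
u, v = direction_to_equirect(0.0, 0.0, 1.0, 4096, 2048)
```

Stitching software evaluates this mapping (plus each camera's calibration) for every output pixel, which is exactly the kind of per-pixel, independent work that suits a GPU.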
In addition, lens distortion must be corrected, parallax must be accounted for, and the exposure differences between each camera’s output have to be blended to produce a balanced final image. Because of this, combining the video output from professional-grade 360-degree camera rigs can require hours of post-processing and substantial compute resources.
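Exposure compensation, for example, amounts to computing a per-camera gain so that each camera's brightness agrees with its neighbors in the regions where they overlap. A toy sketch (production pipelines typically solve a least-squares system over all pairwise overlaps, but the idea is the same; this helper is hypothetical, not a Loom API):

```python
def exposure_gains(overlap_means):
    """Compute per-camera gain factors that pull each camera's mean
    brightness in its overlap regions toward the global average.

    overlap_means: one mean luminance value per camera, measured over
    the regions where that camera overlaps its neighbors.
    """
    target = sum(overlap_means) / len(overlap_means)
    return [target / m for m in overlap_means]

# Two cameras, one underexposed: the dim camera gets a gain above 1.
gains = exposure_gains([80.0, 120.0])
```

Applying these gains before blending prevents visible brightness steps at the seams.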
The 360 Video Revolution
Radeon Loom revolutionizes the 360-degree video stitching process by addressing its formidable challenges through massively parallel GPU processing to enable both real-time live stitching and fast offline stitching of 360 videos. Radeon Loom:
- Utilizes AMD’s open source implementation of the Khronos OpenVX™ computer vision framework*.
- Performs highly optimized GPU-accelerated decoding, lens correction, exposure compensation, seam finding, seam blending and merging, and encoding.
- Stitches input from up to 24 cameras to 4K x 2K output in real time.
- Stitches input from up to 63 cameras to 8K x 4K output offline, with 5 overlapping cameras at each pixel.
- Offline stitching uses hardware or software codecs.
- Supports virtual camera overlays, underlays, watermarking, and chroma keying.
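The seam-blending step listed above can be illustrated with a simple linear feather, where the blend weight shifts gradually from one camera to the next across the overlap (a deliberately simplified sketch, far cruder than a production blender):

```python
def feather_blend(a, b, overlap):
    """Cross-fade two overlapping scanline segments of length `overlap`:
    the weight moves linearly from image a to image b across the seam."""
    out = []
    for i in range(overlap):
        w = (i + 0.5) / overlap          # blend weight for image b
        out.append((1.0 - w) * a[i] + w * b[i])
    return out

# Blending a white strip into a black strip produces a smooth ramp
# instead of a hard edge at the seam.
ramp = feather_blend([255.0] * 4, [0.0] * 4, 4)
```

Seam finding chooses where this overlap region sits (ideally along low-contrast image content), and the blend then hides whatever mismatch remains.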
Radeon Loom SDK
The Radeon Loom SDK is expected to be available to 360 video application developers, hardware manufacturers, and VR content creators. The Beta Preview of the Radeon Loom Stitching Library is now available on GPUOpen.com.