Kria KV260 Basic Accessory Pack
Power supply and adapter, Ethernet cable, USB cables and accessories, and a DP or HDMI™ monitor.
Event-based vision (EBV) sensing offers faster sensing speeds, improved operation in unpredictable lighting conditions, and decreased communication demands compared to frame-based sensors.¹ This application showcases an EBV sensor integrated via MIPI with the AMD Kria™ KV260 Vision AI Starter Kit, executing object detection and tracking in a streamlined, end-to-end accelerated pipeline. The application includes a GitHub reference design, empowering customers to customize it to suit their specific needs.
This Kria KV260 Vision AI Starter Kit application provides easy bring-up steps for event camera interfacing and ML inferencing on event data streams. It also has a GitHub page that allows you to modify the pipeline to create your own customized applications.
Instead of capturing images at a fixed time interval, an event camera senses changes in brightness at a per-pixel level asynchronously and creates event data streams. Each pixel reports when it detects a relative change in illumination intensity, an increase or decrease beyond a defined percentage of the previous intensity. Learn more about how event vision works here.
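As a concrete illustration, events are commonly represented as (x, y, polarity, timestamp) tuples. The minimal Python sketch below builds a synthetic event stream in that layout and accumulates one time window into a 2D "event frame"; the sensor geometry and the random stream are illustrative assumptions, not data from a real camera.

```python
import numpy as np

# Assumed sensor geometry and a synthetic event stream for illustration.
WIDTH, HEIGHT = 640, 480

# Each event is (x, y, polarity, timestamp_us): polarity is +1 for a
# brightness increase and -1 for a decrease, reported per pixel.
rng = np.random.default_rng(0)
n_events = 10_000
events = np.zeros(n_events, dtype=[("x", "u2"), ("y", "u2"),
                                   ("p", "i1"), ("t", "u8")])
events["x"] = rng.integers(0, WIDTH, n_events)
events["y"] = rng.integers(0, HEIGHT, n_events)
events["p"] = rng.choice([-1, 1], n_events)
events["t"] = np.sort(rng.integers(0, 33_000, n_events))  # one ~33 ms window

# Accumulate the window into a 2D "event frame" that downstream
# frame-based tools can consume.
frame = np.zeros((HEIGHT, WIDTH), dtype=np.int16)
np.add.at(frame, (events["y"], events["x"]), events["p"])
print("non-zero pixels in window:", np.count_nonzero(frame))
```

Accumulating a short window this way is a simple bridge between the asynchronous event stream and tools that expect frames.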
Event camera datasets are provided by several organizations. Visit the PROPHESEE dataset page, or the DSEC and eTraM dataset pages. If you have an event camera, you can also capture its data streams and create a custom dataset.
If the event camera has a MIPI interface, the Kria SOM SmartCam design with the Raspberry Pi camera example can provide general guidance on creating the AMD Vivado and AMD Vitis designs. Because an event camera generates event data rather than image frames, the Vivado IP pipeline consists of a MIPI-DMA-PS (DDR) path and does not require the demosaic, gamma, and color correction IP cores in between. For a USB-based event camera, set up the necessary Linux® driver and start capturing and working with the camera. Here is a GitHub link for an application design. For further customization, please click the Contact LogicTronix button above.
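For the USB path, a capture loop with PROPHESEE's Metavision SDK Python bindings might look like the sketch below. This assumes the SDK is installed and a supported camera is connected; the exact API and defaults can vary between SDK versions.

```python
from metavision_core.event_io import EventsIterator

# An empty input path opens the first available live (USB) camera;
# a file path would replay a recorded stream instead.
mv_iterator = EventsIterator(input_path="", delta_t=10000)  # 10 ms slices

for evs in mv_iterator:
    # Each slice is a structured NumPy array with fields x, y,
    # p (polarity), and t (timestamp in microseconds).
    if evs.size > 0:
        print(f"{evs.size} events, t = {evs['t'][0]}..{evs['t'][-1]} us")
```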
An event camera is triggered by events: unlike an RGB image camera, it generates data streams only when there is movement in the captured region. Overall processing bandwidth requirements are therefore lower than for an RGB camera and its data, and an event camera can also operate at higher speeds than a traditional image camera.¹ More information about the advantages of event-based cameras and vision is available here.
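A back-of-envelope comparison makes the bandwidth point concrete. All figures below (resolution, frame rate, average event rate, bytes per event) are assumptions chosen for illustration only:

```python
# Frame camera: every pixel is sent every frame, motion or not.
width, height, fps = 1280, 720, 30
rgb_bytes_per_s = width * height * 3 * fps          # ~83 MB/s, constant

# Event camera: data scales with scene activity, not resolution x fps.
event_rate = 5e6       # assumed average of 5 Mev/s under moderate motion
bytes_per_event = 8    # assumed packed (x, y, p, t) encoding
evt_bytes_per_s = event_rate * bytes_per_event      # ~40 MB/s at that rate

print(f"RGB stream:   {rgb_bytes_per_s / 1e6:.0f} MB/s")
print(f"Event stream: {evt_bytes_per_s / 1e6:.0f} MB/s "
      f"(approaches zero in a static scene)")
```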
This tutorial showcases event-based ML training and inferencing similar to that used with RGB image-based cameras. You can also visit the PROPHESEE YOLO tutorial page for training and inferencing a YOLO network on event-based data. Other tutorials for training and inferencing ML algorithms or networks on event camera data are also available.
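One common way to reuse frame-based training and inferencing flows, in the spirit of the PROPHESEE YOLO tutorial, is to accumulate events into per-polarity histograms that a detector can consume as an image-like tensor. The sketch below is an assumption-level illustration in plain NumPy, not the tutorial's exact preprocessing:

```python
import numpy as np

def events_to_histogram(events, height, width):
    """Accumulate an event slice into a 2-channel (negative/positive
    polarity) histogram so a frame-based detector such as YOLO can
    consume it. Polarity encoding (0/1 or -1/+1) varies by source."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    channel = (events["p"] > 0).astype(np.intp)  # 0: negative, 1: positive
    np.add.at(hist, (channel, events["y"], events["x"]), 1.0)
    return hist

# Usage with any structured event array holding x, y, p, t fields
# (for example, the synthetic stream from the earlier sketch):
# tensor = events_to_histogram(events, 480, 640)   # shape (2, H, W)
# Normalize and feed the tensor to the detector's input layer.
```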