One of the most practical ways the growing convergence of AI and IoT (AIoT) will impact our lives in the near term is in how we interact with the world: unlocking doors, entering an office building, checking in at an airport, and receiving elder and in-patient care.
IoT devices are getting smarter through the integration of AI that extracts insights from data. At the same time, when immediate action is required from an IoT device, the user experience is determined by how well the AI algorithm performs, which in turn depends on training with large amounts of quality data. Unlike facial recognition, with its vast pool of available data, most other applications lack the data needed to produce effective AI results. So how can we improve this? By combining highly efficient edge AI processing with 3D sensing, we can create endpoint systems that interact intelligently and unobtrusively with people.
In this webinar, we will discuss market trends and the technology demands of real-world deployments, taking a deep dive into smart building systems that combine multiple 3D sensing modalities with image sensors to create vastly improved outcomes for a variety of use cases. We will include straightforward methods for achieving these next-generation systems using reference designs that employ the latest RGB-IR image sensors, 3D sensing, and edge AI vision systems-on-chip (SoCs).
Join Ambarella, Lumentum, and ON Semiconductor to learn how to take full advantage of:
- Structured-light sensing for high-resolution 3D biometric identification
- 4K RGB-IR CMOS image sensor technology for camera-based smart building systems
- Occupancy sensing using imaging or 3D sensing
- The combination of edge AI vision processing and 3D sensing for the real world