Today’s conferencing technology looks very different than it did two years ago. Meeting rooms are evolving rapidly to enable “meeting equity” – where in-person and virtual attendees have equal opportunity to communicate, contribute and share ideas. This evolution is being driven by innovations in cameras, software applications, and connectivity infrastructure.
When we meet in person, interactions are focused on the speaker. When the speaker changes, our attention shifts accordingly.
In hybrid meetings with a single-camera, single-display setup, remote participants watch a zoomed-out video of the entire room on a small display, often forced to guess who is speaking.
AI is increasingly being utilised to make systems more context aware. Remote participants might see a window of the entire room, as well as windows focused on the changing speakers. This ability to shift and zoom in on areas of focus better reflects the experience of an in-person participant, adding an important dimension to meeting equity.
While AI is primarily being implemented at the application level (e.g., Zoom's Smart Gallery), I expect more hardware manufacturers to integrate it into devices earlier in the decision-making chain.
The ability to focus on various meeting participants requires multiple sensors, and we are already seeing new installations equip conference rooms with two cameras and two displays. We expect this trend to continue, with more appliances spread around the room instead of confined to the immediate vicinity of the display. Cameras are also being equipped with multiple sensors to capture a wider field of view.
Peripherals supported by AI are growing in number as well. For AI to identify the speaker, a conference room must be equipped with audio beamforming capabilities, which require multiple microphones placed around a room for directional signal reception.
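To make the beamforming idea concrete, here is a rough sketch (not a production pipeline) of how two microphones can localise a speaker from the time difference of arrival between their signals. The mic spacing, sample rate, and function name are illustrative assumptions, not details from any particular product:

```python
import numpy as np

# Hypothetical setup: two microphones 10 cm apart, sampling at 48 kHz.
SPEED_OF_SOUND = 343.0  # m/s, at room temperature
MIC_SPACING = 0.10      # m
SAMPLE_RATE = 48_000    # Hz

def estimate_bearing(left: np.ndarray, right: np.ndarray) -> float:
    """Estimate the angle of arrival (degrees from broadside) of a sound
    source from the inter-microphone delay (TDOA), via cross-correlation."""
    corr = np.correlate(left, right, mode="full")
    lag = int(corr.argmax()) - (len(right) - 1)   # delay in samples
    tdoa = lag / SAMPLE_RATE                      # delay in seconds
    # Far-field approximation: sin(theta) = c * tdoa / d
    sin_theta = np.clip(SPEED_OF_SOUND * tdoa / MIC_SPACING, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))

# Synthetic check: white noise reaching the right mic 5 samples late,
# i.e. the source sits off to one side of the microphone pair.
rng = np.random.default_rng(0)
sig = rng.standard_normal(4096)
left, right = sig[5:], sig[:-5]
angle = estimate_bearing(left, right)  # sign indicates which side of broadside
```

A real conference-room array uses many more microphones and steers a beam toward the estimated direction, but the underlying geometry is the same pairwise delay estimate.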
Analog to digital
Currently, many conference rooms are equipped with mechanical pan–tilt–zoom (PTZ) cameras capable of remote directional and zoom control, which enable AI to focus on the speaker by mechanically shifting the focus of the camera. With a digital version (ePTZ), the shift of focus happens instantaneously as the camera cuts to areas of interest, zooming in and scaling up images.
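The crop-and-scale mechanism behind ePTZ can be sketched in a few lines. This is a minimal illustration, assuming a single-channel frame and nearest-neighbour upscaling; the function name and parameters are hypothetical:

```python
import numpy as np

def eptz(frame: np.ndarray, cx: int, cy: int, zoom: int) -> np.ndarray:
    """Digital pan/tilt: crop a window centred near (cx, cy).
    Digital zoom: scale the crop back up to full frame size
    (nearest-neighbour, for simplicity)."""
    h, w = frame.shape
    ch, cw = h // zoom, w // zoom                 # crop size for this zoom level
    top = min(max(cy - ch // 2, 0), h - ch)       # clamp crop to frame bounds
    left = min(max(cx - cw // 2, 0), w - cw)
    crop = frame[top:top + ch, left:left + cw]
    return np.repeat(np.repeat(crop, zoom, axis=0), zoom, axis=1)

frame = np.arange(64).reshape(8, 8)               # toy 8x8 "sensor" frame
out = eptz(frame, cx=6, cy=6, zoom=2)             # 2x zoom toward bottom-right
```

Because the "camera move" is just array indexing on the full sensor readout, the cut to a new area of interest is instantaneous, which is exactly the advantage over a mechanical PTZ head.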
HDBaseT is the best-performing video distribution technology on the market today, offering the highest resolution (fully uncompressed HDMI 2.0, 4K@60 4:4:4) with zero latency at distances of up to 100 metres (328ft). With the growing use of AI, the increase in quantity and quality of cameras, displays and peripherals, and the shift to ePTZ, HDBaseT is best positioned to form the connectivity infrastructure for the advanced meeting rooms of tomorrow.
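A quick back-of-the-envelope calculation shows why uncompressed 4K@60 4:4:4 demands multi-gigabit links in the first place. The figures below are standard video parameters, not from the article, and blanking intervals and link-layer overhead are ignored:

```python
# Raw active-video data rate for uncompressed 4K@60 with 4:4:4 sampling
# at 8 bits per colour channel (assumed bit depth).
width, height = 3840, 2160
fps = 60
bits_per_pixel = 3 * 8          # 4:4:4 keeps full resolution on all channels

gbps = width * height * fps * bits_per_pixel / 1e9
print(f"{gbps:.1f} Gbit/s")     # ~11.9 Gbit/s of active video alone
```

Add blanking intervals and encoding overhead and the link rate climbs well past that, which is why uncompressed long-reach distribution is a meaningful differentiator.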
HDMI extension alone will not power the next wave of industry innovation. For multiple-camera, multiple-display conference rooms, I believe the industry will transition to USB 3.0 and eventually leverage native extension of Camera Serial Interface (CSI-2) traffic directly from cameras.
Meeting equity is the key to a meaningful hybrid experience, and these innovations are helping the industry make it a reality.