Amadeus Acoustics to unveil latest immersive engine software

The new ART::IP immersive performance software module boasts the ability to manage up to 128 sound sources and 256 loudspeakers

Amadeus Acoustics, a leader in transformative audio technology, will unveil its latest immersive engine as part of its ART platform for multi-channel and multi-room applications at booth 7H840.

The new ART::IP immersive performance software module is designed to build on the features of its predecessor with enhanced power, flexibility, and superior sound quality, now supporting a 96 kHz sample rate.

Amadeus Acoustics says the upgraded system delivers immersive sound processing for any number of audience zones while maintaining a consistent spatial experience for every seat. It can manage up to 128 sound sources and 256 loudspeakers.

The system automatically optimises processing for all audience zones, eliminating the need for individual mixes to account for fill speakers or challenging architectural spaces. It is also brand-agnostic, integrating with any loudspeaker system.

One of its standout features, the company says, is the ability to define sources as either immersive sources or performer sources. Immersive sources handle, for example, spatialised sound effects throughout the room, while performer sources are tailored for live instruments, singers, or playback on stage. Performer sources are automatically snapped to the stage area, making full use of the main PA system's headroom, with automated precedence (Haas effect) processing applied across all loudspeakers to maintain precise sound localisation.
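Amadeus does not publish the internals of this optimisation, but the general idea behind precedence-based alignment can be sketched in a few lines: each loudspeaker feed is delayed so that the direct sound from the stage position reaches a listening zone a few milliseconds before the reinforcement, and the precedence (Haas) effect then pulls localisation toward the performer. The positions, the 10 ms margin, and the function names below are illustrative assumptions, not Amadeus parameters.

```python
# Illustrative sketch of precedence-based delay alignment (not Amadeus code).
import math

SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C
HAAS_OFFSET_MS = 10.0    # assumed margin, inside the ~5-30 ms precedence window

def propagation_delay_ms(a, b):
    """Acoustic travel time in milliseconds between two 3D points in metres."""
    return 1000.0 * math.dist(a, b) / SPEED_OF_SOUND

def precedence_delays(source_pos, speaker_positions, zone_pos):
    """Delay each loudspeaker feed so the stage source's direct sound
    arrives at the listening zone HAAS_OFFSET_MS before the reinforcement."""
    direct_ms = propagation_delay_ms(source_pos, zone_pos)
    delays = {}
    for name, spk_pos in speaker_positions.items():
        spk_ms = propagation_delay_ms(spk_pos, zone_pos)
        # Electronic delay added to the feed; clamped so it never goes negative.
        delays[name] = max(0.0, direct_ms + HAAS_OFFSET_MS - spk_ms)
    return delays

if __name__ == "__main__":
    stage = (0.0, 0.0, 1.5)                                    # performer position, metres
    speakers = {"main_L": (-4.0, 1.0, 5.0), "fill_R": (6.0, 8.0, 3.0)}
    zone = (2.0, 10.0, 1.2)                                    # one audience zone
    print(precedence_delays(stage, speakers, zone))
```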

Furthermore, loudspeaker configurations can be freely customised to fit the architectural requirements of the space.

Additionally, the system incorporates Amadeus Wavefield Panning, a delay-based panning feature for a truly immersive 3D experience. A unique motion-speed-dependent delay interpolation ensures that even sensitive sources, such as vocalists, can be moved precisely and free of audible artefacts.
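The company does not say how this interpolation is implemented. As a rough illustration of why delay glides need careful handling, the sketch below slews a fractional delay toward its target while capping the change per audio block: moving a delay while audio plays is effectively a slight resample, so bounding the per-block change bounds the resulting pitch shift. The block size, pitch-deviation limit, and function names are assumptions for the example, not ART::IP parameters.

```python
# Illustrative sketch of slew-limited fractional-delay interpolation
# (not the Amadeus implementation).
import numpy as np

SAMPLE_RATE = 96_000      # matches the 96 kHz engine rate mentioned above
MAX_PITCH_DEV = 0.003     # assumed cap: roughly 0.3 % pitch deviation (a few cents)

def render_block(history, current_delay, target_delay, block_size):
    """Read `block_size` delayed samples from `history` (newest sample last),
    gliding the fractional delay from `current_delay` toward `target_delay`.
    Returns the output block and the delay actually reached."""
    # Changing the delay by `step` samples over `block_size` samples resamples
    # the audio by a factor of (1 - step / block_size), so cap the step.
    max_step = MAX_PITCH_DEV * block_size
    step = float(np.clip(target_delay - current_delay, -max_step, max_step))
    reached = current_delay + step

    # Per-sample delay ramp and linear-interpolated fractional read.
    delays = np.linspace(current_delay, reached, block_size, endpoint=False)
    newest = len(history) - block_size
    read_pos = np.clip(newest + np.arange(block_size) - delays, 0, len(history) - 2)
    lo = np.floor(read_pos).astype(int)
    frac = read_pos - lo
    out = history[lo] * (1.0 - frac) + history[lo + 1] * frac
    return out, reached

if __name__ == "__main__":
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    history = np.sin(2 * np.pi * 440.0 * t)       # one second of a 440 Hz tone
    block, new_delay = render_block(history, current_delay=48.0,
                                    target_delay=96.0, block_size=128)
    print(block.shape, new_delay)                 # delay moves by at most ~0.38 samples per block
```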

Other advancements include a fully 3D visual workflow for intuitive setup and monitoring; remote control integration with external platforms such as tracking systems, QLAB, and Q-Sys plugins; and configurable downmix capabilities for external feeds.