
In-Car Communication

Easy and relaxed conversations in vehicles

Conversations in a vehicle can be eased significantly by an in-car communication system: talking passengers are recorded by microphones, and the speech signals are enhanced, amplified, and played back over loudspeakers to support the listeners.

Our in-car communication system features:
  • Improved conversations with natural speech quality
    • High playback volume due to superior feedback control
    • Reduction of disturbances and driving noise
    • Spatial speaker localization is preserved
  • Support for different car models (sedans, vans, convertibles)
  • Arbitrary configuration of
    • Seats (talking/listening zones)
    • Microphones and loudspeakers
  • Scalability in terms of algorithmic complexity
  • Easy configuration and system setup
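To illustrate what such a zone configuration could look like, here is a minimal sketch in Python; the structure and all field names are hypothetical and do not reflect sonoware's actual configuration format:

```python
# All names below are hypothetical and only illustrate the idea;
# they are not sonoware's actual configuration format.
config = {
    "car_model": "sedan",
    "zones": [
        {"name": "front", "seats": ["driver", "co-driver"],
         "microphones": ["mic_roof_front"], "loudspeakers": []},
        {"name": "rear", "seats": ["rear_left", "rear_right"],
         "microphones": ["mic_roof_rear"],
         "loudspeakers": ["ls_rear_left", "ls_rear_right"]},
    ],
}

def playback_routes(cfg):
    """Route every zone's microphones to the loudspeakers of all
    other zones, never back into the zone they were recorded in."""
    routes = []
    for src in cfg["zones"]:
        for dst in cfg["zones"]:
            if src is dst:
                continue  # a zone's own speakers would cause direct feedback
            for mic in src["microphones"]:
                for ls in dst["loudspeakers"]:
                    routes.append((mic, ls))
    return routes
```

The routing helper never feeds a zone's own microphones back into its loudspeakers, which is the basic precondition for keeping acoustic feedback manageable.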

Details

All sonoware signal processing algorithms are optimized for minimal delay, low computational complexity, and maximum portability across platforms:

Acoustic Feedback Control

Feedback cancellation
Feedback suppression
EQing strategies
Time-variant methods
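To give an idea of the cancellation principle, here is a minimal NLMS (normalized least mean squares) sketch in plain Python; it illustrates the textbook approach only and is not sonoware's implementation:

```python
import math

def nlms_feedback_canceller(loudspeaker, microphone, taps=8, mu=0.5, eps=1e-8):
    """Minimal NLMS sketch: adaptively estimate the loudspeaker-to-microphone
    feedback path and subtract its contribution from the microphone signal."""
    w = [0.0] * taps               # adaptive estimate of the feedback path
    x_buf = [0.0] * taps           # most recent loudspeaker samples
    out = []
    for x, d in zip(loudspeaker, microphone):
        x_buf = [x] + x_buf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, x_buf))   # predicted feedback
        e = d - y                                      # feedback-reduced signal
        norm = sum(xi * xi for xi in x_buf) + eps
        w = [wi + mu * e * xi / norm for wi, xi in zip(w, x_buf)]
        out.append(e)
    return out
```

Fed with a loudspeaker signal and a microphone signal containing a delayed, attenuated copy of it, the residual shrinks as the filter converges.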

Signal Enhancement

Microphone combination techniques (beamforming, automatic SNR-based selection)

Noise reduction for stationary and non-stationary sources
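The SNR-based selection can be sketched as follows; the per-frame speech and noise power estimates are assumed to come from upstream estimators, and all names are illustrative:

```python
import math

def snr_db(speech_power, noise_power, eps=1e-12):
    """Signal-to-noise ratio in dB from per-frame power estimates."""
    return 10.0 * math.log10((speech_power + eps) / (noise_power + eps))

def select_microphone(frames):
    """Pick the microphone with the best estimated SNR for the current frame.
    `frames` maps a mic name to its (speech_power, noise_power) estimates."""
    return max(frames, key=lambda mic: snr_db(*frames[mic]))
```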

Signal Distribution

Adaptive mixing and zoning
Automatic gain control
Voice-activity-dependent loss control
Noise-dependent gain control
Full-duplex communication
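As an illustration of noise-dependent gain control, a simple static characteristic might look like this (all parameter values are made up for the example):

```python
def noise_dependent_gain(noise_level_db, base_gain_db=0.0,
                         threshold_db=50.0, slope=0.5, max_gain_db=12.0):
    """Static gain characteristic (all values made up for the example):
    below the noise threshold the base gain is used; above it the gain
    grows by `slope` dB per dB of noise, capped at `max_gain_db`."""
    extra = max(0.0, noise_level_db - threshold_db) * slope
    return min(base_gain_db + extra, max_gain_db)
```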

Pre- and Post-Processing

Loudspeaker and microphone equalization
Dynamics: Limiter, compressor, de-esser, etc.
Signal conditioning based on psychoacoustic criteria
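A limiter, as the simplest of these dynamics processors, can be sketched like this; real implementations add look-ahead and psychoacoustically tuned time constants:

```python
def peak_limiter(samples, threshold=0.8, release=0.999):
    """Simple peak limiter sketch: when a sample would exceed the
    threshold, the gain is reduced instantly (hard attack); afterwards
    it recovers exponentially towards unity (release)."""
    gain = 1.0
    out = []
    for s in samples:
        if abs(s) * gain > threshold:
            gain = threshold / abs(s)        # instant attack: output hits threshold
        out.append(s * gain)
        gain = 1.0 - (1.0 - gain) * release  # slow recovery towards gain = 1
    return out
```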

Add-Ons

Hands-free integration with echo cancellation and barge-in support

Development of customer-specific algorithms

Inject your own signal processing algorithms anywhere in our signal chain to meet your custom demands.

Supported Platforms

ARM (Cortex-A series, Cortex-M series)
DSP (e.g. Analog Devices SHARC)
PC (Windows, Linux, macOS)

Your desired platform is not yet listed? Feel free to contact us and we will find a solution.

Intercom

Easy and relaxed conversations everywhere

Our intercom system is the software solution for speech or audio requirements in your individual product - tailored to your needs.

An example of intercom in the operating room

In the operating room, hitch-free communication is vitally important. However, numerous electrical devices produce noise or act as obstacles in the speech paths.

Our intercom system improves this situation by combining microphones, loudspeakers, and optimized algorithms to distribute speech, providing the foundation for more efficient surgeries. The system can be integrated into medical devices or into the furnishings of the operating room.

More use cases of intercom

Intercom is not restricted to medical environments. In general, it simplifies speech communication wherever high-level background noise or moderately large distances between the interlocutors make conversations difficult. Intercom addresses all these challenges and ensures hassle-free communication in applications such as PA systems, conference rooms, and audio/video chat.


Details

All sonoware signal processing algorithms are optimized for minimal delay, low computational complexity, and maximum portability across platforms. In order to account for the different acoustic conditions in setups with multiple zones, all algorithms are optimized for each recording/playback node individually.

Acoustic Feedback Control

Feedback cancellation

Feedback suppression

EQing strategies

Time-variant methods

Signal Enhancement

Microphone combination techniques (beamforming, automatic SNR-based selection)

Noise reduction for stationary and non-stationary sources

Cancellation of reference signals (text prompts, music, etc.)

Signal Distribution

Adaptive mixing and zoning

Automatic gain control

Voice-activity-dependent loss control

Noise-dependent gain control

Full-duplex communication

Pre- and Post-Processing

Loudspeaker and microphone equalization

Dynamics: Limiter, compressor, de-esser, etc.

Signal conditioning based on psychoacoustic criteria

Add-Ons

Hands-free integration with echo cancellation and barge-in support

Development of customer-specific algorithms

Inject your own signal processing algorithms anywhere in our signal chain to meet your custom demands.

Supported Platforms

ARM (Cortex-A series, Cortex-M series)

DSP (e.g. Analog Devices SHARC)

PC (Windows, Linux, macOS)

Your desired platform is not yet listed? Feel free to contact us and we will find a solution.

Hands-free Telephony

Hands-free telephony for a standard or special application?

Our system is extremely flexible and can be tailored to the needs of your very specific application. Are you struggling with high background noise, large distances between talkers and microphones, or constantly varying conditions? Do you even want to support multiple talking and listening zones?

We have the solution for you!

Our hands-free system currently comes in flavours optimized for cars, conference rooms, and heavy machines (agricultural and construction vehicles). It combines our core algorithms to deliver the best performance under your boundary conditions, such as bandwidth, delay, or resource restrictions.

Our goal is to integrate seamlessly into existing applications and exploit algorithmic synergies whenever possible. As an example, our in-car communication system features an integrated hands-free system with a combined feedback and echo canceller. This saves algorithmic complexity (no duplication of functionality) and even improves the cancellation performance in both applications.
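The synergy can be pictured as follows: instead of one adaptive filter per application, a single canceller models the common loudspeaker-to-microphone path and uses the summed playback signals as its reference (a conceptual sketch, not the actual implementation):

```python
def shared_canceller_reference(icc_playback, handsfree_playback):
    """Both the ICC feedback path and the hands-free echo path start at the
    same loudspeakers, so a single adaptive canceller can use the summed
    loudspeaker feed as its reference instead of two filters independently
    estimating the same acoustic path."""
    return [icc + hf for icc, hf in zip(icc_playback, handsfree_playback)]
```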

A selection of featured sub-algorithms:

Speech Recognition

Detect words. Automate. Innovate. Combine with beamforming for even better detection.

We have integrated 3rd-party speech recognition engines into our real-time-audio suite, with support for custom wakeup-words, vocabulary, and rules. You can easily react to any speech-recognition event in your code using sonoware’s API. We offer on-device and server-based speech recognition solutions.

React on-device …

int handler(struct SonoEventSubscription *self, void* data, int code){
    // React to the recognition event
    fade_out_music_playback();
    return 0;
}

// Subscribe to the result changes in our real-time-audio suite
struct SonoEventSubscription *sr_result =
    sonoEventSubscribeByPath(
      sonoContext,
      "~/core/srResult/result",
      &handler,
      NULL
    );

This C code can run alongside our real-time-audio suite. It uses the context handle of our real-time-audio suite and allows you to react to any change in real time on your processing device.

In this imaginary example it fades out the music playback.


… or react from anywhere

# Connect to device running sonoware real-time-audio suite
com = Communicator()
com.connect("10.42.8.21")
sr_result = com.register("~/core/srResult/result")
# Define handler that will turn on LEDs based on the wakeup-word
def turn_on_leds_on_wakeup_word(updated_model):
    if updated_model.value == 'hey sono':
        turn_on_leds()
# Register that handler to be called each time there is a
# new speech-recognition result
sr_result.listen_on_change(turn_on_leds_on_wakeup_word)

This code runs on low-power embedded chips running MicroPython. Here we connect remotely to a sonoware real-time-audio suite and can react to any change in real time.

In this example it turns on LEDs when the wakeup-word of the speech recognition is detected.


Siren Detection

Detect acoustic alarm signals using Machine Learning approaches.

What’s it about?

The automatic detection of the acoustic alarm signals of ambulances, police cars, or fire trucks is a vital element for improving safety in traffic. Human drivers can be warned early, and autonomous vehicles also benefit from knowing what is happening around them. As a result, emergency forces can reach their destination faster and more safely.

What’s our solution?

We detect siren sounds automatically with a scalable approach depending on the needs of the application: from slim algorithms based on statistical models requiring only minimal processing resources up to large-scale Machine Learning solutions when accuracy matters.
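At the slim end of that scale, a statistical detector can be surprisingly simple. The sketch below, which is purely illustrative, flags a wailing siren from a dominant-frequency track by counting pitch-sweep direction changes:

```python
def looks_like_wailing_siren(freq_track, min_span_hz=200.0, min_turns=2):
    """Purely illustrative sketch: a wailing siren sweeps its pitch up and
    down, so require a large enough frequency span and repeated direction
    changes in the dominant-frequency track (one value per analysis frame)."""
    if len(freq_track) < 3:
        return False
    span = max(freq_track) - min(freq_track)
    turns = sum(
        1
        for a, b, c in zip(freq_track, freq_track[1:], freq_track[2:])
        if (b - a) * (c - b) < 0   # slope changes sign: top or bottom of a sweep
    )
    return span >= min_span_hz and turns >= min_turns
```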

Check out our online demos

You want to see our solution in action? No problem, we have you covered! Just hit the button and check out our online demos that showcase the performance of our algorithm in various conditions.

Our offer


Versatile

Support for different international siren sounds such as howling, wailing, Martinshorn.

Robust

Handles difficult scenarios such as Doppler shift, reverberation, and multiple sirens.

Flexible

Microphone placement according to your specifications.

Direction

Estimate further event data, e.g. direction of the siren or relative speed.
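Direction estimation typically builds on the time difference of arrival (TDOA) between microphones. A minimal far-field sketch for an assumed two-microphone array (illustrative only, not necessarily our production method):

```python
import math

def direction_from_tdoa(delay_s, mic_distance_m, speed_of_sound=343.0):
    """Angle of arrival (degrees, 0 = broadside) from the time difference of
    arrival between two microphones, assuming a far-field source."""
    # Clamp to the valid asin range to absorb measurement noise.
    x = max(-1.0, min(1.0, delay_s * speed_of_sound / mic_distance_m))
    return math.degrees(math.asin(x))
```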

Predictive Maintenance

Detect emerging defects in mechanical components with acoustic and vibroacoustic sensors and Machine Learning approaches.

What’s the problem?

Products with mechanical components are prone to wear, exterior influences, and errors in components or during assembly.

What’s our solution?

By using data from various sensors (e.g. microphones and accelerometers) and the latest Machine Learning algorithms, even the smallest deviations in the operating state of a component can be detected. Continuously, periodically, or just during the production of your products - it’s up to you!
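At its simplest, such a detector learns a baseline from healthy components and flags deviations; the sketch below uses a plain z-score on a single feature, whereas real deployments use richer features and models:

```python
import math

def fit_baseline(healthy_features):
    """Learn mean and standard deviation of a feature (e.g. the RMS vibration
    level) from recordings of components known to be healthy."""
    n = len(healthy_features)
    mean = sum(healthy_features) / n
    var = sum((x - mean) ** 2 for x in healthy_features) / n
    return mean, math.sqrt(var)

def is_anomalous(feature, baseline, z_threshold=3.0):
    """Flag an operating state whose feature deviates more than z_threshold
    standard deviations from the healthy baseline."""
    mean, std = baseline
    return abs(feature - mean) > z_threshold * (std + 1e-12)
```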

Your Advantage

With condition-based monitoring and predictive maintenance you are able to:

  • Optimize maintenance intervals
  • Support your QA team with new tools
  • Reduce cost
  • Gain a fast and easy way for fault detection and troubleshooting

Our offer


Analysis and Consultation

We analyze your application and advise you on a suitable monitoring approach.

Support for Data Acquisition

We help you determine which data is required to train the specialized recognition models.

Design and Development of a Model

We develop, train and test the Machine Learning models to fit the task.

Series Development

Adjustment of the recognition models to the available resources.