We are surrounded by millions of sounds that contain important clues about our environment. For instance, if you hear someone screaming, you know that there is an emergency. If you hear a siren, you know an emergency vehicle is approaching. However, while humans are adept at contextualizing sounds, computers are still lagging in this regard. How great would it be if AI could understand sounds as well?
Cochl.Sense allows computers to understand what’s going on around them by enabling them to listen to their surroundings. Simply send an audio file or an audio stream from a microphone, and our system will tell you what is happening.
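To make this concrete, here is a minimal sketch of how a client might interpret the kind of result the service returns. The response shape below (a list of detected events with a tag, a time range, and a probability) is an illustrative assumption, not the documented API schema; consult the API reference for the real format.

```python
import json

# Assumed response shape for illustration only: a list of detected
# events, each with a tag name, a time window, and a probability.
EXAMPLE_RESPONSE = json.dumps({
    "events": [
        {"tag": "Siren", "start_time": 0.0, "end_time": 1.0, "probability": 0.92},
        {"tag": "Others", "start_time": 1.0, "end_time": 2.0, "probability": 0.55},
    ]
})

def detected_tags(response_json, threshold=0.5):
    """Return the tags of all events whose probability meets the threshold."""
    payload = json.loads(response_json)
    return [e["tag"] for e in payload["events"]
            if e["probability"] >= threshold]

print(detected_tags(EXAMPLE_RESPONSE))
# -> ['Siren', 'Others']
```

Filtering on a probability threshold like this lets an application ignore low-confidence detections and react only to events it trusts.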
Cochl.Sense is organized into different project categories:
The following events can be detected when creating an Emergency project in the dashboard.
'Fire_smoke_alarm', 'Glassbreak', 'Scream', 'Siren', 'Others'
The following events can be detected when creating a Human Interaction project in the dashboard.
'Clap', 'Finger_snap', 'Knock', 'Whisper', 'Whistling', 'Others'
The following events can be detected when creating a Human Status project in the dashboard.
'Burping', 'Cough', 'Fart', 'Hiccup', 'Laughter', 'Sigh', 'Sneeze', 'Snoring', 'Yawn', 'Others'
The following events can be detected when creating a Home Context project in the dashboard.
'Baby_cry', 'Dining_clink', 'Dog_bark', 'Electric_shaver_or_toothbrush', 'Keyboard_or_mouse', 'Knock', 'Toilet_flush', 'Water_tap_or_liquid_fill', 'Others'
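The four tag sets above can be captured in a small lookup table. The sketch below is an illustrative helper, not part of any SDK: the category names are taken from this section, and the tags are copied verbatim from the lists above. Note that a tag such as 'Knock' appears in more than one category.

```python
# Illustrative mapping of project category -> detectable event tags,
# transcribed from the lists in this section (not an official API).
PROJECT_TAGS = {
    "Emergency": ["Fire_smoke_alarm", "Glassbreak", "Scream", "Siren", "Others"],
    "Human Interaction": ["Clap", "Finger_snap", "Knock", "Whisper",
                          "Whistling", "Others"],
    "Human Status": ["Burping", "Cough", "Fart", "Hiccup", "Laughter",
                     "Sigh", "Sneeze", "Snoring", "Yawn", "Others"],
    "Home Context": ["Baby_cry", "Dining_clink", "Dog_bark",
                     "Electric_shaver_or_toothbrush", "Keyboard_or_mouse",
                     "Knock", "Toilet_flush", "Water_tap_or_liquid_fill",
                     "Others"],
}

def categories_for(tag):
    """List every project category whose tag set includes the given tag."""
    return [name for name, tags in PROJECT_TAGS.items() if tag in tags]

print(categories_for("Knock"))
# -> ['Human Interaction', 'Home Context']
```

A table like this is handy when deciding which project category to create in the dashboard: look up the event you care about and pick a category that detects it.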