Robots need to see, hear, speak, and express themselves as naturally as humans do. Furhat is designed for human social interaction and is built as a multimodal system of modular subsystems that handle facial animation, neck motion, visual perception, audio processing, cloud service integration and other functions, allowing it to interact with people the way we interact with each other.
One of Furhat’s most distinctive features is its simple yet powerful back-projection technology. The face is projected onto replaceable polymer masks that give Furhat its human-like appearance. The combination of facial animation and swappable masks makes it easy to create expressive robot characters of any ethnicity, age, and gender.
Furhat has a wide field-of-view camera integrated with computer vision to track multiple users in real time, analyse facial expressions, and estimate head pose and user distance. This includes a highly accurate face detection system based on a state-of-the-art deep learning (single-shot detector) model, depth estimation, and spatial modelling.
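To give a rough idea of how this perception data surfaces in application code, here is a minimal sketch that reacts to users entering and leaving the camera's view using the Kotlin flow DSL. The state name is illustrative, and the package path, `users` model, and event names are our reading of the SDK's flow API and should be checked against the current SDK documentation.

```kotlin
import furhatos.flow.kotlin.*

// Sketch (assumed API): attend to users as the vision system detects them.
val Attending: State = state {
    onEntry {
        // Attend a tracked user if one is already in view.
        if (users.count > 0) furhat.attend(users.random)
    }
    onUserEnter {
        // `it` is the newly detected user; switch attention to them.
        furhat.attend(it)
    }
    onUserLeave {
        // If the attended user leaves, fall back to any remaining user.
        if (users.count > 0) furhat.attend(users.random)
    }
}
```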
Furhat is designed for natural conversations with rapid turn-taking. You get fine-grained control over initiative, turn-taking, interruptions, error handling, and priming the speech recogniser with expected utterances. You can also fully automate conversations and design interactions with our LLM-driven FurhatAI tools.
Build interactions in Kotlin or use other programming languages with the Remote API. Furhat comes with a powerful set of programming tools for researchers, educators, developers, and students.
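The sketch below shows roughly what a Kotlin skill looks like and how the conversation controls described above come together: the robot takes the initiative with a question, primes the recogniser with expected intents, and handles silence as an error case. Class and state names and the prompts are illustrative; package paths and DSL details should be verified against the SDK documentation for the version you are using.

```kotlin
import furhatos.flow.kotlin.*
import furhatos.nlu.common.No
import furhatos.nlu.common.Yes
import furhatos.skills.Skill

// Sketch (assumed skill skeleton): entry point that starts the flow.
class DemoSkill : Skill() {
    override fun start() {
        Flow().run(Start)
    }
}

val Start: State = state {
    onEntry {
        // ask() speaks and then listens in a single turn.
        furhat.ask("Would you like to hear a fun fact?")
    }
    onResponse<Yes> {
        // Matched one of the expected intents.
        furhat.say("Great! Did you know my face is back-projected onto a mask?")
    }
    onResponse<No> {
        furhat.say("No problem, maybe another time.")
    }
    onResponse {
        // Fallback for any other recognised speech.
        furhat.say("I heard you say: ${it.text}")
        reentry()
    }
    onNoResponse {
        // No speech detected: re-prompt the user.
        furhat.say("Sorry, I didn't catch that.")
        reentry()
    }
}

fun main(args: Array<String>) {
    Skill.main(args)
}
```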
Explore SDK

Don't have any prior experience with coding? You can also use our LLM-driven conversation designer to rapidly ideate, create, and test interactions through prompting.
Explore Creator