A mental health startup has shut down and open-sourced its technology. For seven years, California-based Kintsugi developed AI to detect depression and anxiety from speech. After failing to secure FDA clearance, the company released most of that technology to the public, noting potential applications beyond healthcare, such as detecting deepfake audio.
Mental health assessments often rely on patient questionnaires rather than lab tests. Kintsugi's software instead analyzed speech patterns for indicators of mental health conditions. The company never detailed its method publicly, but the AI reportedly picked up subtle shifts in how people speak, and research showed its effectiveness matched established depression screening tools.
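Kintsugi has not published how its model works, so the following is purely a hypothetical sketch of the general approach such systems take: slice a waveform into short frames, compute simple prosodic features (here, pause ratio and loudness variability — both invented stand-ins, not Kintsugi's actual features), and feed them to a downstream scorer.

```python
# Hypothetical sketch only: Kintsugi's real feature set and model are not public.
# Shows the generic shape of acoustic screening: frame the audio, compute
# prosodic statistics, and return them for a downstream classifier.
import numpy as np

def extract_features(wave: np.ndarray, sr: int) -> dict:
    """Compute a few generic prosodic features from a mono waveform."""
    frame = sr // 50                       # 20 ms frames
    n = len(wave) // frame
    frames = wave[: n * frame].reshape(n, frame)
    energy = frames.std(axis=1)            # per-frame loudness
    voiced = energy > 0.1 * energy.max()   # crude speech/pause split
    return {
        "pause_ratio": float(1.0 - voiced.mean()),  # flat affect -> more pauses
        "energy_variability": float(energy.std()),  # monotone -> low spread
    }

# Synthetic "speech": two tone bursts separated by silence.
sr = 16000
t = np.arange(sr // 4) / sr
burst = np.sin(2 * np.pi * 200 * t)
silence = np.zeros(sr // 4)
wave = np.concatenate([burst, silence, burst, silence])

feats = extract_features(wave, sr)
print(feats["pause_ratio"])  # roughly 0.5 -- half the clip is silence
```

In a real system these hand-built statistics would likely be replaced by learned representations, but the pipeline shape (waveform in, risk indicators out) is the same idea the article describes.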
Kintsugi aimed to complement questionnaires like the PHQ-9 with a more objective signal and to expand access to screening. Clinical deployment, however, required FDA clearance. The company pursued approval through the De Novo pathway, but regulatory hurdles slowed the process; founder Grace Chang noted that the framework was not designed for AI systems that keep evolving.
Despite its regulatory efforts, Kintsugi ran into funding difficulties and opted to open-source its technology instead. That raised misuse concerns: depression screening tools could be deployed by employers or insurers without the safeguards clinical use would impose. And because open-source releases typically lack regulatory documentation, FDA approval for products derived from the code becomes harder.
Chang acknowledged the misuse concerns but argued that underuse was the greater risk. Not everything was released: the company withheld security measures around synthetic-voice detection. That capability grew out of efforts to harden its mental health models, which turned out to distinguish human voices from AI-generated ones, an application outside the FDA's oversight.
Chang hopes others will continue Kintsugi's work and carry it through the FDA's requirements. The shutdown underscores the clash between startup timelines and medical regulation, but Chang encourages others to keep innovating.
