
An Investigation of Incorporating Mamba for Speech Enhancement

🌈 Abstract

This work aims to study a scalable state-space model (SSM), Mamba, for the speech enhancement (SE) task. The key points are:

  • Exploit a Mamba-based regression model to characterize speech signals and build an SE system upon Mamba, termed SEMamba
  • Explore the properties of Mamba by integrating it as the core model in both basic and advanced SE systems
  • Utilize signal-level distances as well as metric-oriented loss functions
  • SEMamba demonstrates promising results and attains a PESQ score of 3.55 on the VoiceBank-DEMAND dataset
  • When combined with the perceptual contrast stretching technique, the proposed SEMamba yields a new state-of-the-art PESQ score of 3.69

🙋 Q&A

[01] Mamba: Linear-Time Sequence Modeling with Selective State Spaces

1. What are the key enhancements introduced by Mamba compared to the traditional structured state-space model (SSM)?

  • Mamba incorporates an input-dependent selection mechanism that filters information efficiently by making the SSM parameters functions of the input data (see the sketch after this list).
  • Mamba introduces a hardware-aware algorithm that computes the model recurrently with a scan, scaling linearly with input sequence length.
  • The Mamba architecture, which integrates SSM blocks with linear layers, is simpler than attention-based designs and has demonstrated state-of-the-art performance across long-sequence domains, including language and genomics, with significant computational efficiency during both training and inference.
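As a rough illustration of the selection mechanism, below is a minimal NumPy sketch of one channel of a selective SSM. The weight names (w_dt, w_B, w_C) and the zero-order-hold discretization are simplifications for illustration; the actual Mamba kernel fuses these steps into a hardware-aware parallel scan.

```python
import numpy as np

def selective_scan(x, A, w_dt, w_B, w_C):
    """One channel of a selective SSM: the step size dt and the matrices
    B and C are recomputed from the input at every step, so the state
    update is input-dependent (the 'selection' mechanism).

    x: (T,) input channel; A: (N,) diagonal state matrix (negative real);
    w_dt: scalar, w_B / w_C: (N,) projection weights -- all illustrative.
    """
    h = np.zeros(len(A))
    y = np.empty(len(x))
    for t, x_t in enumerate(x):
        dt = np.logaddexp(0.0, w_dt * x_t)             # softplus -> positive step
        A_bar = np.exp(dt * A)                         # zero-order-hold discretization
        B_t = w_B * x_t                                # input-dependent B
        h = A_bar * h + (A_bar - 1.0) / A * B_t * x_t  # selective state update
        y[t] = (w_C * x_t) @ h                         # input-dependent readout C
    return y

# toy usage
rng = np.random.default_rng(0)
A = -np.arange(1.0, 5.0)                               # stable diagonal dynamics
y = selective_scan(rng.standard_normal(100), A, 0.5,
                   rng.standard_normal(4), rng.standard_normal(4))
```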

2. How does the Mamba model operate?

The Mamba model maps an input x to an output y through a higher-dimensional latent state h via the discretized state-space recurrence

  h_t = A h_{t-1} + B x_t
  y_t = C h_t + D x_t

where A, B, C, and D are the discretized state-space matrices (D acting as a skip connection).
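Expressed directly in code, the recurrence above can be sketched as follows (a minimal NumPy illustration with toy dimensions; in practice Mamba evaluates this in parallel over the sequence):

```python
import numpy as np

def ssm_recurrence(x, A, B, C, D):
    """Discretized state-space recurrence: h_t = A h_{t-1} + B x_t,
    y_t = C h_t + D x_t.

    x: (T,) input; A: (N, N); B, C: (N,); D: scalar skip connection.
    """
    h = np.zeros(A.shape[0])        # latent state, starts at zero
    y = np.empty(len(x))
    for t, x_t in enumerate(x):
        h = A @ h + B * x_t         # state update
        y[t] = C @ h + D * x_t      # readout plus skip connection
    return y

# toy usage: a stable SSM driven by white noise
rng = np.random.default_rng(0)
y = ssm_recurrence(rng.standard_normal(100),
                   0.9 * np.eye(4), rng.standard_normal(4),
                   rng.standard_normal(4), 1.0)
```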

[02] Mamba in Speech Enhancement

1. What are the two types of SEMamba solutions implemented in the paper?

  1. SEMamba-basic: Integrates Mamba with a basic speech enhancement (SE) model architecture.
  2. SEMamba-advanced: Follows the advanced MP-SENet architecture but uses a Mamba block in place of the attention-based modules (see the sketch below).
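Below is a hedged PyTorch sketch of the second variant's core idea, i.e. replacing an attention sub-layer with a Mamba block inside a Transformer-style SE block. It assumes the public mamba_ssm package; the layer sizes and the surrounding block structure are placeholders, not the paper's exact configuration.

```python
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (requires CUDA)

class MambaSEBlock(nn.Module):
    """Transformer-style block with the attention sub-layer swapped
    for a Mamba block (sizes are placeholders)."""

    def __init__(self, d_model=64):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.mamba = Mamba(d_model=d_model, d_state=16, d_conv=4, expand=2)
        self.norm2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(),
            nn.Linear(4 * d_model, d_model),
        )

    def forward(self, x):                  # x: (batch, time, d_model)
        x = x + self.mamba(self.norm1(x))  # sequence modeling via Mamba
        x = x + self.ffn(self.norm2(x))    # position-wise feed-forward
        return x
```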

2. What are the additional design choices explored for the SEMamba-advanced model?

  1. Transitioning from a uni-directional to a bi-directional Mamba block.
  2. Incorporating a consistency loss that minimizes the gap between the complex spectrum predicted by the model and the spectrum obtained after applying the inverse STFT and then re-applying the STFT (see the sketch after this list).
  3. Applying perceptual contrast stretching (PCS) as an auxiliary step following the enhancement process to refine the perceptual quality of the speech signal.
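Design choice 2 can be illustrated with a minimal PyTorch sketch; the STFT parameters and the plain L2 distance here are assumptions for illustration, not the paper's exact settings.

```python
import torch

def consistency_loss(spec, n_fft=400, hop=100):
    """Gap between a predicted complex spectrum and the spectrum obtained by
    resynthesizing it (inverse STFT) and re-analyzing it (STFT).

    spec: (batch, n_fft // 2 + 1, frames) complex tensor from the model.
    """
    window = torch.hann_window(n_fft)
    wav = torch.istft(spec, n_fft, hop_length=hop, window=window)
    re_spec = torch.stft(wav, n_fft, hop_length=hop, window=window,
                         return_complex=True)
    return (spec - re_spec).abs().pow(2).mean()

# toy usage: a random (hence inconsistent) complex spectrum
spec = torch.randn(2, 201, 50, dtype=torch.cfloat)
loss = consistency_loss(spec)
```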

3. How do the results of SEMamba compare to other state-of-the-art SE methods?

  • SEMamba without the consistency loss (-CL) performs comparably to MP-SENet, which uses a Conformer as its core model.
  • With the consistency loss added, SEMamba reaches a PESQ score of 3.55 on the VoiceBank-DEMAND dataset.
  • Additionally applying PCS further improves the PESQ score to 3.69, a new state of the art on this dataset.