
Stochastic Layer-Wise Shuffle: A Good Practice to Improve Vision Mamba Training

🌈 Abstract

The paper proposes a stochastic layer-wise shuffle regularization (SLWS) to improve the training of vanilla Vision Mamba (Vim) models. The key ideas are:

  • Deeper layers are expected to output higher-level semantic representations, while shallower layers provide more low-level information. Accordingly, deeper layers should be more invariant to patch-position transformations, while shallower layers should retain positional sensitivity.
  • SLWS introduces randomness into Vim's sequential scanning by shuffling each layer's input token sequence with a layer-dependent probability. This enhances transformation invariance and makes the visual prediction task harder, thereby mitigating overfitting.
  • SLWS is a plug-and-play algorithm that does not change the model architecture and introduces only lightweight permutation operations, making it efficient.

🙋 Q&A

[01] Motivation and Intuition

1. What are the key motivations behind the proposed SLWS regularization?

  • The sequential scanning in SSM-based Vim models does not naturally match the priors of visual data, namely local neighborhood relationships and long-range global correlations.
  • Deeper layers of a vision encoder are expected to output higher semantic representations, while shallower layers provide more low-level information.
  • Deeper layers need higher transformation invariance for patch positions, while shallower layers should maintain positional sensitivity.
  • Adding disturbance to the basic sequential structure can help overcome the overfitting problem in Vim training.

2. How does the layer-wise probability assignment in SLWS reflect the semantic-level prior for model layers?

The probability of applying the shuffle regularization is an increasing function of the layer index. This reflects the assumption that deeper features are expected to be more semantic and should therefore be shuffled with higher probability to strengthen transformation invariance.
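
One simple way to write such a monotonically increasing schedule is a linear ramp over depth. This is an illustrative form only; the exact schedule and notation in the paper may differ, and $P_{\max}$ here just denotes the shuffle probability assigned to the deepest layer:

$$
P_l = \frac{l}{L}\, P_{\max}, \qquad l = 1, \dots, L,
$$

where $l$ indexes layers from shallow to deep and $L$ is the total number of layers, so the first layer is shuffled rarely and the last layer is shuffled with probability $P_{\max}$.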

[02] SLWS Algorithm

1. How does the SLWS algorithm work?

  • SLWS introduces a Bernoulli random variable to determine whether to apply the shuffle regularization to the input token sequence of a layer.
  • If the shuffle is applied, the input token sequence is randomly shuffled according to a uniform distribution.
  • The probability of applying the shuffle is assigned in a layer-dependent manner, with deeper layers having a higher probability.
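
The procedure can be summarized with a short PyTorch-style sketch. This is a minimal illustration rather than the authors' implementation: the wrapper name `ShuffledLayer`, the linear probability schedule, and the assumption that the original token order is restored after each block are all illustrative choices.

```python
import torch
import torch.nn as nn


class ShuffledLayer(nn.Module):
    """Wraps one Vim block with stochastic layer-wise shuffle (illustrative sketch)."""

    def __init__(self, block: nn.Module, layer_idx: int, num_layers: int, max_prob: float = 0.5):
        super().__init__()
        self.block = block
        # Deeper layers get a higher shuffle probability (assumed linear schedule).
        self.shuffle_prob = max_prob * (layer_idx + 1) / num_layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, dim); the regularization is active only during training.
        if not self.training or torch.rand(1).item() >= self.shuffle_prob:
            return self.block(x)

        n = x.size(1)
        perm = torch.randperm(n, device=x.device)      # uniform random permutation
        inv = torch.empty_like(perm)
        inv[perm] = torch.arange(n, device=x.device)   # inverse permutation

        out = self.block(x[:, perm, :])                # scan the shuffled token sequence
        return out[:, inv, :]                          # restore the original token order
```

Because the random draw is skipped in `eval()` mode, inference is identical to a vanilla Vim model; the only training-time cost is the index permutation and gather.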

2. What are the key advantages of the SLWS algorithm?

  • It is a plug-and-play regularization that does not change the model architecture and is omitted during inference.
  • It is simple but effective, introducing only random token permutation operations.
  • The layer-dependent probability assignment reflects an intuitive prior for modeling non-causal 2D visual data.
  • It adds only lightweight computational overhead, with negligible impact on training throughput.

[03] Experimental Results

1. How do the ShuffleMamba models perform compared to other backbones?

  • ShuffleMamba-B outperforms the supervised ViT-B by 0.4% on ImageNet-1K classification accuracy.
  • ShuffleMamba-Reg-L2 achieves state-of-the-art performance on ADE20K semantic segmentation and COCO object detection tasks.
  • The large-scale ShuffleMamba models can outperform similar-sized ViTs by 0.8-1.0% on ImageNet-1K classification.

2. What are the key findings from the ablation studies?

  • The layer-dependent probability assignment in SLWS is necessary, as constant probability assignment performs worse.
  • Shuffling the [CLS] token along with the input sequence slightly improves the performance.
  • SLWS has negligible impact on training throughput, causing less than 2% degradation across different input resolutions.

