Speech2TextForConditionalGeneration broken in transformers 4.51.x #37874

Closed · 1 of 4 tasks
aaron-siegel opened this issue Apr 29, 2025 · 3 comments · Fixed by #37931

aaron-siegel commented Apr 29, 2025

System Info

  • transformers version: 4.51.3
  • Platform: macOS-15.3.1-arm64-arm-64bit
  • Python version: 3.12.9
  • Huggingface_hub version: 0.30.2
  • Safetensors version: 0.5.3
  • Accelerate version: not installed
  • Accelerate config: not found
  • DeepSpeed version: not installed
  • PyTorch version (GPU?): 2.7.0 (False)
  • Tensorflow version (GPU?): not installed (NA)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using distributed or parallel set-up in script?:

Who can help?

Running the Speech2Text example on transformers 4.51.x produces either nonsense output or no output. The code I'm running is taken verbatim from https://huggingface.co/docs/transformers/en/model_doc/speech_to_text:

import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")


ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription

On transformers 4.50.3 it gives the expected output:

['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']

On transformers 4.51.x it gives either no output or nonsense output:

With Python 3.12 & transformers 4.51.3:

['that man man man man man man man man man man man man turn turn turn turn turn turn turn turn turn thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin thin']

With Python 3.9 & transformers 4.51.3:

['']

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

conda create --name temp python=3.12
conda activate temp
pip install torch torchaudio soundfile librosa datasets transformers sentencepiece

import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-librispeech-asr")


ds = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")

inputs = processor(ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], return_tensors="pt")
generated_ids = model.generate(inputs["input_features"], attention_mask=inputs["attention_mask"])

transcription = processor.batch_decode(generated_ids, skip_special_tokens=True)
transcription
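
Not part of the original reproducer: a quick way to turn the snippet above into a pass/fail check is to compare against the expected transcript quoted in "Expected behavior" below.

expected = "mister quilter is the apostle of the middle classes and we are glad to welcome his gospel"
# On 4.50.3 this passes; on 4.51.x it fails with the repeated-token or empty output shown above.
assert transcription[0] == expected, f"unexpected transcription: {transcription[0]!r}"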

Expected behavior

['mister quilter is the apostle of the middle classes and we are glad to welcome his gospel']

Rocketknight1 commented Apr 30, 2025

I can reproduce the issue. cc @gante for generation and @eustlb for audio models. I noticed a missing-weights warning on load:

Some weights of Speech2TextForConditionalGeneration were not initialized from the model checkpoint at facebook/s2t-small-librispeech-asr and are newly initialized: ['model.decoder.embed_positions.weights', 'model.encoder.embed_positions.weights']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference

However, this warning also appears on 4.50, where generation is correct, so I don't think it's relevant.
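
For reference, a minimal sketch (not from the original thread) that surfaces the same information programmatically, using the output_loading_info flag of from_pretrained:

from transformers import Speech2TextForConditionalGeneration

# Returns (model, loading_info); loading_info lists checkpoint keys that were missing
# and therefore newly initialized, which should include the two
# embed_positions.weights entries from the warning above.
model, loading_info = Speech2TextForConditionalGeneration.from_pretrained(
    "facebook/s2t-small-librispeech-asr", output_loading_info=True
)
print("missing keys:", loading_info["missing_keys"])
print("unexpected keys:", loading_info["unexpected_keys"])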


gante commented May 2, 2025

@aaron-siegel thank you for the clear reproducer 💛

It seems like #36963 [updated weight init] is the first bad commit, so it may indeed be related to the warning @Rocketknight1 noticed 👀 Having a look.
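
A rough local check of whether the positional-embedding init actually changed behavior (a sketch, not part of the original thread; it assumes the module path matches the parameter names in the warning above, and that a sinusoidal table is deterministic across loads while a random re-init is not):

import torch
from transformers import Speech2TextForConditionalGeneration

# Load the checkpoint twice and compare the encoder's positional-embedding tables.
# If they are built sinusoidally (as expected), both loads should be identical;
# if they are randomly re-initialized, they will generally differ.
m1 = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
m2 = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-librispeech-asr")
same = torch.equal(m1.model.encoder.embed_positions.weights,
                   m2.model.encoder.embed_positions.weights)
print("encoder positional embeddings identical across loads:", same)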


gante commented May 2, 2025

^ that PR (#37931) fixes it 🤗
