Got ValueError when

Environment info

  • transformers version: 4.4.2
  • Platform: Colab
  • Python version: 3.7
  • PyTorch version (GPU?): 1.8.1+cu101
  • Tensorflow version (GPU?):
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help

@sgugger @LysandreJik

Information

Model I am using (Bert, XLNet …):

The problem arises when using:

  • the official example scripts: (give details below)
  • my own modified scripts: (give details below)

The task I am working on is:

  • an official GLUE/SQuAD task: (give the name)
  • my own task or dataset: (give details below)

To reproduce

Hello,
I'm trying to use the fine-tuning code with my own model, and I get a ValueError like the one below when evaluating with eval_accumulation_steps set in TrainingArguments and output_hidden_states=True in the model config.

If I set output_hidden_states=False (which, as far as I know, is the default), the error disappears.
I don't need output_hidden_states, but I'm reporting this because I think evaluation should still work when output_hidden_states=True.

I'm sharing a Colab notebook that reproduces the bug using the official transformers GLUE example.

Thanks in advance!
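For context, a minimal sketch of the configuration that triggers the error (the full reproducer is in the Colab; the model name, output_dir, and batch size below are placeholder choices, not the notebook's exact values):

```python
# Sketch of the triggering setup (not run here; requires model downloads).
# "bert-base-uncased" and "out" are placeholders.
from transformers import (
    AutoConfig,
    AutoModelForSequenceClassification,
    TrainingArguments,
)

config = AutoConfig.from_pretrained("bert-base-uncased", output_hidden_states=True)
model = AutoModelForSequenceClassification.from_config(config)

args = TrainingArguments(
    output_dir="out",
    per_device_eval_batch_size=16,
    eval_accumulation_steps=2,  # offload accumulated tensors to CPU every 2 steps
)

# trainer = Trainer(model=model, args=args, eval_dataset=..., compute_metrics=...)
# trainer.evaluate()  # raises the ValueError below when hidden states are returned
```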

ValueError                                Traceback (most recent call last)
<ipython-input-26-f245b31d31e3> in <module>()
----> 1 trainer.evaluate()

/usr/local/lib/python3.7/dist-packages/transformers/trainer_pt_utils.py in _nested_set_tensors(self, storage, arrays)
    392             else:
    393                 storage[self._offsets[i] : self._offsets[i] + slice_len, : arrays.shape[1]] = arrays[
--> 394                     i * slice_len : (i + 1) * slice_len
    395                 ]
    396         return slice_len

ValueError: could not broadcast input array from shape (16,22,768) into shape (16,19,768)
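The shapes in the error hint at the cause: hidden states come out as (batch, seq_len, hidden), and under dynamic padding seq_len is the length of the longest sequence in each batch, so it varies from batch to batch. A small pure-Python sketch of the mismatch (the real failure happens in numpy's slice assignment inside trainer_pt_utils):

```python
def store_batch(storage_shape, batch_shape):
    """Mimic writing a batch of hidden states into storage that was
    preallocated from an earlier batch's shape; mirrors numpy's
    broadcast failure when the padded sequence lengths disagree."""
    if storage_shape[1] != batch_shape[1]:  # seq_len differs across batches
        raise ValueError(
            "could not broadcast input array from shape "
            f"{batch_shape} into shape {storage_shape}"
        )

store_batch((16, 19, 768), (16, 19, 768))  # same padded length: fine
try:
    store_batch((16, 19, 768), (16, 22, 768))  # later batch padded to 22 tokens
except ValueError as err:
    print(err)
```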

Expected behavior

Evaluation should complete without errors when eval_accumulation_steps is set and output_hidden_states=True.

Comments

  1. I can reproduce and see where this is coming from. The fix is not particularly easy, will try to have something ready by the end of the week.

    Thanks for flagging this and for the nice reproducer!

  2. Ok, the PR mentioned above fixes the problem. Note that for the notebook to run, the compute_metrics function needs to be changed a bit: the predictions will be a tuple and the argmax will fail. Adding the line

    if isinstance(predictions, (tuple, list)):
        predictions = predictions[0]
    

    inside compute_metrics solves that problem.
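Putting the suggested line in context, here is a hedged sketch of a GLUE-style compute_metrics with the fix applied. The function name and eval_pred unpacking follow the usual Trainer convention; a pure-Python argmax stands in for the notebook's numpy call, so the tuple check here is narrower than the snippet above (plain lists of logits would otherwise be unwrapped by mistake):

```python
def compute_metrics(eval_pred):
    """Accuracy over argmax predictions, with the tuple-unwrapping fix."""
    predictions, labels = eval_pred
    # With output_hidden_states=True the Trainer hands us a tuple
    # (logits, hidden_states); take the logits before the argmax,
    # otherwise the argmax over the raw tuple fails.
    if isinstance(predictions, tuple):
        predictions = predictions[0]
    # Pure-Python argmax per row (the notebook uses numpy's argmax).
    preds = [max(range(len(row)), key=row.__getitem__) for row in predictions]
    correct = sum(p == y for p, y in zip(preds, labels))
    return {"accuracy": correct / len(labels)}

# Works both with bare logits and with a (logits, hidden_states) tuple:
logits = [[0.1, 0.9], [0.8, 0.2]]
print(compute_metrics((logits, [1, 0])))                     # {'accuracy': 1.0}
print(compute_metrics(((logits, "hidden_states"), [1, 0])))  # {'accuracy': 1.0}
```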