PyTorch Lecture 12: RNN1 Basics

PyTorch Zero To All Lecture by Sung Kim at HKUST


19 replies to "PyTorch Lecture 12: RNN1 Basics"

  1. Trying to implement the code as shown here has been confusing, because the code changes from one slide to the next; why not keep it uniform throughout? It's good for understanding the concepts, but not if you are copying the code to run it yourself.

  2. I get confused by the input shape. The official docs say "input of shape (seq_len, batch, input_size)", but in your example seq_len and batch are flipped.
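
    For reference, `nn.RNN` defaults to the `(seq_len, batch, input_size)` layout from the docs; passing `batch_first=True` gives the flipped `(batch, seq_len, input_size)` layout seen on the slides. A minimal sketch (the sizes here are illustrative, not taken from the lecture code):

    ```python
    import torch
    import torch.nn as nn

    # Default layout: input is (seq_len, batch, input_size).
    rnn = nn.RNN(input_size=4, hidden_size=2, num_layers=1)
    inputs = torch.randn(5, 1, 4)    # seq_len=5, batch=1, input_size=4
    hidden = torch.zeros(1, 1, 2)    # (num_layers, batch, hidden_size)
    out, hidden_out = rnn(inputs, hidden)
    print(out.shape)                 # torch.Size([5, 1, 2])

    # batch_first=True swaps the first two dims, matching the slides.
    rnn_bf = nn.RNN(input_size=4, hidden_size=2, num_layers=1, batch_first=True)
    inputs_bf = torch.randn(1, 5, 4) # batch=1, seq_len=5, input_size=4
    out_bf, hidden_bf = rnn_bf(inputs_bf, hidden)
    print(out_bf.shape)              # torch.Size([1, 5, 2])
    ```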

  3. Hey man, thank you so much for all the amazing video lectures.
    Unfortunately, in the first true RNN example (12_2 on github) it throws "dimension specified as 0 but tensor has no dimensions" when computing the loss for me. (loss += criterion(output, label) @ line 78)
    Do you have a solution for this?
    Keep up the amazing work!
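
    This error typically means a 0-dimensional tensor was passed as the target to `CrossEntropyLoss`, which expects a target with a batch dimension when the input is `(N, num_classes)`. A minimal sketch of one fix (variable names are illustrative, not taken from the 12_2 repo code):

    ```python
    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()

    output = torch.randn(1, 5)   # (batch=1, num_classes=5) logits for one step
    label = torch.tensor(3)      # 0-dim tensor: this shape triggers the error

    # Giving the target an explicit batch dimension avoids it:
    loss = criterion(output, label.view(1))
    print(loss.item())
    ```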

  4. Thanks for a very informative and clear video. However, I had one question: at 15:38 in the video, is there something I'm missing, or should the outputs of the RNN at e, l, l be [0,0,1,0,0], [0,0,0,1,0] and [0,0,0,1,0] respectively?

  5. Thank you for the great lecture!
    But I have a question regarding the last part of your code, where you were trying to use whole words rather than feeding them in word by word.
    I tried that, and I get this error: RuntimeError: log_softmax(): argument 'input' (position 1) must be Variable, not tuple

    It also makes sense to me, because 'outputs = model(inputs, hidden)' will give a tuple value. I wonder: did you make some modifications in the Model class file? If so, could you please tell me what I should modify?
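
    `nn.RNN`'s forward does return an `(output, hidden)` tuple, so passing its result straight into `log_softmax` raises exactly this error; unpacking the tuple avoids it. A minimal sketch (this is not the lecture's Model class, just a bare `nn.RNN` with illustrative sizes):

    ```python
    import torch
    import torch.nn as nn

    rnn = nn.RNN(input_size=5, hidden_size=5, batch_first=True)

    inputs = torch.randn(1, 6, 5)  # one sequence of 6 steps, one-hot-sized
    hidden = torch.zeros(1, 1, 5)  # (num_layers, batch, hidden_size)

    # rnn(...) returns (output, hidden); keeping the whole tuple and feeding
    # it to log_softmax gives "must be ..., not tuple". Unpack it instead:
    outputs, hidden = rnn(inputs, hidden)
    log_probs = torch.log_softmax(outputs, dim=2)
    print(log_probs.shape)         # torch.Size([1, 6, 5])
    ```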
