Awesome, awesome!
Why didn't you use a DataLoader to train over mini-batches?
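For batching, something like this would work (a rough sketch; X_train, y_train, model, optimizer, loss_fn, and num_epochs are assumed from the tutorial):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(X_train, y_train)
loader = DataLoader(dataset, batch_size=16, shuffle=False)  # keep temporal order

for epoch in range(num_epochs):
    for X_batch, y_batch in loader:
        optimizer.zero_grad()
        model.reset_hidden_state()  # if the model caches state per window
        y_pred = model(X_batch)
        loss = loss_fn(y_pred.float(), y_batch)
        loss.backward()
        optimizer.step()
```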
Do we need to call reset_hidden_state when predicting future cases?
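In the forecasting loop it could look like this (just a sketch, assuming the model outputs one value per window and that test_seq, seq_length, and days_to_predict exist as in the tutorial):

```python
import torch

preds = []
seq = test_seq  # shape (1, seq_length, 1)
with torch.no_grad():
    for _ in range(days_to_predict):
        model.reset_hidden_state()  # start each window from a zeroed state
        y_hat = model(seq)
        preds.append(y_hat.item())
        # slide the window: drop the oldest value, append the prediction
        seq = torch.cat([seq.flatten()[1:], y_hat.flatten()]).view(1, seq_length, 1)
```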
It looks to me like your nn.LSTM takes input with shape (batch, seq_length, input_size), which would require the LSTM parameter batch_first=True, yet I don't see you explicitly set it. Also, according to the PyTorch LSTM docs (https://pytorch.org/docs/master/generated/torch.nn.LSTM.html), the shape of h0 and c0, which are the tensors in your reset_hidden_state function, should be (num_layers, batch, hidden_size), but again you have swapped what is supposed to be batch with seq_length. I am a little confused here; do you mind explaining this a bit? Thank you very much.
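For reference, a small shape check with toy dimensions (not the tutorial's) illustrates the point:

```python
import torch
import torch.nn as nn

batch, seq_length, input_size, hidden_size, num_layers = 4, 7, 1, 64, 2

lstm = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
x = torch.randn(batch, seq_length, input_size)    # (batch, seq, feature) needs batch_first=True
h0 = torch.zeros(num_layers, batch, hidden_size)  # h0/c0 are (num_layers, batch, hidden) either way
c0 = torch.zeros(num_layers, batch, hidden_size)

out, (hn, cn) = lstm(x, (h0, c0))
print(out.shape)  # torch.Size([4, 7, 64])
```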
Two quick questions: (1) In the train_model function, line 15 uses y_pred = model(X_train), but the function's parameters are train_data and train_label, and those two parameters are never used. Shouldn't train_data be used instead of X_train on line 15, and train_label instead of y_train on line 17 of this block? (2) Why is model.train() never called? Is it not necessary? My apologies, I am new to LSTMs and deep learning in general. Excellent tutorial, by the way. Thank you!
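Here's a sketch of the fix being described, under the assumption that the tutorial's model exposes reset_hidden_state:

```python
import torch

def train_model(model, train_data, train_label, num_epochs=60):
    loss_fn = torch.nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    model.train()  # a no-op here, but matters once dropout/batch norm are added
    for epoch in range(num_epochs):
        model.reset_hidden_state()
        y_pred = model(train_data)                   # was model(X_train)
        loss = loss_fn(y_pred.float(), train_label)  # was y_train
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model.eval()
```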
I have a question: when you take the differences, why do you substitute the NaN, which is the first value in the differences table, with the first value in the daily table? If I understood correctly, you are putting a completely different data point into your frame. I personally just dropped the first value; I don't think it makes a big difference. Another option may be to use the first nonzero value in the diff table. Great tutorial! Thank you.
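For anyone wanting to try that, dropping the leading NaN is a one-liner (assuming daily_cases is the tutorial's pandas Series of daily counts):

```python
daily_diff = daily_cases.diff().dropna()  # drop the leading NaN instead of back-filling it
```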
Why use an LSTM when the model is supposed to be stateless? What's the use of the LSTM then?
How do you find out the accuracy of this model? And what is meant by the "full training" model?
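One common way to quantify fit for a regression forecast is RMSE on held-out data (a sketch; preds and y_test are assumed to be tensors on the original, un-scaled scale):

```python
import torch

rmse = torch.sqrt(torch.mean((preds - y_test) ** 2))
print(f'Test RMSE: {rmse.item():.2f} cases')
```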
Do you need to reset the hidden state? I think PyTorch does it automatically if you don't pass one explicitly.
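That matches the docs: if (h0, c0) isn't passed, nn.LSTM defaults both to zeros. A quick check with toy sizes:

```python
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=1, hidden_size=8, num_layers=1, batch_first=True)
x = torch.randn(2, 5, 1)

out_default, _ = lstm(x)  # hidden state defaults to zeros
h0 = torch.zeros(1, 2, 8)
c0 = torch.zeros(1, 2, 8)
out_explicit, _ = lstm(x, (h0, c0))
print(torch.allclose(out_default, out_explicit))  # True
```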
Well, thank you. I got some insight from you and implemented all the code over two days. Thanks a lot. It's pandemic data, so the predictions are not going to be perfect.
Any idea whether the WHO has an API for coronavirus data?
You deserve millions of views and subscribers. Thanks for sharing this information. Do you have any online course?
Great explanation. In my opinion, you should take the last sequence of the training data as the starting sequence for future prediction: instead of test_seq = X_all[:1], it should be test_seq = X_all[-1:].
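In code, the suggested change is just the slice (the shape stays (1, seq_length, n_features)):

```python
test_seq = X_all[-1:]  # most recent window, instead of X_all[:1] (the oldest one)
```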
Well structured and well explained. Thank you for putting this together.
This is a very interesting application for an interesting type of model. However, I think a typical ARIMA time-series statistical model, rather than a neural network, would be much more appropriate in this instance. There just isn't a lot of data for this virus, so an ARIMA model would have the advantage and would most likely produce more accurate results. Still, this was an extremely informative video, so I do appreciate the upload.
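For anyone curious, a minimal ARIMA baseline could look like this (a sketch; daily_cases is assumed to be a pandas Series of daily counts, and the (p, d, q) order is arbitrary, not tuned):

```python
from statsmodels.tsa.arima.model import ARIMA

model = ARIMA(daily_cases, order=(2, 1, 2))  # d=1 differences the series once
fit = model.fit()
forecast = fit.forecast(steps=12)  # 12 days ahead
print(forecast)
```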
Using an LSTM to predict this is just pure garbage. No offense; the tutorial itself is great, though.
Lol, this model is horrible :)
Great Tutorial. Learned a lot. Thank you so much.
Could you comment on why the training loss is so much larger than the testing loss? This is counterintuitive, although perhaps it is explained by the spike due to the change in CCP reporting policy, which occurs in the training data but not in the testing data. I suspect that if you look at the loss contribution from each date, you will find the overwhelming majority of the training loss comes from the days around that policy change.
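A quick way to check that would be to rank the per-day squared errors (a sketch; model, X_train, and y_train are assumed from the tutorial):

```python
import torch

with torch.no_grad():
    preds = model(X_train)
per_day = (preds.squeeze() - y_train.squeeze()) ** 2  # squared error per day
top = torch.topk(per_day, k=5)
print(top.values, top.indices)  # the days dominating the training loss
```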
Hello, thanks for the tutorial. May I ask a question: at 41:38, where does the 512 come from?