Keras Tutorial #4 – LSTM Text Generation

This video shows how to build a model that can generate text with Keras, using an LSTM network.
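The video's own code is not reproduced here, but the character-level preprocessing such models typically use can be sketched as follows. The corpus, window length, and function names below are illustrative assumptions, not the video's actual values:

```python
import numpy as np

text = "hello world, hello keras. " * 20  # toy corpus; the video uses a full book
maxlen, step = 10, 3                      # window length and stride (assumed values)

# Character vocabulary and lookup tables
chars = sorted(set(text))
char_to_idx = {c: i for i, c in enumerate(chars)}
idx_to_char = {i: c for i, c in enumerate(chars)}

# Slice the text into overlapping windows plus the character that follows each
sentences, next_chars = [], []
for i in range(0, len(text) - maxlen, step):
    sentences.append(text[i:i + maxlen])
    next_chars.append(text[i + maxlen])

# One-hot encode: x has shape (samples, maxlen, vocab), y has shape (samples, vocab)
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=bool)
y = np.zeros((len(sentences), len(chars)), dtype=bool)
for i, sentence in enumerate(sentences):
    for t, c in enumerate(sentence):
        x[i, t, char_to_idx[c]] = True
    y[i, char_to_idx[next_chars[i]]] = True

def sample(preds, temperature=1.0):
    """Draw a character index from a softmax distribution, reshaped by temperature."""
    preds = np.log(np.asarray(preds, dtype=np.float64) + 1e-9) / temperature
    probs = np.exp(preds) / np.sum(np.exp(preds))
    return int(np.argmax(np.random.multinomial(1, probs)))
```

In Keras, the model on top of this data is typically an LSTM layer followed by a Dense softmax over `len(chars)` classes, trained on `(x, y)`; generation then feeds a seed window through the model and repeatedly calls a sampling function like `sample` above.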

Please subscribe. That would make me happy and encourage me to keep making my content better and better.

The code for this video:


Time series data is ubiquitous today. With the emergence of sensors and IoT devices, it spans every aspect of modern life, from basic household appliances to self-driving cars. Classification of time series is therefore of unique importance. With the advent of deep learning, there has been an influx of focus on Recurrent Neural Networks (RNNs) for sequence-related tasks, and rightly so. In this talk, I will attempt to explain the reasons for the success of RNNs on sequence data, and then turn to other techniques that should be considered when working on such problems. I will draw examples from the healthcare domain and delve into some of the other useful deep learning techniques and their applications.

Aditya Patel is the head of data science at Stasis and has 7+ years of experience spanning the fields of Machine Learning and Signal Processing. He graduated with a dual Master's degree in Biomedical and Electrical Engineering from the University of Southern California. He has presented his machine learning work at multiple peer-reviewed healthcare conferences across geographies. He also contributed to the first-generation "Artificial Pancreas" project at Medtronic, Los Angeles. In his current role he is leading the adoption of smart hospitals in Indian healthcare.

16 Replies to "Keras Tutorial #4 – LSTM Text Generation"

  1. Hi Tanner. In Keras we have two functions for tokens when we preprocess text: texts_to_matrix and texts_to_sequences. Could you explain in detail when to use which? Maybe it's a topic for a video. Preprocessing is always the hardest part and I would love to see more on this. Thanks a lot and best regards

  2. Ok I found out:

    in the written version these 2 lines are missing:

    chars = sorted(list(set(text)))
    print('total chars: ', len(chars))

    So I thought that you had just (accidentally) switched from "text" to "chars". So I replaced the "chars" variables in the rest of the code with "text". No wonder it fucked up my PC 😀

  3. This line kills my machines (my pc with 16GB Ram, and google COLAB):

    x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)

    how did you run the whole book?
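The memory footprint of that allocation is easy to estimate, which also shows why the missing `chars = sorted(list(set(text)))` line from the previous comment matters so much: with `chars`, the last axis is the vocabulary size (a few dozen characters), but substituting `text` makes it the full corpus length. A boolean NumPy array costs one byte per element (note that modern spelling is `dtype=bool`; `np.bool` has been removed from NumPy). The corpus sizes below are illustrative assumptions, not the video's actual numbers:

```python
import numpy as np

# Illustrative sizes for a book-sized corpus (assumptions, not the video's data)
text_len = 600_000                         # characters in the corpus
maxlen, step = 40, 3
n_sentences = (text_len - maxlen) // step  # number of training windows
vocab = 60                                 # len(chars): distinct characters

bytes_per_bool = np.dtype(bool).itemsize   # 1 byte per element

# Correct allocation: last axis is the vocabulary size
correct = n_sentences * maxlen * vocab * bytes_per_bool

# The accidental version: last axis is len(text) instead of len(chars)
blown_up = n_sentences * maxlen * text_len * bytes_per_bool

print(f"correct : {correct / 1e9:.2f} GB")   # ~0.5 GB -- fits in RAM
print(f"blown up: {blown_up / 1e9:.0f} GB")  # thousands of GB -- kills any PC
```

Under these assumptions the mistaken array is `text_len / vocab` (here 10,000×) larger than the correct one; if even the correct array is too big, the standard workaround is to generate one-hot batches on the fly in a data generator instead of materializing the full tensor.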
