Tutorial: Deep Reinforcement Learning for Algorithmic Trading in Python

In this tutorial, we’ll see an example of deep reinforcement learning for algorithmic trading using BTGym (an OpenAI Gym environment API for the backtrader backtesting library) and a DQN algorithm from a Medium post (link below) that interacts with the environment and does the trading.
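The core of the setup is the standard Gym interaction loop: the agent observes the environment, picks an action, and receives a reward. Below is a minimal sketch of that loop with a toy stand-in environment (the class, its dynamics, and the reward are hypothetical placeholders, not BTGym's actual API or real market data):

```python
import random


class ToyTradingEnv:
    """Minimal stand-in for a BTGym episode (hypothetical dynamics)."""
    ACTIONS = ("hold", "buy", "sell")

    def __init__(self, length=10):
        self.length = length  # number of steps per episode

    def reset(self):
        self.t = 0
        return [0.0]  # observation: e.g. a vector of price features

    def step(self, action):
        self.t += 1
        reward = random.uniform(-1, 1)   # stand-in for realized PnL
        done = self.t >= self.length
        return [float(self.t)], reward, done, {}


def run_episode(env, policy):
    """The standard Gym interaction loop a DQN agent plugs into."""
    obs, done, total = env.reset(), False, 0.0
    while not done:
        action = policy(obs)             # DQN: epsilon-greedy over Q(obs, ·)
        obs, reward, done, _ = env.step(action)
        total += reward
    return total
```

A real DQN replaces the `policy` callable with an epsilon-greedy choice over the network's Q-values and stores each `(obs, action, reward, next_obs, done)` transition for replay.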

Access to the code: https://gist.github.com/arsalanaf/d10e0c9e2422dba94c91e478831acb12

Telegram Group: https://t.me/joinchat/DmGkrhIE_g6Mk-zJS6sWgA

Links:
OpenAI Gym: https://gym.openai.com/
BTGym: https://github.com/Kismuz/btgym
backtrader: https://www.backtrader.com/
TensorForce: https://github.com/reinforceio/tensorforce
Bitcoin TensorForce Trading Bot: https://github.com/lefnire/tforce_btc_trader
Self Learning Quant: https://hackernoon.com/the-self-learning-quant-d3329fcc9915
DQN: https://towardsdatascience.com/reinforcement-learning-w-keras-openai-dqns-1eed3a5338c

15 replies to "Tutorial: Deep Reinforcement Learning for Algorithmic Trading in Python"

  1. you're ignoring "replay", that's where the whole learning is happening. In "target_train", you're only updating the target network weights. That's doing no learning whatsoever. Good job /facepalm

  2. Hi, thanks for the tutorial. I have this error: 'ActionDictSpace' object has no attribute 'n'. Any idea? :) The issue seems to come from: model.add(Dense(self.env.action_space.n))…

  3. I know this is not the place to ask error questions. But, have you run into this error:

    AssertionError:
    State observation shape/range mismatch!
    Space set by env:

    This happens in env.reset().
    Appreciate your help.
