MachineLearning/dqn library
Deep Q-Network (DQN) - minimal numeric example
A compact DQN wrapper that uses the project's ANN MLP as a function
approximator. This implementation is intentionally small and educational —
it demonstrates replay memory, epsilon-greedy action selection, and periodic updates
so it can be used in examples and unit tests without GPU dependencies.
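The two mechanisms named above are small enough to show inline. The sketch below is illustrative only and assumes nothing about this library's API: the names (Transition, ReplayBuffer, selectAction) are hypothetical, but the logic shows a bounded replay memory and epsilon-greedy selection over a vector of Q-values, the pieces the wrapper combines with the ANN MLP approximator.

```dart
import 'dart:collection';
import 'dart:math';

// Illustrative sketch (not this library's API): a bounded replay memory and
// epsilon-greedy selection over Q-values.
class Transition {
  final List<double> state;
  final int action;
  final double reward;
  final List<double> nextState;
  final bool done;
  Transition(this.state, this.action, this.reward, this.nextState, this.done);
}

class ReplayBuffer {
  final int capacity;
  final Queue<Transition> _memory = Queue<Transition>();
  final Random _rng = Random();
  ReplayBuffer(this.capacity);

  void push(Transition t) {
    if (_memory.length >= capacity) _memory.removeFirst(); // drop oldest
    _memory.addLast(t);
  }

  List<Transition> sample(int batchSize) {
    final items = _memory.toList()..shuffle(_rng);
    return items.take(batchSize).toList();
  }
}

// Epsilon-greedy choice over the Q-values produced by the approximator.
int selectAction(List<double> qValues, double epsilon, Random rng) {
  if (rng.nextDouble() < epsilon) return rng.nextInt(qValues.length); // explore
  var best = 0;
  for (var a = 1; a < qValues.length; a++) {
    if (qValues[a] > qValues[best]) best = a;
  }
  return best; // exploit
}

void main() {
  final rng = Random(0);
  final buffer = ReplayBuffer(1000)
    ..push(Transition([0.0, 1.0], 1, 1.0, [0.5, 0.5], false));
  print(selectAction([0.1, 0.7, 0.2], 0.0, rng)); // 1 (greedy pick)
  print(buffer.sample(1).length); // 1
}
```

In the wrapper these pieces sit behind the DQN class listed under Classes below.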
Notes:
- States are represented as List&lt;double&gt;
- Actions are integer indices
Classes
- DQN: a DQN wrapper with an optional target network and a configurable replay buffer
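For context on the periodic updates and the optional target network, the standard DQN training target for a sampled transition is r + gamma * max over the target network's Q-values for the next state, with terminal transitions using the reward alone. A minimal sketch follows; the function name and the default gamma are illustrative assumptions, not parameters of this library.

```dart
import 'dart:math';

// Illustrative TD-target computation for one sampled transition; the function
// name and default gamma are assumptions, not part of this library's API.
double tdTarget(double reward, bool done, List<double> nextQValues,
    {double gamma = 0.99}) {
  if (done) return reward; // no bootstrapping past a terminal state
  return reward + gamma * nextQValues.reduce(max); // r + gamma * max_a' Q(s', a')
}

void main() {
  // nextQValues would come from the target network's forward pass.
  print(tdTarget(1.0, false, [0.2, 0.8, 0.5])); // ~1.79
}
```

The online network is then trained toward this target; when the target network is disabled, the online network's own outputs take its place.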