Commit
Fix typo, update readme & minor efficiency improvement
Mike Campbell committed Nov 22, 2017
1 parent 14cd34a commit 342a259
Showing 3 changed files with 7 additions and 8 deletions.
9 changes: 4 additions & 5 deletions README.md
Original file line number Diff line number Diff line change
@@ -34,17 +34,16 @@ ruby examples/xor.rb

## TODO

So much. So much.
So much.

- Matrix calculations (WIP)
- Convenience methods for setting up standard network topologies, crucially,
layers
- Batch normalization/drop out/early stopping
layers (WIP)
- Batch normalization/drop out/early stopping (WIP, dep. on matrix)
- Hyperparameter optimisation
- Other adaptive learning rate algorithms (Adadelta, Adam, etc?)
- Explore matrix operations and other ways to optimise performance of algorithms
- RPROP?
- Use enumerable-statistics gem?
- Speed up by adding a reduce step to the parallel gem?
- More examples
- Tests

4 changes: 2 additions & 2 deletions lib/rann/backprop.rb
@@ -118,11 +118,11 @@ def self.run_single network, inputs, targets
# remove this push mechanism, shouldn't be necessary and uses extra memory.
incoming_deltas = Hash.new{ |h, k| h[k] = Hash.new{ |h, k| h[k] = [] } }

intial_timestep = inputs.size - 1
initial_timestep = inputs.size - 1
connection_stack =
network.output_neurons
.flat_map{ |n| network.connections_to n }
.map{ |c| [c, intial_timestep] }
.map{ |c| [c, initial_timestep] }

# maybe change this to traverse the static network timestep times if this
# proves too difficult to rationalise
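The `incoming_deltas` line in this hunk uses Ruby's Hash default-block idiom, so `h[neuron][timestep]` autovivifies a fresh Array on first access. A standalone sketch with illustrative keys (the outer block parameters are renamed here to avoid the shadowing in the original one-liner):

```ruby
# h[neuron_id][timestep] returns, and memoizes, a fresh Array on first access.
incoming_deltas = Hash.new { |h, k| h[k] = Hash.new { |h2, k2| h2[k2] = [] } }

initial_timestep = 2                          # e.g. inputs.size - 1
incoming_deltas[:neuron_a][initial_timestep] << 0.25
incoming_deltas[:neuron_a][initial_timestep] << -0.1
```

This is the "push mechanism" the comment above wants to remove: every delta is retained in an Array per neuron per timestep, which is why it costs extra memory.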
2 changes: 1 addition & 1 deletion lib/rann/neuron.rb
@@ -5,7 +5,7 @@
module RANN
class Neuron
ACTIVATION_FUNCTIONS = {
sig: ->(v){ 1.to_d.div(1 + (Math::E ** -v), 10) },
sig: ->(v){ 1.to_d.div(1 + Math::E.to_d.power(-v, 10), 10) },
tanh: ->(v){ Math.tanh(v).to_d(10) },
relu: ->(v){ [0.to_d, v].max },
linear: ->(v){ v },
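The sigmoid change in this hunk replaces `Math::E ** -v` (which goes through numeric coercion before exponentiating) with an explicit `BigDecimal#power(-v, 10)` call capped at 10 significant digits, presumably the "minor efficiency improvement" from the commit title. A standalone sketch of both variants, assuming `v` is a BigDecimal as elsewhere in the gem:

```ruby
require "bigdecimal"
require "bigdecimal/util"

# Old form: Float#** coerces Math::E and -v into a BigDecimal exponentiation.
sig_old = ->(v) { 1.to_d.div(1 + (Math::E ** -v), 10) }
# New form: cap the e**-v step itself at 10 significant digits.
sig_new = ->(v) { 1.to_d.div(1 + Math::E.to_d.power(-v, 10), 10) }
```

Both agree to well within the 10 digits the final `div` keeps; the new form just makes the intermediate precision explicit.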
