Commit

Updated v.2.2.1
Renamed deltaValue to deltaWeight.
Kalvar committed Jun 5, 2017
1 parent 119d62d commit 49e0be1
Showing 4 changed files with 14 additions and 14 deletions.
4 changes: 2 additions & 2 deletions KRMLP.podspec
@@ -1,6 +1,6 @@
Pod::Spec.new do |s|
s.name = "KRMLP"
-s.version = "2.2.0"
+s.version = "2.2.1"
s.summary = "Deep Learning for multi-layer perceptrons neural network (MLP)."
s.description = <<-DESC
Machine Learning (マシンラーニング) in this project, it implemented multi-layer perceptrons neural network (ニューラルネットワーク) and Back Propagation Neural Network (BPN). It designed unlimited hidden layers to do the training tasks. This network can be used in products recommendation (おすすめの商品), user behavior analysis (ユーザーの行動分析), data mining (データマイニング) and data analysis (データ分析).
@@ -10,7 +10,7 @@ Pod::Spec.new do |s|
s.author = { "Kalvar Lin" => "ilovekalvar@gmail.com" }
s.social_media_url = "https://twitter.com/ilovekalvar"
s.source = { :git => "https://github.com/Kalvar/ios-Multi-Perceptron-NeuralNetwork.git", :tag => s.version.to_s }
-s.platform = :ios, '7.0'
+s.platform = :ios, '9.0'
s.requires_arc = true
s.public_header_files = 'ML/**/*.h'
s.source_files = 'ML/**/*.{h,m}'
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,4 +1,4 @@
-Copyright (c) 2013 - 2016 Kuo-Ming Lin (Kalvar Lin)
+Copyright (c) 2013 - 2017 Kuo-Ming Lin (Kalvar Lin)

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
10 changes: 5 additions & 5 deletions ML/Optimizations/Inertia/KRMLPInertia.m
@@ -39,7 +39,7 @@ - (instancetype)init

- (double)deltaWeightAtIndex:(NSInteger)weightIndex net:(KRMLPNet *)net mappedOutput:(double)mappedOutput learningRate:(double)learningRate
{
-double deltaValue = learningRate * net.deltaValue * mappedOutput;
+double deltaWeight = learningRate * net.deltaValue * mappedOutput;
if( net.updatedTimes > 0 )
{
double lastDeltaWeight = [[net.lastDeltaWeights objectAtIndex:weightIndex] doubleValue];
@@ -71,24 +71,24 @@ - (double)deltaWeightAtIndex:(NSInteger)weightIndex net:(KRMLPNet *)net mappedOu
{
dynamicLearningRate = 0.0f;
}
-deltaValue += dynamicLearningRate * lastDeltaWeight;
+deltaWeight += dynamicLearningRate * lastDeltaWeight;
}
break;
case KRMLPInertialRProp: // Todo: RProp should outperform QuickProp here, since QuickProp is prone to overfitting.
case KRMLPInertialFixedRate:
default:
// delta w(ji) = L * delta value * y + fixed inertial rate * last delta w(ji)
-deltaValue += _inertialRate * lastDeltaWeight;
+deltaWeight += _inertialRate * lastDeltaWeight;
break;
}
}
else
{
// This is the first time the weights are being updated,
// so just apply the original backpropagation formula without any optimization.
-// deltaValue = learningRate * net.deltaValue * mappedOutput;
+// deltaWeight = learningRate * net.deltaValue * mappedOutput;
}
-return deltaValue;
+return deltaWeight;
}

@end
12 changes: 6 additions & 6 deletions README.md
@@ -6,8 +6,8 @@ Machine Learning (マシンラーニング) in this project, it implemented mult
#### Podfile

```ruby
-platform :ios, '7.0'
-pod "KRMLP", "~> 2.2.0"
+platform :ios, '9.0'
+pod "KRMLP", "~> 2.2.1"
```

## How to use
@@ -275,15 +275,15 @@ QuickProp:

## Version

-V2.2.0
+V2.2.1

## License

MIT.

## Todolist

-1. RProp.
-2. Mixes fixed inertia and QuickProp.
+1. RMSProp.
+2. Adam.
3. EDBD.
-4. Protocol implementations.
+4. Nadam.
