
Recently, I came across a paper called Random feedback weights support learning in deep neural networks. I found it fascinating that the authors were able to match the performance of the backpropagation algorithm by propagating the error backward through fixed random weights, instead of the transpose of the weights used for the forward pass. Yes, random weights! The idea is motivated by a biological constraint: the brain has no plausible mechanism for delivering the exact, weight-symmetric error signals that backpropagation requires.
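To make the idea concrete, here is a minimal NumPy sketch of feedback alignment on a toy regression task. The network size, the task, and the learning rate are my own illustrative choices, not the paper's experimental setup; the key line is the backward pass, where `B @ e` replaces the usual `W2.T @ e`.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: learn a random linear map (illustrative, not from the paper).
n_in, n_hidden, n_out = 30, 20, 10
T = rng.standard_normal((n_out, n_in))       # target mapping
X = rng.standard_normal((n_in, 256))         # inputs (features x samples)
Y = T @ X                                    # targets

# Forward weights (trained) and a fixed random feedback matrix B (never updated).
W1 = rng.standard_normal((n_hidden, n_in)) * 0.1
W2 = rng.standard_normal((n_out, n_hidden)) * 0.1
B = rng.standard_normal((n_hidden, n_out))

lr = 1e-3
for step in range(2001):
    # Forward pass with a tanh hidden layer.
    h = np.tanh(W1 @ X)
    y_hat = W2 @ h

    e = y_hat - Y                            # output error

    # Feedback alignment: send the error back through B instead of W2.T.
    delta_h = (B @ e) * (1 - h ** 2)         # tanh derivative

    W2 -= lr * (e @ h.T) / X.shape[1]
    W1 -= lr * (delta_h @ X.T) / X.shape[1]

    if step % 500 == 0:
        print(f"step {step}: loss {np.mean(e ** 2):.4f}")
```

Swapping `B` for `W2.T` in the `delta_h` line recovers ordinary backpropagation, which makes this a handy testbed for comparing the two.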

I decided to replicate the paper and take a closer look for myself. Here is what I found: http://www.siarez.com/projects/random-backpropogation