Not-so-pointless request:
Does anyone care to help me out with something? I'm trying to build a neural network in QBasic. I made a little experimental one like this:
         Layer 1     Layer 2     Layer 3
         -------     -------     -------
I-----W-----N-----W-----N-----W-----N-----W-----O
             \----W----/ \----W----/
             /----W----\ /----W----\
I-----W-----N-----W-----N-----W-----N-----W-----O
The W's are the weights, the N's are the nodes (or perceptrons, or artificial neurons, or whatever you might call them), the O's are outputs, and the I's are inputs. I wanted to figure out how to use backpropagation to train the network (in this case, to produce outputs that are simply the inputs doubled) so I could apply it to a much larger net. My stepdad and I ran through an explanation of it, but I can't code it into my program: the explanations I found confuse me, mostly because they involve calculus and I'm only in Algebra 2 (wonderful), but also because it's not very clear how all the equations play into one another. If anyone could explain how to code backpropagation, that would be great. Heck, if anyone could explain the algorithm itself (keeping in mind my level of math), that alone would be great.
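For anyone who wants to poke at it, here's roughly the same layout sketched in Python (not QBasic, but it translates almost line for line). I'm assuming plain weighted-sum nodes with no squashing function, since the target is just the inputs doubled; the function names are mine, not anything standard:

```python
import random

# 2 inputs -> three layers of 2 nodes each -> 2 outputs,
# with a weight on every connection, like the diagram above.

def make_layer(n_in, n_out):
    # One list of weights per node: one weight for each incoming value.
    return [[random.uniform(-0.5, 0.5) for _ in range(n_in)]
            for _ in range(n_out)]

def forward(layers, inputs):
    # Each node outputs the weighted sum of the previous layer's outputs.
    values = inputs
    for layer in layers:
        values = [sum(w * v for w, v in zip(node, values))
                  for node in layer]
    return values

random.seed(1)
layers = [make_layer(2, 2), make_layer(2, 2), make_layer(2, 2)]
print(forward(layers, [1.0, 2.0]))  # untrained, so nowhere near [2.0, 4.0] yet
```

The part I'm missing is the training step: how to turn the difference between what `forward` spits out and the doubled inputs into corrections for each W.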