Neural Networks Part 1: Inside the black box

NOTE: This StatQuest was supported by these awesome people who support StatQuest at the Double BAM level: T. Nguyen, J. Smith, G Heller-Wagner, J. N. M. Ragaisis, S. Shah, P. Tsou, H.M. Chang, S. Özdemir, J. Horn, D. Sharma, S. Cahyawijaya, A. Eng, F. Prado, J. Malone-Lee, N. Fleming.

4 thoughts on “Neural Networks Part 1: Inside the black box”

  1. Thank you for the great video. I don’t understand one thing.
    My understanding was that all the nodes would have the same activation function.
    What I don’t understand is why the last node doesn’t have an activation function. Why is it only doing a simple summation?

    • Actually, there are no rules on how and where activation functions are used. They don’t all have to be the same, and not every node needs to have one. The design is really flexible, and you can do it however you want.
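      The point above can be sketched in a few lines of Python. This is a minimal, made-up example (the weights, biases, and network shape are hypothetical, not taken from the video): the hidden nodes each apply an activation function, while the output node is just a weighted sum plus a bias, with no activation.

      ```python
      import math

      # Hypothetical parameters for a tiny network: 1 input, 2 hidden nodes, 1 output.
      w1, b1 = 2.0, -1.0               # input -> hidden node 1
      w2, b2 = -1.5, 0.5               # input -> hidden node 2
      v1, v2, b_out = 1.0, -2.0, 0.25  # hidden -> output weights and final bias

      def softplus(x):
          # A common activation function: softplus(x) = ln(1 + e^x)
          return math.log(1.0 + math.exp(x))

      def network(x):
          h1 = softplus(w1 * x + b1)   # hidden node 1 uses an activation function
          h2 = softplus(w2 * x + b2)   # hidden node 2 uses an activation function
          # The output node is just a weighted sum -- no activation function.
          # That is a perfectly valid design choice, common for regression outputs.
          return v1 * h1 + v2 * h2 + b_out

      print(network(0.5))
      ```

      Nothing would break if the output node also had an activation function; leaving it as a plain sum simply lets the network produce any real-valued output rather than squashing it into the activation function’s range.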
