Arun Jagota
Dec 5, 2020


This picture depicts the anatomy of an artificial neuron. The vector of inputs x flows through to the output as follows. First, x is dot-multiplied with the weight vector w: w*x = sum_i w_i*x_i. Next, the result is fed through the neuron’s activation function.
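As a minimal sketch (the function and variable names below are my own, not from the article), this forward pass is just a dot product followed by an activation:

```python
import numpy as np

def neuron_output(x, w, activation):
    """Forward pass of a single artificial neuron.

    x: input vector
    w: weight vector (same length as x)
    activation: callable applied to the dot product
    """
    z = np.dot(w, x)      # w*x = sum_i w_i * x_i
    return activation(z)  # feed the result through the activation function
```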

The three most well-known activation functions are shown: sigmoid, linear, and step. Use the linear activation when modeling an output that varies linearly with the input. Use the sigmoid when modeling an output that is a binary classification (0 or 1) of the input. Avoid the step function: it is not differentiable, so gradient-based training cannot be used with it.
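For concreteness, here is one way the three activation functions could be written, together with a usage example built on the neuron_output sketch above (again, my own illustrative code, not the article's):

```python
import numpy as np

def sigmoid(z):
    # Squashes z into (0, 1); suited to binary (0/1) outputs.
    return 1.0 / (1.0 + np.exp(-z))

def linear(z):
    # Identity: the output varies linearly with the input.
    return z

def step(z):
    # Hard threshold at 0; not differentiable, which is why
    # it is best avoided for gradient-based training.
    return np.where(z >= 0, 1.0, 0.0)

# Example: a neuron with (made-up) weights w applied to input x.
w = np.array([0.5, -0.25, 0.1])
x = np.array([1.0, 2.0, 3.0])
print(sigmoid(np.dot(w, x)))  # ~0.57
```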


Arun Jagota

PhD, Computer Science, neural nets. 14+ years in industry as a data science algorithms developer. 24+ patents issued. 50 academic publications. Blogs on ML/data science topics.