This picture depicts the anatomy of an artificial neuron. The input vector x flows through to the output as follows. First, x is combined with the weight vector w via a dot product: w*x = sum_i w_i*x_i. Next, this weighted sum is fed through the neuron's activation function.
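This forward pass can be sketched in a few lines of Python; the function name and signature here are illustrative, not from the original:

```python
def neuron_output(w, x, activation):
    """Forward pass of a single artificial neuron (illustrative sketch)."""
    # Dot product of weights and inputs: w*x = sum_i w_i*x_i
    z = sum(w_i * x_i for w_i, x_i in zip(w, x))
    # Feed the weighted sum through the activation function
    return activation(z)

# Example: weights [0.5, -1.0], inputs [2.0, 1.0], linear activation
print(neuron_output([0.5, -1.0], [2.0, 1.0], lambda z: z))  # 0.0
```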
The three most well-known activation functions (sigmoid, linear, and step) are shown. Use the linear activation when modeling an output that varies linearly with the input. Use the sigmoid when modeling an output that is a binary classification (0 or 1) of the input. Avoid the step function: its derivative is zero everywhere it is defined, so gradient-based training cannot propagate any learning signal through it.
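The three activation functions can be written directly from their standard definitions; the sigmoid here uses the common form 1/(1+e^-z), and the step threshold at zero is an assumed convention:

```python
import math

def linear(z):
    # Identity: output varies linearly with the weighted sum
    return z

def sigmoid(z):
    # Squashes the weighted sum into (0, 1); suited to binary classification
    return 1.0 / (1.0 + math.exp(-z))

def step(z):
    # Hard threshold at 0 (assumed convention); flat gradient elsewhere
    return 1.0 if z >= 0 else 0.0

print(sigmoid(0.0))  # 0.5
print(linear(2.5))   # 2.5
print(step(-0.1))    # 0.0
```

Note that sigmoid(0) = 0.5, the midpoint between the two class labels, which is why its output is often read as a probability of the positive class.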