TensorFlow is an open-source machine learning library developed by Google. One of its applications is building deep neural networks.
tensorflow.nn provides support for many basic neural network operations.
An activation function is a function applied to the output of a neural network layer, which is then passed as input to the next layer. Activation functions are an essential part of neural networks as they provide non-linearity, without which a stack of layers collapses into a single linear model. One of the many activation functions is the softplus function, defined as f(x) = ln(1 + e^x).
Traditional activation functions such as the sigmoid and hyperbolic tangent have lower and upper bounds, whereas the softplus function outputs in the range (0, ∞). The derivative of the softplus function comes out to be f'(x) = 1 / (1 + e^-x), which is the sigmoid function. The softplus function is quite similar to the Rectified Linear Unit (ReLU) function, the main difference being the softplus function's differentiability at x = 0. The research paper "Improving deep neural networks using softplus units" by Zheng et al. (2015) suggests that softplus provides more stabilization and better performance for deep neural networks than ReLU. However, ReLU is generally preferred because it and its derivative are cheaper to compute: evaluating the activation function and its derivative is a frequent operation in neural networks, and ReLU gives faster forward and backward propagation than softplus.
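As a quick numerical check of the definitions above, here is a dependency-free sketch in NumPy (the helper names `softplus` and `sigmoid` are illustrative, not part of TensorFlow): the finite-difference derivative of softplus matches the sigmoid, and softplus tracks ReLU closely away from 0.

```python
import numpy as np

def softplus(x):
    # softplus(x) = ln(1 + e^x); log1p improves accuracy for small e^x
    return np.log1p(np.exp(x))

def sigmoid(x):
    # sigmoid(x) = 1 / (1 + e^-x), the derivative of softplus
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-4.0, 4.0, 9)

# Numerical derivative of softplus agrees with the sigmoid
h = 1e-6
numeric_grad = (softplus(x + h) - softplus(x - h)) / (2 * h)
print(np.allclose(numeric_grad, sigmoid(x), atol=1e-5))  # True

# Softplus is a smooth approximation of ReLU
print(softplus(0.0))   # ln 2 ≈ 0.6931, whereas ReLU(0) = 0
print(softplus(10.0))  # ≈ 10.0000454, very close to ReLU(10) = 10
```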
The function tf.nn.softplus (also available as tf.math.softplus) provides support for softplus in TensorFlow.
Syntax: tf.nn.softplus(features, name=None) or tf.math.softplus(features, name=None)
features: A tensor of any of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
name (optional): The name for the operation.
Return type: A tensor with the same type as that of features.
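The original code listing for this first example did not survive extraction; only its output remains below. As a sketch, the same values can be produced with `tf.nn.softplus` on a constant tensor, and they can be cross-checked without TensorFlow using the definition ln(1 + e^x):

```python
import numpy as np

# Input values from the example below; with TensorFlow installed,
# tf.nn.softplus(tf.constant([1.0, -0.5, 3.4, -2.1, 0.0, -6.5]))
# applies softplus element-wise and returns the same tensor.
a = np.array([1.0, -0.5, 3.4, -2.1, 0.0, -6.5], dtype=np.float32)

# Element-wise softplus(x) = ln(1 + e^x)
out = np.log1p(np.exp(a))
print(out)  # matches the Output values shown below
```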
Input type: Tensor("Const:0", shape=(6,), dtype=float32)
Input: [ 1. -0.5 3.4000001 -2.0999999 0. -6.5 ]
Return type: Tensor("softplus:0", shape=(6,), dtype=float32)
Output: [ 1.31326163e+00 4.74076986e-01 3.43282866e+00 1.15519524e-01 6.93147182e-01 1.50233845e-03 ]
Code #2: Visualization
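The plotting code for this example is also missing from the page. A hedged reconstruction follows: the matplotlib calls are standard, but the variable names and plot styling are my own assumptions, not the original listing.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# 15 evenly spaced points in [-5, 5], matching the Input array below
a = np.linspace(-5.0, 5.0, 15)
out = np.log1p(np.exp(a))  # softplus; tf.nn.softplus(a) yields the same values

print("Input:", a)
print("Output:", out)

plt.plot(a, out, color="red", marker="o")
plt.title("tensorflow.nn.softplus")
plt.xlabel("x")
plt.ylabel("softplus(x)")
plt.savefig("softplus.png")
```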
Input: [-5. -4.28571429 -3.57142857 -2.85714286 -2.14285714 -1.42857143 -0.71428571 0. 0.71428571 1.42857143 2.14285714 2.85714286 3.57142857 4.28571429 5. ]
Output: [ 0.00671535 0.01366993 0.02772767 0.05584391 0.11093221 0.21482992 0.39846846 0.69314718 1.11275418 1.64340135 2.25378936 2.91298677 3.59915624 4.29938421 5.00671535 ]