Difference between std::numeric_limits<T> min, max, and lowest in C++
Last Updated: 11 May, 2021
The std::numeric_limits<T> class template in the <limits> header provides the member functions min(), max(), and lowest() for all numeric data types, along with other member functions.
std::numeric_limits<T>::max(): For any numeric type T, max() gives the maximum finite value representable by T. That is, max() gives a value x of type T such that there is no other finite value y with y > x.
For both integer and floating-point types, max() is the largest representable value; no representable value lies to its right on the number line.
std::numeric_limits<T>::lowest(): For any numeric type T, lowest() gives the lowest finite value x representable by T, such that there is no other finite value y with y < x.
For both integer and floating-point types, lowest() is the least representable value; no representable value lies to its left on the number line. For floating-point types, lowest() is exactly the negative of max(); for two's-complement integer types the magnitudes differ by one, i.e. lowest() is -max() - 1.
std::numeric_limits<T>::min(): For any numeric type T, min() gives the minimum finite value representable by T. So min() gives the smallest possible value of type T, though its exact meaning differs between integer and floating-point types.
For floating-point types with denormalization, min() returns the minimum positive normalized value. Because the value is normalized, its exponent field cannot be 0.
To get the minimum positive denormal (subnormal) value, use std::numeric_limits<T>::denorm_min(). denorm_min() is meaningful only for floating-point types; for integer types it returns 0.
For integer types, min() behaves like lowest(): both give the same lowest finite value. (If min() for integer types had been defined analogously to floating-point types, as the smallest positive value, it would have been 1.)
For example (for float, the binary representation is shown as sign | exponent | mantissa):

Type T | Byte size | Function     | Binary representation              | Value
-------|-----------|--------------|------------------------------------|-------------------
int    | 4         | max()        | 01111111111111111111111111111111   | 2147483647
int    | 4         | lowest()     | 10000000000000000000000000000000   | -2147483648
int    | 4         | min()        | 10000000000000000000000000000000   | -2147483648
int    | 4         | denorm_min() | 00000000000000000000000000000000   | 0
float  | 4         | max()        | 0 11111110 11111111111111111111111 | 3.40282346639e+38
float  | 4         | lowest()     | 1 11111110 11111111111111111111111 | -3.40282346639e+38
float  | 4         | min()        | 0 00000001 00000000000000000000000 | 1.17549435082e-38
float  | 4         | denorm_min() | 0 00000000 00000000000000000000001 | 1.40129846432e-45
Below is the program to illustrate the above concepts:
C++
#include <bitset>
#include <cstring>
#include <iostream>
#include <limits>
using namespace std;

// Portable type-pun: copy the float's bytes into an int so the bit
// pattern can be printed (avoids the undefined behaviour of casting
// a float* to an int* and dereferencing it).
static int floatBits(float f)
{
    int x;
    memcpy(&x, &f, sizeof(f));
    return x;
}

int main()
{
    int x;
    cout << "int " << sizeof(int) << " bytes" << endl;

    x = numeric_limits<int>::max();
    cout << "\tmax :" << endl
         << bitset<8 * sizeof(x)>(x) << endl
         << x << endl;

    x = numeric_limits<int>::lowest();
    cout << "\tlowest :" << endl
         << bitset<8 * sizeof(x)>(x) << endl
         << x << endl;

    x = numeric_limits<int>::min();
    cout << "\tmin :" << endl
         << bitset<8 * sizeof(x)>(x) << endl
         << x << endl;

    // denorm_min() returns 0 for integer types
    x = numeric_limits<int>::denorm_min();
    cout << "\tdenorm_min :" << endl
         << bitset<8 * sizeof(x)>(x) << endl
         << x << endl;

    cout << endl;
    cout << "float " << sizeof(float) << " bytes" << endl;

    float f;
    f = numeric_limits<float>::max();
    cout << "\tmax :" << endl
         << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << f << endl;

    f = numeric_limits<float>::lowest();
    cout << "\tlowest :" << endl
         << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << f << endl;

    f = numeric_limits<float>::min();
    cout << "\tmin :" << endl
         << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << f << endl;

    f = numeric_limits<float>::denorm_min();
    cout << "\tdenorm_min :" << endl
         << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << f << endl;

    return 0;
}
Output:
int 4 bytes
max :
01111111111111111111111111111111
2147483647
lowest :
10000000000000000000000000000000
-2147483648
min :
10000000000000000000000000000000
-2147483648
denorm_min :
00000000000000000000000000000000
0
float 4 bytes
max :
01111111011111111111111111111111
3.40282e+38
lowest :
11111111011111111111111111111111
-3.40282e+38
min :
00000000100000000000000000000000
1.17549e-38
denorm_min :
00000000000000000000000000000001
1.4013e-45
Note: In C++ the default output precision is 6, meaning up to 6 significant digits are printed, so the printed value may differ from the value actually stored. To see the actual value, set a higher precision (for example, with std::setprecision from <iomanip>).
Below is the program to illustrate the same:
C++
#include <bitset>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <limits>
using namespace std;

// Portable type-pun: copy the float's bytes into an int so the bit
// pattern can be printed.
static int floatBits(float f)
{
    int x;
    memcpy(&x, &f, sizeof(f));
    return x;
}

int main()
{
    cout << "\tlowest :" << endl
         << numeric_limits<float>::lowest() << endl;

    // Reading the 6-digit printed value back as a literal gives a
    // float with different low-order bits:
    float f = -3.40282e+38;
    cout << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << endl;

    f = numeric_limits<float>::lowest();
    cout << "\tnumeric_limits<float>::lowest() :" << endl
         << bitset<8 * sizeof(int)>(floatBits(f)) << endl
         << endl;

    cout << "\tusing setprecision:" << endl;
    cout << setprecision(10) << f << endl;

    // The 10-digit literal round-trips to exactly the same bits:
    f = -3.402823466e+38;
    cout << bitset<8 * sizeof(int)>(floatBits(f)) << endl;

    return 0;
}
Output:
lowest :
-3.40282e+38
11111111011111111111111111101110
numeric_limits<float>::lowest() :
11111111011111111111111111111111
using setprecision:
-3.402823466e+38
11111111011111111111111111111111