
Difference between std::numeric_limits<T> min, max, and lowest in C++

Last Updated : 11 May, 2021

The std::numeric_limits<T> class template in the <limits> header provides the min(), max(), and lowest() functions for all numeric types, along with other member functions.

std::numeric_limits<T>::max(): For any type T, std::numeric_limits<T>::max() gives the maximum finite value representable by T. In other words, max() returns a value x such that there is no other finite value y of type T with y > x.

For both integer and floating-point types, max() gives the largest representable value; no other value lies to the right of it on the number line.
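For example, a minimal sketch that queries max() for a few common types (the values noted in the comments assume the usual 32-bit int, 64-bit long long, and IEEE-754 float):

C++

// Query max() for a few common types
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    cout << numeric_limits<int>::max() << endl;       // 2147483647 (32-bit int)
    cout << numeric_limits<long long>::max() << endl; // 9223372036854775807
    cout << numeric_limits<float>::max() << endl;     // ~3.40282e+38 (IEEE-754 float)
    return 0;
}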

std::numeric_limits<T>::lowest(): For any type T, std::numeric_limits<T>::lowest() gives the lowest finite value x representable by T, such that there is no other finite value y with y < x.

For both integer and floating-point types, lowest() gives the least representable value; no other value lies to the left of it on the number line. For IEEE-754 floating-point types, lowest() is simply the negative of max(); for two's-complement integer types it is -max() - 1.
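A minimal sketch of this relationship (it assumes IEEE-754 floats and two's-complement ints, which is what virtually all current platforms use):

C++

// lowest() compared with max()
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    // For IEEE-754 floating-point types, lowest() == -max()
    cout << boolalpha
         << (numeric_limits<float>::lowest()
             == -numeric_limits<float>::max())
         << endl; // true

    // For two's-complement integer types, lowest() == -max() - 1
    cout << (numeric_limits<int>::lowest()
             == -numeric_limits<int>::max() - 1)
         << endl; // true
    return 0;
}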

std::numeric_limits<T>::min(): For any type T, std::numeric_limits<T>::min() gives the minimum value representable by T. What "minimum" means, however, differs between integer and floating-point types.

For floating-point types with denormalization, min() returns the minimum positive normalized value. Since the value is normalized, the exponent field of its representation cannot be all zeros (an all-zero exponent field denotes a subnormal number or zero).

To get the minimum positive denormal (subnormal) value, use std::numeric_limits<T>::denorm_min(). denorm_min() is meaningful only for floating-point types; for integer types it returns 0.
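A minimal sketch contrasting min() and denorm_min() (the values in the comments assume IEEE-754 single precision):

C++

// min() vs. denorm_min()
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    cout << numeric_limits<float>::min() << endl;        // ~1.17549e-38, smallest normalized float
    cout << numeric_limits<float>::denorm_min() << endl; // ~1.4013e-45, smallest denormal float
    cout << numeric_limits<int>::denorm_min() << endl;   // 0 for integer types
    return 0;
}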

For integer types, min() and lowest() return the same lowest finite value. If min() for integer types had been defined the way it is for floating-point types, that value would have been 1.
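A minimal sketch that checks this (the floating-point result shown assumes a typical IEEE-754 float):

C++

// For integer types min() equals lowest();
// for floating-point types it does not
#include <iostream>
#include <limits>
using namespace std;

int main()
{
    cout << boolalpha
         << (numeric_limits<int>::min()
             == numeric_limits<int>::lowest())
         << endl; // true

    cout << (numeric_limits<float>::min()
             == numeric_limits<float>::lowest())
         << endl; // false
    return 0;
}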

For Example:

Type T   Byte size   Function       Binary representation                 Value
int      4           max()          01111111111111111111111111111111      2147483647
                     lowest()       10000000000000000000000000000000      -2147483648
                     min()          10000000000000000000000000000000      -2147483648
                     denorm_min()   00000000000000000000000000000000      0
float    4           max()          0 11111110 11111111111111111111111    3.40282346639e+38
                     lowest()       1 11111110 11111111111111111111111    -3.40282346639e+38
                     min()          0 00000001 00000000000000000000000    1.17549435082e-38
                     denorm_min()   0 00000000 00000000000000000000001    1.40129846432e-45

Below is the program to illustrate the above concepts:

C++




// C++ program to illustrate difference
// between the numeric limits min, max,
// and the lowest
  
#include <bitset>
#include <cstring>
#include <iostream>
#include <limits>
using namespace std;
  
// Driver Code
int main()
{
    int x;
  
    // Size of int
    cout << "int " << sizeof(int)
         << " bytes" << endl;
  
    // numeric_limits<int>::max()
    x = numeric_limits<int>::max();
    cout << "\tmax :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << x << endl;
  
    // numeric_limits<int>::lowest()
    x = numeric_limits<int>::lowest();
    cout << "\tlowest :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << x << endl;
  
    // numeric_limits<int>::min()
    x = numeric_limits<int>::min();
    cout << "\tmin :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << x << endl;
  
    // numeric_limits<int>::denorm_min()
    x = numeric_limits<int>::denorm_min();
    cout << "\tdenorm_min :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << x << endl;
  
    cout << endl;
  
    // Size of float
    cout << "float " << sizeof(float)
         << " bytes" << endl;
    float f;
  
    // numeric_limits<float>::max()
    f = numeric_limits<float>::max();
  
    // copy the binary representation
    // of the float into an integer
    memcpy(&x, &f, sizeof(f));
  
    cout << "\tmax :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << f << endl;
  
    // numeric_limits<float>::lowest()
    f = numeric_limits<float>::lowest();
    memcpy(&x, &f, sizeof(f));
  
    cout << "\tlowest :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << f << endl;
  
    // numeric_limits<float>::min()
    f = numeric_limits<float>::min();
    memcpy(&x, &f, sizeof(f));
    cout << "\tmin :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << f << endl;
  
    // numeric_limits<float>::denorm_min()
    f = numeric_limits<float>::denorm_min();
    memcpy(&x, &f, sizeof(f));
    cout << "\tdenorm_min :" << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << f << endl;
  
    return 0;
}


Output:

int 4 bytes
    max :
01111111111111111111111111111111
2147483647
    lowest :
10000000000000000000000000000000
-2147483648
    min :
10000000000000000000000000000000
-2147483648
    denorm_min :
00000000000000000000000000000000
0

float 4 bytes
    max :
01111111011111111111111111111111
3.40282e+38
    lowest :
11111111011111111111111111111111
-3.40282e+38
    min :
00000000100000000000000000000000
1.17549e-38
    denorm_min :
00000000000000000000000000000001
1.4013e-45

Note: By default, cout prints floating-point values with a precision of 6, i.e. at most 6 significant digits, so the printed value may differ from the value actually stored. To see the stored value more exactly, set a higher precision with std::setprecision.

Below is the program to illustrate the same:

C++




// C++ program to illustrate the
// above concepts
  
#include <bitset>
#include <cstring>
#include <iomanip>
#include <iostream>
#include <limits>
using namespace std;
  
// Driver Code
int main()
{
    // Output is not exact
    cout << "\tlowest :" << endl
         << numeric_limits<float>::lowest()
         << endl;
  
    // The value from the output
    float f = -3.40282e+38;
    int x;
    memcpy(&x, &f, sizeof(f));
    cout << bitset<8 * sizeof(x)>(x)
         << endl
         << endl;
  
    // Correct value
    f = numeric_limits<float>::lowest();
    memcpy(&x, &f, sizeof(f));
    cout << "\tnumeric_limits<float>::lowest() :"
         << endl
         << bitset<8 * sizeof(x)>(x)
         << endl
         << endl;
  
    cout << "\tusing setprecision:"
         << endl;
  
    // output is more precise
    cout << setprecision(10) << f << endl;
  
    // The value from the output
    f = -3.402823466e+38;
  
    // Copy the binary representation
    // of the float into an integer
    memcpy(&x, &f, sizeof(f));
  
    cout << bitset<8 * sizeof(x)>(x)
         << endl;
  
    return 0;
}


Output:

lowest :
-3.40282e+38
11111111011111111111111111101110

    numeric_limits<float>::lowest() :
11111111011111111111111111111111

    using setprecision:
-3.402823466e+38
11111111011111111111111111111111

