Sixth Sense Technology


Steve Mann is known as the "Father of Sixth Sense Technology"; he built a wearable computing device as early as 1990. The integration between humans and robots has been advancing through a range of technologies, and controlling robotic vehicles with hand gestures is a prominent way of deepening that integration. Gesture control improves human-robot interaction by freeing it from conventional input devices. A built-in camera captures real-time video of the user's gestures, and the system combines hand-gesture recognition with image capture, processing, and manipulation. Colored markers worn on the fingers make the gesture movements easy to track. A wireless Zigbee Series 2 module is generally used to transmit the gesture commands.

Sixth Sense Technology

In this article, we will explore the core components of Sixth Sense Technology. We will dive into the working procedure, detailing how gestures are captured, processed, and translated into robotic movements using tools like Arduino and MATLAB. Additionally, we will discuss the applications, advantages, and limitations of this technology, as well as potential future enhancements that could further revolutionize human-robot interaction. Let's Begin!

Components of Sixth Sense Technology

  1. Camera - The key input device, acting as a digital eye. It captures the user's hand gestures as input.
  2. Projector - The key output device; it projects digital information from the device onto a surface or wall.
  3. Mobile computing device - A web-enabled device (typically a smartphone) that acts as the brain of the setup. It is pre-programmed to carry out the tasks.
  4. Mirror - Reflects the projection onto the desired surface.
  5. Microphone - Required when a sheet of paper is used as an interactive surface; it picks up the sound of the user's touch on the paper.
  6. Coloured markers - Worn on the fingertips; their movements are tracked and interpreted as gestures, for example framing a scene to take a picture.

Working Procedure of Sixth Sense Technology

  • The Zigbee coordinator is connected to the serial port of the system through a USB explorer board.
  • An Arduino board is used. The user wears colored tapes on the fingers to provide input gestures to the system.
  • The camera captures real-time video at a fixed frame rate and resolution, which is determined by the hardware of the camera.
  • Color plays an important role in image processing. Each image is composed of an array of M×N pixels, with M rows and N columns. Each pixel holds a value for each of the primary colors red, green, and blue. Based on threshold values, the RGB colors can be differentiated, and in the color-detection step the required color can be picked out of the image.
  • An Arduino Uno board is used to control the robotic vehicle according to the input gestures given by the user.
  • Serial communication between the Arduino and MATLAB is established wirelessly through Zigbee.
  • The captured true-color RGB image is converted into a grayscale image.
  • From the RGB image, the required color (red) is extracted by image subtraction: a red, green, or blue object is detected by subtracting the grayscale image from the corresponding color channel. The gray region obtained after subtraction is then converted into a binary image so that the region of the detected object can be found, as shown in the first sketch after this list.
  • The conversion to binary is required because region properties are computed on a monochromatic image.
  • For the user to control the mouse pointer, it is necessary to determine a point whose coordinates can be sent to the cursor. With the same coordinates, the system can drive the robotic movements.
  • The centroid of the detected region is calculated. The output is a matrix containing the X and Y coordinates of the centroid, which are transmitted to the Zigbee coordinator via the COM port of the system (see the second sketch after this list).
  • Zigbee routers present in the wireless network receive the data from the coordinator and forward it to the Arduino.
  • The Arduino then issues commands to the robotic vehicle, and the appropriate movements, such as forward, reverse, turn left, and turn right, are executed.
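
The color-detection and centroid steps above are typically implemented in MATLAB with the Image Processing Toolbox. The following is a minimal sketch of those steps; the webcam object name cam (from the MATLAB Support Package for USB Webcams), the threshold, and the blob-size value are illustrative assumptions, not values from the original project.

    % Capture a frame and isolate the red marker, as described in the steps above.
    frame   = snapshot(cam);                        % one real-time RGB frame from the camera
    grayImg = rgb2gray(frame);                      % true-color RGB -> grayscale
    redOnly = imsubtract(frame(:, :, 1), grayImg);  % subtract grayscale from the red channel
    redOnly = medfilt2(redOnly, [3 3]);             % median filter to suppress small noise
    bw      = im2bw(redOnly, 0.18);                 % threshold into a binary (monochrome) image
    bw      = bwareaopen(bw, 300);                  % discard regions smaller than 300 pixels
    stats   = regionprops(bw, 'Centroid');          % X and Y coordinates of each detected region
    if ~isempty(stats)
        c = stats(1).Centroid;                      % [x y] centroid of the first detected marker
    end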
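Once a centroid is available, its position has to reach the Arduino over the Zigbee link. The second sketch below shows one way the MATLAB side might map the centroid to a single-character movement command and write it to the Zigbee coordinator using serialport; the COM port, baud rate, and command letters are assumptions for illustration, and the Arduino program on the vehicle would have to interpret them accordingly.

    % Send a movement command to the Zigbee coordinator on the system's COM port.
    zb = serialport("COM4", 9600);        % coordinator attached via the USB explorer board

    [h, w] = size(bw);                    % frame dimensions from the binary image above
    x = c(1);  y = c(2);                  % centroid coordinates of the detected marker
    if y < h/3
        cmd = 'F';                        % marker in the top third    -> forward
    elseif y > 2*h/3
        cmd = 'B';                        % marker in the bottom third -> reverse
    elseif x < w/3
        cmd = 'L';                        % marker in the left third   -> turn left
    elseif x > 2*w/3
        cmd = 'R';                        % marker in the right third  -> turn right
    else
        cmd = 'S';                        % marker near the centre     -> stop
    end

    write(zb, uint8(cmd), "uint8");       % the Arduino reads this byte and drives the motors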

Applications

  1. Viewing a map (calling one up with a gesture)
  2. Checking the time
  3. Creating a multimedia reading experience
  4. Taking pictures
  5. Drawing applications (zooming in and out using hand movements)
  6. Making calls and interacting with physical objects
  7. Getting flight updates and product information

Advantages of Sixth Sense Technology

  1. Portable
  2. Supports multi-touch and multi-user interaction
  3. Cost-effective, since the device costs around $300
  4. Data from machines can be accessed directly in real time
  5. Ideas can be mind-mapped anywhere
  6. Open-source software

Disadvantages of Sixth Sense Technology

  1. Hardware limitations of the devices that we currently carry with us
  2. For example, many phones do not permit the external camera feed to be manipulated in real time
  3. As a result, some image processing can only happen as post-processing rather than live

Future Enhancements

There is plenty of scope for further modification of this technology:

  1. To get rid of color markers
  2. To embed the camera and projector inside mobile and computing devices
  3. To have 3D gesture tracking
  4. To make Sixth Sense work as a fifth sense for disabled people


Conclusion

The technology uses a wearable gestural interface that superimposes the digital world on the material world. Beyond the five natural senses of sight, hearing, smell, taste, and touch, it gives the user an extra, sixth sensory channel for interacting with the physical world. Sixth Sense technology removes the need to carry tedious laptops and other heavy devices, and connecting the physical world with the digital world in this way is a major milestone for mankind.
