
What does it mean for a Machine to Think?

Last Updated : 22 Jun, 2020

Thinking, as an activity, is an ambiguous thing to explore. Encompassing a seemingly incomprehensible span of tasks performed by the human brain, or any other brain for that matter, it has been a subject of discussion for centuries. The following text seeks to define the context of ‘think’ for a self-aware Artificial Intelligence. Here, by self-awareness, we mean an entity that can choose its own preferred course of action and carry out activities independently.

When a machine (or AI) is designed to conduct a task involving cognitive skills, its performance is naturally measured by comparing it to a human, since we consider ourselves the benchmark of intelligence. Such tasks include arriving at decisions without external intervention, performing complex day-to-day activities, and learning and improving from past experiences. But what one might overlook is the fact that the majority of the actions a person performs, however trivial they may seem, stem from an intention or instinct: a sense of purpose which humans and animals do have, as opposed to machines, which borrow the purposes of their creators.

The motivation behind this purpose may vary from individual to individual, yet there is a certain set of desires, fears, and instincts common to all of us: fear of death, desire for food and shelter, producing offspring, gaining societal stature, and other primal drives that have been inherent to us and other primates over the generations. Both consciously and subconsciously, most of our thought processes, and consequently our actions, aim towards accomplishing these. Of course, there are also acquired belief systems, desires, and ethics that one develops through routine experience. These impel us to make and act on many decisions in life: I love to play chess while some people don’t, and some people may believe in a religion that I do not. I may desire to create works of poetry or fear the fall of stock prices, and can act accordingly. Apart from these, some thoughts are involuntary and don’t seek to fulfill any conscious motive: people dream in their sleep, worry about getting old, and wonder about the origins of the universe. These sets of desires, fears, and belief systems are what largely separate our mind from a machine’s; when we ‘think’, our mind tends to strive towards fulfilling one of them. Now, what about a machine/computer?

A computer is inherently a tool that follows the instructions provided by a programmer, step by step. It won’t have an intent unless its creator explicitly provides it with one. The ‘instincts’ discussed above, on the other hand, are an outcome of genetic, psychological, and environmental factors that we carry or have picked up over time. Moreover, several aspects of the mind, like consciousness and dreams, that are not even properly understood by us also influence our thought processes.

So, how can we program into a computer attributes of the human brain that we don’t even understand ourselves?

Can a computer ever experience low self-esteem, feel regret over installing a crappy Windows update, or slip into a dream during standby mode? This is the domain where most critics of AI deny the very possibility of it ever matching humans.

Most probably, we cannot know, at least not until we make substantial progress in the areas of psychology and neuroscience. But we can strike a bargain: we can make use of these ‘instincts’ in making machines complete tasks or make judgments, simulating the course of action a human would follow to do a similar job, as we shall discuss below. The nature of the cognitive abilities required by a machine, in my opinion, depends upon the nature of the job. If one ventures to judge the form of approach a machine, or in the present context a digital computer, needs to follow, the subject of discussion boils down to two situations:

1. Tasks involving a rational approach and logical reasoning: This section covers problems whose solution requires a logical approach, e.g. solving a mathematical equation, finding the shortest route in Google Maps, recognizing the severity of a breast cancer cell from a mammography report, and other situations where one has to find the optimal solution. If no single solution exists, a probabilistic distribution over several candidates might do, but in any case there is no point in being influenced by emotions and external factors. The machine seeks to choose the most rational path to solve the problem and, unlike us, is devoid of emotional constraints and societal barriers. The performance of an AI in such tasks determines its rational aptitude. Domains such as image/voice recognition and natural language processing fall under this.

2. Tasks subject to human perspective, involving emotional instincts: This section is a more complex one. The majority of decisions and jobs we are required to carry out are constrained by the ethics, feelings, and societal boundaries we talked about earlier, and, as mentioned above, some of these are impossible to explicitly code into a computer. Tasks like these include creating art, judging the optimal punishment for a convicted criminal, deciding whether to help a needy person, etc. But a possible breakthrough can occur if we consider the following rule:

The emotional intelligence of an artificial entity can be defined as a measure of what it can exhibit in its actions rather than what it experiences.

Instead of pondering over whether a machine can ‘experience’ emotional or subconscious instincts as we humans do, we can simply program the implications those instincts would have on the required jobs. Factors involving social acceptance and morality that are hard to document can be programmed to be ‘learned’ over time by interacting with others, much as a machine learning model is trained. Other instincts, like feelings, desires, and fears, that can’t be made to be directly ‘felt’ can be hard-coded so that they influence the actions of the AI the way the programmer intends. For example, a self-aware rover doesn’t go near a cliff because that increases its probability of crashing, and an autonomous car stops before a crossing to avoid a collision, but all of that is the consequence of the constraints programmed into it and the habits it has acquired by interacting with its neighbors.
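
To make this idea concrete, here is a minimal sketch, assuming a hypothetical rover moving along a single axis. All names, constants, and the world model below are invented purely for illustration and are not taken from any real system. The point is that the ‘fear’ of the cliff is never felt; it is just a hard-coded filter over candidate actions, while the rational part of the agent optimizes within whatever the filter allows.

```python
# Hypothetical illustration: a "survival instinct" hard-coded as a
# constraint on actions rather than as a felt experience.

CLIFF_EDGE_X = 10.0   # assumed position of the cliff edge along one axis
SAFETY_MARGIN = 2.0   # how close the rover is allowed to get (programmer's choice)

def is_safe(position: float) -> bool:
    """The hard-coded 'instinct': any position past the margin is forbidden."""
    return position <= CLIFF_EDGE_X - SAFETY_MARGIN

def choose_move(position: float, goal: float, step: float = 1.0) -> float:
    """Among the allowed moves, rationally pick the one closest to the goal."""
    candidates = [position + step, position - step, position]  # staying put is always an option
    safe_moves = [p for p in candidates if is_safe(p)]
    # Staying put is a candidate, so safe_moves is never empty while the rover is safe.
    return min(safe_moves, key=lambda p: abs(goal - p))

pos = 0.0
for _ in range(15):
    pos = choose_move(pos, goal=20.0)  # the goal lies beyond the cliff
print(pos)  # 8.0: the rover halts SAFETY_MARGIN short of the edge
```

To a third observer the rover appears to ‘refuse’ to approach the cliff even though the goal lies beyond it, and one could reasonably describe it as cautious, yet nothing here is experienced; the behavior is entirely a product of the constraint its programmer chose.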

Hence, we can sum up by saying that emotional instincts we cannot make a machine experience for itself can be molded into its actions in such a way that, to a third observer, the machine is an emotionally intelligent entity making rational decisions while considering ethical and societal aspects, depending on the nature of the situation it is handling. But even an AI of such capability is still a product of the intents and commands of its programmer. Its independence and self-awareness are an illusion. Under the provided guidelines and resources, it interacts with its surroundings and seeks the most viable course of action to follow the command of its creator, while keeping in mind the emotional and societal constraints.

