Introduction to Probabilistic Computing

Last Updated : 05 Aug, 2020

In recent years, the volume of information collected in business, in science, and on the Internet has exploded, and computers are increasingly called upon to help people interpret and act on all of that data.

For example, what do local temperature data tell us about the global climate system? What do web surfing and purchases tell us about consumers? How can genetic data be used to personalize medical treatment?

These problems seem different from one another, but they are actually similar: they all require inductive inference, generalizing from observations of the world back to their underlying causes. The computers we use were not designed for this and are not good at it. Why?

Computers were originally designed to solve scientific and technical problems, but we soon started using them for business needs as well. Since their arrival, computers have also become communication and entertainment devices. But what happens when computers are asked to make sense of data?

A computer can be thought of as a machine that executes a set of instructions telling it how to transform inputs into outputs. There are two ways in which computers are used to interpret or understand data: simulation and inference.

In simulation, the machine starts with some background assumptions, takes as input a configuration of the world, and produces as output an observed trajectory. This is the easy direction: the machine executes its instructions in the same direction as the process it is modelling, from causes to effects.
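As a minimal sketch of this forward direction, consider a toy world whose entire configuration is the bias of a coin. The simulator goes from the cause (the bias) to an observed trajectory (a sequence of flips). The function name and parameters here are hypothetical, chosen only for illustration:

```python
import random

def simulate_coin(bias, n_flips, seed=0):
    # Forward simulation: given a cause (the coin's bias),
    # produce an observed trajectory (a sequence of flips).
    rng = random.Random(seed)
    return ["H" if rng.random() < bias else "T" for _ in range(n_flips)]

flips = simulate_coin(bias=0.7, n_flips=10)
print(flips)
```

Writing this is straightforward precisely because it follows the causal direction: we state how the bias produces flips, and the machine executes those instructions.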

Inference is the reverse problem. The machine starts with the same background assumptions but takes as input an observed trajectory and produces as output a configuration of the world that explains it. Here the machine has to reason from facts back to their probable causes. A common challenge in making inferences about data is that there are usually many possible explanations for a particular output; in other words, there is fundamental uncertainty about which explanation is correct. This uncertainty is an essential issue when explaining and interpreting data. It is so prevalent that we cannot expect absolutely certain answers about data, but we can make good guesses that incorporate as much knowledge as possible.
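The reverse direction for the same toy world looks like this: given a trajectory of flips, score each candidate cause (each candidate bias) by how well it explains the data, and turn the scores into a probability distribution rather than a single certain answer. This is an illustrative sketch using a small grid of candidate biases, not a general-purpose inference engine:

```python
def infer_bias(observed, candidate_biases):
    # Inference: given an observed trajectory, score each candidate
    # cause (a bias value) by its likelihood -- the probability it
    # would have produced the data -- then normalize the scores into
    # a probability distribution over causes.
    heads = observed.count("H")
    tails = len(observed) - heads
    scores = [b ** heads * (1 - b) ** tails for b in candidate_biases]
    total = sum(scores)
    return [s / total for s in scores]

# Three heads and one tail: the posterior favours a heads-leaning bias,
# but keeps some probability on the alternatives.
posterior = infer_bias(["H", "H", "H", "T"], [0.25, 0.5, 0.75])
print(posterior)
```

Note that the output is a distribution over explanations, not a single answer: this is exactly the "fundamental uncertainty" the text describes.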

Good guesses balance consistency with background knowledge and fit to the data, without introducing unnecessary complexity. In traditional computing, instructions for inference are harder to write than those for simulation, because we typically encode scientific and technical knowledge in terms of how one thing causes another. It is therefore much easier to use that knowledge in the forward direction than in reverse. Yet these reverse, or inference, applications are often more valuable, both to business and to society.

Probabilistic computers automatically transform simulation instructions into inference programs and manage the uncertainty about causal explanations. They are machines designed to interpret data.
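One simple way to see how a simulator can be turned into an inference program is rejection sampling: repeatedly guess a cause, run the unmodified simulator forward, and keep only the guesses whose simulated output matches the observation. The surviving guesses are samples from the distribution over causes. This is a hedged sketch of the idea (real probabilistic systems use far more efficient algorithms), with all names chosen for illustration:

```python
import random

def simulate(bias, n, rng):
    # The same kind of forward simulator as before.
    return ["H" if rng.random() < bias else "T" for _ in range(n)]

def infer_by_rejection(observed, n_samples=20000, seed=1):
    # Turn the simulator into inference: guess causes at random,
    # simulate forward, and keep only the causes whose output
    # exactly reproduces the observed trajectory.
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_samples):
        bias = rng.random()                          # guess a cause
        if simulate(bias, len(observed), rng) == observed:
            accepted.append(bias)                    # it explains the data
    return accepted

samples = infer_by_rejection(["H", "H", "H", "T"])
```

The inference code never had to be written in the reverse direction: only the forward simulator was specified, which is precisely the convenience probabilistic computing aims for.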
