In this article, we discuss Toivonen's algorithm in data analytics.

**Toivonen’s algorithm:**

It uses randomness in a different way from the simple sampling algorithm. Given sufficient main memory, this algorithm uses one pass over a small sample and one full pass over the data. It gives neither false negatives nor false positives, but there is a small, nonzero probability that it fails to produce any answer at all. In that case, it must be repeated until it gives an answer. The average number of passes needed before it produces all and only the frequent itemsets is a small constant.

- Toivonen’s algorithm begins by selecting a small sample of the input dataset and finding the candidate frequent itemsets from it.
- The procedure is exactly the same as in the simple randomized algorithm, except that it is essential in this algorithm to set the threshold to something less than its proportional value.
- That is, if the support threshold for the complete dataset is s, and the sample size is a fraction p of the dataset, then when looking for frequent itemsets in the sample, use a threshold such as 0.9ps or 0.8ps.
- The smaller we make the threshold, the more main memory is needed for counting all itemsets that are frequent in the sample, but the more likely we are to avoid the situation where the algorithm fails to provide an answer.
- Having constructed the collection of frequent itemsets for the sample, we next build the negative border. This is the collection of itemsets that are not frequent in the sample, but all of whose immediate subsets (subsets formed by removing exactly one item) are frequent in the sample.
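The sample pass described above can be sketched as follows. This is a minimal illustration, not an optimized implementation: the helper names (`frequent_itemsets`, `negative_border`, `sample_pass`) are my own, and the sample is mined with a brute-force Apriori-style search for clarity.

```python
import random
from itertools import combinations  # stdlib; handy if you extend the sketch

def frequent_itemsets(baskets, min_count):
    """Brute-force Apriori: all itemsets appearing in >= min_count baskets."""
    frequent = set()
    items = {i for b in baskets for i in b}
    candidates = [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(1 for b in baskets if c <= b) for c in candidates}
        level = {c for c, n in counts.items() if n >= min_count}
        frequent |= level
        # Grow only candidates whose every immediate subset is frequent.
        next_items = {i for c in level for i in c}
        candidates = list({c | {i} for c in level for i in next_items - set(c)
                           if all((c | {i}) - {j} in level for j in c | {i})})
    return frequent

def negative_border(frequent, items):
    """Itemsets not frequent in the sample, all of whose immediate
    subsets (remove exactly one item) are frequent in the sample."""
    freq = frequent | {frozenset()}  # the empty set counts as frequent
    border = set()
    for f in freq:
        for i in items - set(f):
            cand = f | frozenset([i])
            if cand not in freq and all(cand - {j} in freq for j in cand):
                border.add(cand)
    return border

def sample_pass(baskets, s, p, safety=0.9):
    """One pass over a random sample with the lowered threshold 0.9*p*s."""
    sample = [b for b in baskets if random.random() < p]
    min_count = max(1, int(safety * p * s))  # e.g. 0.9ps, per the text
    freq = frequent_itemsets(sample, min_count)
    items = {i for b in baskets for i in b}
    return freq, negative_border(freq, items)
```

Every itemset in the negative border is a frequent itemset plus one extra item, which is why `negative_border` only needs to extend members of the frequent collection (and the empty set) by a single item.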

To complete Toivonen’s algorithm, we make a pass through the entire dataset, counting all the itemsets that are frequent in the sample or are in the negative border.
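The concluding full pass only needs to count a fixed candidate set: the union of the sample-frequent itemsets and the negative border. A minimal sketch (the function name is illustrative):

```python
def full_pass_counts(baskets, candidates):
    """Single pass over the full dataset, counting every candidate itemset.
    `candidates` is the union of the sample-frequent itemsets and the
    negative border; nothing else needs to be counted."""
    counts = {frozenset(c): 0 for c in candidates}
    for basket in baskets:
        for c in counts:
            if c <= basket:  # candidate is a subset of this basket
                counts[c] += 1
    return counts
```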

**There are two possible outcomes:**

- No member of the negative border is frequent in the complete dataset. In this case, the correct set of frequent itemsets is exactly those itemsets from the sample that were found to be frequent in the whole.
- Some members of the negative border are frequent in the complete dataset. In this case, we cannot be sure that there are not some even larger sets, in neither the negative border nor the collection of frequent itemsets for the sample, that are also frequent in the whole. Thus, we can give no answer at this time and must repeat the algorithm with a new random sample.
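The two outcomes above reduce to a simple decision, sketched below under the same assumptions as before (illustrative function name; `counts` maps each candidate itemset to its count from the full pass, and `s` is the support threshold for the whole dataset):

```python
def toivonen_decide(counts, border, s):
    """Return the frequent itemsets, or None when the algorithm must be
    repeated with a new random sample."""
    if any(counts[b] >= s for b in border):
        return None  # a negative-border itemset is frequent: no answer yet
    # No border member is frequent, so the answer is exactly the
    # sample-frequent itemsets that are also frequent in the whole.
    return {c for c, n in counts.items() if n >= s and c not in border}
```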

**Why Toivonen’s Algorithm Works:**

Clearly, Toivonen’s algorithm never produces a false positive, since it only reports as frequent those itemsets that have been counted and found to be frequent in the whole. To argue that it never produces a false negative, we must show that when no member of the negative border is frequent in the whole, there can be no itemset that is as follows.

- It is frequent in the complete dataset.
- But it is in neither the negative border nor the collection of frequent itemsets for the sample.

Suppose such an itemset S existed, and let T be a smallest subset of S that is not frequent in the sample (T exists, since S itself is not frequent in the sample). Every immediate subset of T must then be frequent in the sample, so T belongs to the negative border. But T is a subset of S, which is frequent in the whole, so T is also frequent in the whole, contradicting the assumption that no member of the negative border is frequent in the whole.
