
Python | Remove duplicates from nested list

Last Updated : 15 May, 2023

The task of removing duplicates from a flat list comes up many times, but when we deal with a more complex data structure such as a nested list, duplicate sublists may hold the same elements in a different order, and we need different techniques to handle this type of problem. Let's discuss certain ways in which this task can be achieved.

Method #1: Using sorted() + set()

This particular problem can be solved using the above functions. The idea here is to sort each sublist, convert it to a tuple, and then build a set, which removes the duplicate entries since a set stores only one copy of each element.

Python3

# Python3 code to demonstrate
# removing duplicate sublist
# using set() + sorted()
 
# Initializing list
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1],
             [1, 2, 3], [3, 4, 1]]
 
# Printing original list
print("The original list : " + str(test_list))
 
# Removing duplicate sublist
# using set() + sorted()
res = list(set(tuple(sorted(sub)) for sub in test_list))
 
# Printing result
print("The list after duplicate removal : " + str(res))


Output:

The original list : [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]
The list after duplicate removal : [(-1, 0, 1), (1, 3, 4), (1, 2, 3)]

Time complexity: O(nmlog(m)), where n is the number of sublists and m is the length of each sublist. This is because we are using the sorted() function which takes O(m*log(m)) time to sort each sublist and we are using this function for each sublist.
Auxiliary space: O(nm), where n is the number of sublists and m is the length of each sublist. This is because we are creating a tuple of each sublist and storing it in a set, so the space required would be O(nm).
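
A practical note: the result above is a list of tuples, because only hashable (immutable) values can be stored in a set. If a list of lists is preferred as the final output, the tuples can be converted back afterwards. A small sketch built on the same test_list (the order of the result is arbitrary, since sets do not preserve insertion order):

Python3

# Same idea as Method #1, but converting the unique tuples back to lists
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1],
             [1, 2, 3], [3, 4, 1]]

res = [list(tup) for tup in set(tuple(sorted(sub)) for sub in test_list)]

print("The list after duplicate removal : " + str(res))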

Method #2: Using set() + map() + sorted()

The task performed by the generator expression in the above method can also be carried out with map() and a lambda function, which applies the same sort-and-tuple conversion to each and every sublist.

Python3

# Python3 code to demonstrate
# removing duplicate sublist
# using set() + map() + sorted()
 
# Initializing list
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1],
             [1, 2, 3], [3, 4, 1]]
 
# Printing original list
print("The original list : " + str(test_list))
 
# Removing duplicate sublist
# using set() + map() + sorted()
res = list(set(map(lambda i: tuple(sorted(i)), test_list)))
 
# Printing result
print("The list after duplicate removal : " + str(res))


Output:

The original list : [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]
The list after duplicate removal : [(-1, 0, 1), (1, 3, 4), (1, 2, 3)]

Time complexity: O(n*m*log(m)), where n is the number of sublists and m is the length of each sublist. sorted() takes O(m*log(m)) time per sublist and is applied, through map(), to each of the n sublists.
Auxiliary space: O(nm), where n is the number of sublists and m is the length of each sublist. This is because we are creating a tuple of each sublist and storing it in a set, so the space required would be O(nm).

Method #3: Using sorted() and the in, not in operators

Here each sublist is sorted first, and a result list is then built by appending a sorted sublist (as a tuple) only if it is not already present in the result.

Python3

# Python3 code to demonstrate
# removing duplicate sublist
 
# Initializing list
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1],
             [1, 2, 3], [3, 4, 1]]
 
# Printing original list
print("The original list : " + str(test_list))
 
# Removing duplicate sublist
# sort each sublist first
res1 = []
for i in test_list:
    x = sorted(i)
    res1.append(x)

# keep only the first occurrence of each sorted sublist
res = []
for i in res1:
    if tuple(i) not in res:
        res.append(tuple(i))
 
# Printing result
print("The list after duplicate removal : " + str(res))


Output:

The original list : [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]
The list after duplicate removal : [(-1, 0, 1), (1, 2, 3), (1, 3, 4)]
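
Note that the check tuple(i) not in res scans the result list on every iteration, so this method runs in O(n^2 * m) time in the worst case. For larger inputs, a set of already-seen tuples gives average O(1) membership tests while still preserving the order of first appearance. A minimal sketch of that variant (the names seen and key are just illustrative):

Python3

# Same logic as Method #3, but a set is used for fast membership checks
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1],
             [1, 2, 3], [3, 4, 1]]

seen = set()
res = []
for sub in test_list:
    key = tuple(sorted(sub))
    if key not in seen:
        seen.add(key)
        res.append(key)

print("The list after duplicate removal : " + str(res))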

Method #4: Using Numpy

This method makes use of the numpy library’s unique() function to remove duplicates from the nested list.

  1. We will be using a list comprehension to sort each sublist using the numpy.sort() function.
  2. Convert the resulting list of arrays to a numpy array using np.array()
  3. Using the np.unique() function to remove duplicates along the rows (axis=0).
  4. The resulting array is then converted back to a list using the tolist() method.

Example:

Python3

# Importing numpy module
import numpy as np
 
# Initializing nested list
test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]
 
# Printing original list
print("The original list : " + str(test_list))
 
# Removing duplicates
# using numpy
res = np.unique(np.array([np.sort(sub) for sub in test_list]), axis=0)
 
# Printing result
print("The list after duplicate removal : " + str(res.tolist()))


Output:

The original list : [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]
The list after duplicate removal : [[-1, 0, 1], [1, 2, 3], [1, 3, 4]]

Time Complexity: O(n*m*(log(m) + log(n))), where n is the number of sublists and m is the length of each sublist. Sorting each of the n sublists with np.sort() takes O(m*log(m)) time, and np.unique() then sorts the n rows of the resulting array (each of length m) to find the duplicate rows, which takes about O(n*log(n)*m) time.

Auxiliary Space: O(n*m), where n is the number of sublists and m is the length of each sublist. The sorted sublists are collected into an n x m numpy array, and np.unique() returns another array of at most that size.
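
If the unique rows should come back in the order they first appear in the original list rather than in sorted order, np.unique() can also return the index of the first occurrence of each unique row via its return_index argument. A small sketch of that variant (arr and first_idx are illustrative names):

Python3

# Importing numpy module
import numpy as np

test_list = [[1, 0, -1], [-1, 0, 1], [-1, 0, 1], [1, 2, 3], [3, 4, 1]]

# Sort each sublist, then find the index of the first occurrence of each unique row
arr = np.array([np.sort(sub) for sub in test_list])
_, first_idx = np.unique(arr, axis=0, return_index=True)

# Re-index the rows by their first-seen positions to keep the original order
res = arr[np.sort(first_idx)].tolist()

print("The list after duplicate removal : " + str(res))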


