Round 1: Online test on the CoCubes platform, comprising two sections.
- Aptitude (30 questions in 30 minutes): It had two subsections, logical reasoning (20 questions) and quantitative (10 questions), all paragraph-based.
- Coding Section: Two very easy questions in 30 minutes. Everyone got a different set. Some of the questions were:
- Sum of all the digits of a number repeatedly until the sum is a single digit. (https://www.geeksforgeeks.org/finding-sum-of-digits-of-a-number-until-sum-becomes-single-digit/)
- Given a number n, find the number of cards required to make a card pyramid of level n.
- Simple password checker with given constraints like at least one capital, at least two numbers and a special character, etc.
- Rotation of a linked list in groups of k (https://www.geeksforgeeks.org/rotate-linked-list-block-wise/), and some other simple questions.
Round 2: Problem-solving pen-and-paper coding test. 35 people were shortlisted for this round. We were divided into groups of nine and called group by group. Each group was given a completely open-ended question, and we had to write code in any language of our choice. Each individual was given 45 minutes. The four questions were:
- Problem Statement: Given a graph implementation like, for example, Facebook's, find the minimum degree of separation between two given people. It is also given that the graph is implemented using linked lists. Those who could see that the question demanded the minimum path between two given nodes in a graph represented through an adjacency list solved the problem by applying Dijkstra's algorithm with either of the given nodes as the source (since the edges are unweighted, a plain BFS suffices as well).
- Problem Statement: Find trending words on Twitter. Again, a very blunt, open-ended question that didn't define much. They wanted to test analytical and coding skills as well as the thought process of the students. They were expecting features like frequency-based sorting of the words giving priority to recent timestamps (that's what trending is, right!), removing special characters like #, $, @, etc., not counting common words like is, am, are, and the prepositions, and treating two or more similar words, for example #metooo and #meeeeeetoooouuuuuu, as the same, among other things.
- Problem Statement: Maximise the profit from buying and selling a stock, given that you can buy and sell at most k times. https://www.geeksforgeeks.org/maximum-profit-by-buying-and-selling-a-share-at-most-k-times/
- Problem Statement: Krithika has got a new Prime subscription and has watched k movies. Help the Prime people suggest her the (k+1)th movie. Basically, design a personalized recommendation engine (without using ML libraries, obviously; they wanted C/C++/Java/Python code). This is the question I was asked. Again, a very open-ended question. They had come with plans to grill us and check our thought process. I approached this problem by using both of the following conditions to filter the remaining movies:
- 1. Item-based filtering: you have watched this movie, so you are more likely to watch movies of this type. For this, I created a C++ structure called Movie, which contained components like genre, rating, studio, an array of actors, a release date, and an array of Person structs listing the people who had watched the movie.
- 2. User-based filtering: people who have watched this movie have also watched these movies. For this feature, I created a Person structure containing a name, age, nationality, profession, and an array of Movie structs the person has watched. The process was: for each movie she has watched, I first compared it with all the features of every movie in the database (which, for now, is just an array of movies) and generated a score according to an arbitrarily assigned priority. For the user-based filtering, each movie she has watched carries an array of persons who have also watched it, so I traversed that array, and within it each person's array of watched movies, compared those as well, and combined everything into a final score per candidate. Finally, the movie with the highest score among all those generated is recommended.
Round 3: Technical Interview. After writing and submitting the code for the last round, everyone was called one by one to explain their code, followed by a discussion of the projects they had done and some SQL queries. In my case, there were three interviewers together; I had to explain my logic first, and then they started questioning.
- You have assigned the priorities arbitrarily to generate the score. What if a priority is wrong for the person? If the recommendation is wrong, the person will obviously not watch the movie or will leave it in between. So, take a time limit; if the movie has not been watched by then, give priority to some other feature and recommend some other movie, with the different features kept in a circular queue. Apart from this, all the recommended-but-not-watched movies should be kept in an array of options and re-recommended from time to time: give them a rest for now, then compare them again against the top-scoring movies when choosing the next recommendation.
- Although you have covered many great features, doing so takes O(n³) time, and for such a large database, computing and showing this dynamically will be impossible. How are you going to improve this? Well, for the comparisons, I can partition the movies into different buckets so that the exhaustiveness is reduced, and put them into sets that support O(log n) searching. That way, two levels of loops are converted to log n each, reducing the time complexity to O(n·(log n)²).
After this, I was asked about the projects I had done. One of the interviewers was a networking and ML specialist, and my project was detection of DDoS attacks (HTTP GET and POST flooding, DNS reflection and amplification) using ML. So, in both areas, I was eaten up raw.
Round 4: HR Interview. If you have reached this far, it's highly likely that you will succeed, because the last round was the grilling elimination round. I was asked simple HR questions like: why should we hire you; tell me something you have done that is not on your resume (I had a plethora); preferred location; number of siblings; etc. Then: any questions? (Tip: have some questions for them!)
Finally, the result came: I was selected along with 8 other people. I'd like to thank everyone who helped me, especially GeeksforGeeks and InterviewBit.