Round 1: The position was for a Big Data Engineer experienced with ETL and big data tools. The first round tests how strong your basic SQL knowledge is. From there, the questions move on to Hive, Apache Spark and HDFS.
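As a rough idea of the basics being tested, a typical warm-up is an aggregation with GROUP BY and HAVING. The table and data below are hypothetical, purely for illustration (the actual questions asked will vary); sqlite3 is used here just to keep the SQL runnable.

```python
import sqlite3

# Hypothetical table and sample data for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER, name TEXT, dept TEXT, salary INTEGER);
    INSERT INTO employees VALUES
        (1, 'Asha',  'ETL',   70000),
        (2, 'Ravi',  'ETL',   65000),
        (3, 'Meena', 'Infra', 80000),
        (4, 'John',  'Infra', 75000),
        (5, 'Sara',  'ETL',   72000);
""")

# Classic basics question: average salary per department,
# restricted to departments with more than one employee.
rows = conn.execute("""
    SELECT dept, AVG(salary) AS avg_salary
    FROM employees
    GROUP BY dept
    HAVING COUNT(*) > 1
    ORDER BY dept
""").fetchall()

for dept, avg_salary in rows:
    print(dept, avg_salary)
```

Being able to explain the difference between WHERE (filters rows before grouping) and HAVING (filters groups after aggregation) is exactly the kind of fundamentals this round probes.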
Round 2: Once they feel you are good at the basics, you move on to more complex SQL queries and logic, where your analytical SQL writing ability is tested. After that come scenario-based questions on Spark optimization techniques and Hive query tuning.
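A common analytical-SQL exercise at this stage is "top-N per group", which plain GROUP BY cannot express directly and which calls for a window function. The data below is hypothetical and sqlite3 is used only to make the query runnable; the same RANK() OVER (PARTITION BY ...) pattern carries over to Hive and Spark SQL.

```python
import sqlite3

# Hypothetical sales data for illustration only.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (rep TEXT, region TEXT, amount INTEGER);
    INSERT INTO sales VALUES
        ('Asha',  'North', 500),
        ('Ravi',  'North', 700),
        ('Meena', 'South', 400),
        ('John',  'South', 900),
        ('Sara',  'North', 300);
""")

# Rank reps within each region by total amount, then keep the top seller.
# Window functions run after GROUP BY, so RANK() can order by SUM(amount).
top = conn.execute("""
    SELECT region, rep, total
    FROM (
        SELECT region, rep, SUM(amount) AS total,
               RANK() OVER (PARTITION BY region
                            ORDER BY SUM(amount) DESC) AS rnk
        FROM sales
        GROUP BY region, rep
    )
    WHERE rnk = 1
    ORDER BY region
""").fetchall()

for region, rep, total in top:
    print(region, rep, total)
```

Interviewers often follow up by asking the difference between RANK(), DENSE_RANK() and ROW_NUMBER() on tied values, so it is worth knowing all three.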
Please ensure that your basics are solid, as that is primarily what the interviewer is looking for.