
Decision Trees: Top Interview Questions



Welcome to the third installment of our data science and machine learning interview series! In this blog post, we'll dive into decision trees, one of the most popular and widely used algorithms in machine learning. They are simple, easy to understand and interpret, can be used for both classification and regression tasks, and are a fundamental building block of many ensemble methods like Random Forests and Boosting.


Due to their popularity, decision trees are a common topic in data science and machine learning interviews. In this blog, we will cover the top interview questions on decision trees that you should be prepared to answer if you want to ace your next interview. We will discuss various topics such as how decision trees work, the advantages and disadvantages of using decision trees, how to handle categorical variables, and the different types of decision tree algorithms like ID3, C4.5, and CART. These questions will help you to refresh your knowledge of decision trees and give you a better understanding of the subject.


So, without further ado, let's delve into the top decision tree interview questions!



Q1. What is a decision tree and how does it work?

A decision tree is a graphical representation of possible solutions to a decision based on certain conditions. It is a flowchart-like structure, where each internal node represents a "test" on an attribute (e.g. whether a coin flip comes up heads or tails), each branch represents an outcome of the test, and each leaf node represents a class label (the decision taken after evaluating all the attributes along the path). It works by starting at the root of the tree and following the branches that match the attribute test results until a leaf node is reached; the class label of that leaf is the decision that is made.


It is used for both classification and regression problems. The decision tree algorithm repeatedly splits the data into smaller subsets, growing the tree incrementally as it goes. The final result is a tree with decision nodes and leaf nodes: each internal node corresponds to a test on an attribute, and each leaf node corresponds to a class label (or a predicted value, in regression).
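
As a quick illustration, here is a minimal sketch that fits a small tree and prints its test/branch/leaf structure. It assumes scikit-learn, which is my choice of library rather than something the post prescribes:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree on the classic iris dataset
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)

# Each internal node tests one feature against a threshold;
# each leaf carries the predicted class label.
print(export_text(clf, feature_names=list(iris.feature_names)))
print(clf.predict(iris.data[:3]))   # class labels for the first three samples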

Q2. What are the advantages and disadvantages of using decision trees?

Advantages of using decision trees include:

  • Easy to understand and interpret: Decision trees are easy to understand and interpret as they mimic the way humans take decisions.

  • Handling missing values: Some decision tree implementations (for example, CART with surrogate splits) can handle missing values directly and still maintain good accuracy.

  • Handling categorical variables: Decision trees can handle both categorical and numerical variables, making them a versatile algorithm.

  • Handling non-linear relationships: Decision trees are able to capture non-linear relationships in the data.

  • Fast training time: Decision trees are relatively fast to train, making them suitable for large datasets.

Disadvantages of using decision trees include:

  • Overfitting: Decision trees can easily overfit the data, especially when the tree is deep and has many branches.

  • Instability: Decision trees can be sensitive to small changes in the data, leading to instability in the tree structure.

  • Bias towards high cardinality categorical features: Decision trees may give more importance to features with more categories, leading to bias in the model.

  • Poor generalization: Decision trees can have poor generalization performance as they may not capture the underlying patterns in the data.

  • Prone to high variance: Decision trees are prone to high variance, leading to overfitting in the model.



Q3. How can you handle missing values in the decision tree algorithm?

Missing values can be handled in a decision tree algorithm by imputation, where missing values are replaced with a specific value such as the mean or median of the feature (or the most frequent category for categorical features). Another approach, used by CART, is surrogate splits: when an observation is missing the value of the primary split variable, the tree falls back on a split over a correlated variable that best mimics the primary split to decide which branch the observation follows. Some implementations instead add an indicator variable that flags missingness, or treat "missing" as its own category.
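
A minimal sketch of the imputation route, assuming scikit-learn (a library choice of mine, not something the post requires):

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier

# Toy feature matrix with missing entries (np.nan) and binary labels
X = np.array([[1.0, np.nan], [2.0, 3.0], [np.nan, 4.0], [5.0, 6.0]])
y = np.array([0, 1, 0, 1])

# Replace each missing value with the median of its column, then fit the tree
X_filled = SimpleImputer(strategy="median").fit_transform(X)
clf = DecisionTreeClassifier(random_state=0).fit(X_filled, y)
print(clf.predict(X_filled))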


Q4. Can you explain the difference between pre-pruning and post-pruning in the decision tree algorithm?

Pre-pruning is a technique where the decision tree is stopped from growing before it becomes fully grown by setting a limit on the maximum depth or number of leaf nodes. Post-pruning is a technique where the decision tree is grown to its full size and then pruned by removing branches that do not contribute to the overall accuracy of the tree.
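
In scikit-learn terms (an assumption on my part; the post does not name a library, and the parameter values below are arbitrary), the two styles can be sketched as:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Pre-pruning: cap the tree's growth while it is being built
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5).fit(X, y)

# Post-pruning: grow fully, then prune back with the cost-complexity parameter alpha
post_pruned = DecisionTreeClassifier(ccp_alpha=0.01).fit(X, y)

print(pre_pruned.get_depth(), post_pruned.get_depth())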



Q5. How can you implement cost-complexity pruning in a decision tree algorithm?


Cost-complexity pruning (also known as weakest-link pruning) introduces a complexity parameter, usually denoted "alpha", which controls the trade-off between the accuracy of the tree and the number of its leaves. The algorithm starts with a fully grown tree and repeatedly prunes the subtree whose removal increases the training error the least per leaf removed. This produces a nested sequence of smaller and smaller trees, each corresponding to a range of alpha values, and the final tree is typically chosen by evaluating the candidates on a validation set or with cross-validation.
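
A sketch of this procedure using scikit-learn's cost_complexity_pruning_path (assuming scikit-learn; the dataset and split below are arbitrary):

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Effective alphas at which subtrees get pruned from the fully grown tree
path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X_tr, y_tr)

# Refit one pruned tree per alpha and keep the one that scores best on held-out data
best = max(
    (DecisionTreeClassifier(random_state=0, ccp_alpha=a).fit(X_tr, y_tr).score(X_te, y_te), a)
    for a in path.ccp_alphas
)
print(best)   # (held-out accuracy, chosen alpha)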



Q6. How do you prevent overfitting in a decision tree?


Overfitting in a decision tree can be prevented by pruning (removing sub-trees that do not improve performance on held-out data), by limiting the tree's growth (for example, setting a maximum depth or a minimum number of samples required at a leaf node), or by using a random forest algorithm, which combines multiple decision trees to reduce overfitting.
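
One common way to choose these settings is to tune them by cross-validation. Here is a sketch with scikit-learn's GridSearchCV; the library and the parameter grid are my own assumptions:

from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Cross-validate over a small grid of growth-limiting parameters
grid = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 5, None], "min_samples_leaf": [1, 5, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)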



Q7. How does a decision tree handle continuous variables and categorical variables?

Continuous variables are handled by creating a threshold value at each internal node, and then splitting the data into two groups: those that are greater than or equal to the threshold, and those that are less than the threshold. The algorithm then determines the best threshold by calculating the information gain for each possible threshold.


Categorical variables can be handled by creating a separate branch for each category at an internal node (as ID3 and C4.5 do), or by grouping the categories into two subsets to form a binary split (as CART does). In either case, the algorithm evaluates the candidate splits with a criterion such as information gain or Gini impurity and keeps the best one.
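
To make the threshold search concrete, here is an illustrative sketch (my own code, using NumPy, not anything from the post) that scans candidate thresholds for a single continuous feature and keeps the one with the highest information gain:

import numpy as np

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_threshold(x, y):
    # Candidate thresholds are the midpoints between consecutive distinct values
    values = np.unique(x)
    best_gain, best_t = -1.0, None
    for t in (values[:-1] + values[1:]) / 2:
        left, right = y[x < t], y[x >= t]
        gain = entropy(y) - len(left) / len(y) * entropy(left) - len(right) / len(y) * entropy(right)
        if gain > best_gain:
            best_gain, best_t = gain, t
    return best_t, best_gain

x = np.array([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
y = np.array([0, 0, 0, 1, 1, 1])
print(best_threshold(x, y))   # the split at 6.5 separates the classes perfectly (gain = 1.0)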



Q8. What are the problems with decision trees that are solved by random forests?


One of the main problems with decision trees is that they can easily overfit the training data. This means that the tree becomes too complex and captures the noise or random variations in the training data, resulting in poor performance on new unseen data.


Random forests are an ensemble method that combines multiple decision trees to reduce overfitting and improve the overall accuracy of the model. They work by aggregating the predictions of many trees (averaging for regression, majority voting for classification), which makes the predictions more robust. By training each tree on a random bootstrap sample of the data and considering only a random subset of features at each split, a random forest decorrelates the trees and reduces the variance of the combined model.
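
As a quick sketch (assuming scikit-learn; the dataset and settings are arbitrary choices of mine), comparing a single tree against a forest of decorrelated trees often shows the variance reduction directly:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

single_tree = DecisionTreeClassifier(random_state=0)
# Bootstrap samples plus a random subset of features at each split
forest = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)

print(cross_val_score(single_tree, X, y, cv=5).mean())
print(cross_val_score(forest, X, y, cv=5).mean())   # usually the higher of the two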



Q9. Explain gini impurity and information gain.


Gini impurity measures the probability of misclassifying a randomly chosen element from a set if it were labeled according to the class distribution of that set. It is defined as:

$Gini = 1 - \sum_{i=1}^{n} p_i^2$

Where $p_i$ is the probability of class i in the set, and n is the number of classes. Gini impurity is 0 for a pure set, in which all elements belong to the same class, and increases toward its maximum of $1 - 1/n$ as the elements become evenly spread across multiple classes.

Information gain is a measure of the reduction in the entropy of the target variable after a split. It is calculated as the difference between the entropy before the split and the weighted average of the entropies of the resulting subsets. Information gain ranges between 0 (the split reduces no entropy) and the entropy of the parent node (the split produces perfectly pure children). The entropy of a set is defined as:


$Entropy = -\sum_{i=1}^{n} p_i log_2(p_i)$

Where $p_i$ is the probability of class i in the set, and n is the number of classes. The information gain is then calculated as:

$Information\ Gain = Entropy(parent) - \sum_{i=1}^{c} \frac{n_i}{n}\, Entropy(child_i)$

Where c is the number of children, n is the total number of samples in the parent node, $n_i$ is the number of samples in child i, Entropy(parent) is the entropy of the parent node before the split, and Entropy(child_i) is the entropy of child i after the split.
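
These formulas are easy to verify numerically; the following sketch (my own NumPy code, not from the post) computes Gini impurity, entropy, and the information gain of a candidate split:

import numpy as np

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(parent, children):
    n = len(parent)
    weighted = sum(len(c) / n * entropy(c) for c in children)
    return entropy(parent) - weighted

parent = np.array([0, 0, 0, 1, 1, 1])
left, right = np.array([0, 0, 0]), np.array([1, 1, 1])
print(gini(parent))                              # 0.5
print(entropy(parent))                           # 1.0
print(information_gain(parent, [left, right]))   # 1.0, a perfect split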



Q10. How does a decision tree handle numerical features?


Decision trees handle numerical features in several ways; the most common are:

  • Binning: This method involves converting a continuous variable into a categorical variable by dividing the range of the variable into a fixed number of bins. Each value of the variable is then assigned to a bin, and the variable is treated as a categorical variable in the decision tree algorithm (a short sketch follows this list).

  • Thresholding: This method is similar to binning, but instead of dividing the range of the variable into bins, a threshold value is chosen and the variable is split into two categories based on whether the variable's value is above or below the threshold. This creates a binary split.
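
Here is the promised binning sketch, using NumPy's digitize (the bin edges below are arbitrary, illustrative values of mine):

import numpy as np

ages = np.array([3, 17, 25, 42, 68, 90])
bin_edges = [18, 40, 65]             # bins: under 18, 18-39, 40-64, 65 and over
print(np.digitize(ages, bin_edges))  # [0 0 1 2 3 3]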


Q11. What is the impact of outliers on decision trees?


Outliers generally do not have a significant impact on the performance of decision trees. Decision trees are a type of algorithm that is not sensitive to the presence of outliers because the splits in the tree are determined by the majority of the data points in each partition rather than individual data points. This means that even if a few data points are outliers, they are unlikely to affect the overall partitioning of the data.

Also, because decision trees are based on recursive partitioning of the feature space, an outlier point will likely end up in a leaf node by itself or with only a few other points. This means it does not have a strong effect on the splits higher up the tree.

However, it's important to note that outliers can affect the performance of decision tree based models when the tree is built on a small dataset, in which case a few outliers can make up a meaningful fraction of the data.



Q12. Can outliers impact the performance of regression decision trees if outliers are in the target variable?


Outliers in the target variable can have a significant impact on the performance of regression decision trees. Regression decision trees are used to predict a continuous target variable, and outliers in the target variable can skew the predictions made by the tree.

When a regression tree is built, it splits the data into partitions based on the values of the input features and then fits a constant value to each partition. With the usual squared-error criterion, this constant is the average of the target variable in that partition. If there is an outlier in the target variable, it will pull that average toward itself, skewing the leaf's prediction and potentially leading to poor predictions.

To mitigate the impact of outliers in the target variable, one can use a more robust loss function (for example, mean absolute error instead of mean squared error), cap or transform extreme target values, or switch to robust regression techniques such as RANSAC or the Theil-Sen estimator.

Additionally, it's important to keep in mind that outliers should be identified and handled appropriately before building the model, by either removing or transforming them.
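
One tree-specific option worth knowing, sketched here as my own addition using scikit-learn, is an absolute-error split criterion (available in recent scikit-learn versions), whose leaf predictions are medians rather than means and are therefore less pulled around by extreme target values:

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 3 * X.ravel() + rng.normal(size=200)
y[:5] += 100   # inject a few extreme outliers into the target

mse_tree = DecisionTreeRegressor(max_depth=3, criterion="squared_error").fit(X, y)
mae_tree = DecisionTreeRegressor(max_depth=3, criterion="absolute_error").fit(X, y)
# The squared-error tree's leaf means are dragged toward the outliers;
# the absolute-error tree's leaf medians are far less affected.
print(mse_tree.predict([[5.0]]), mae_tree.predict([[5.0]]))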



Q13. What is a surrogate split in decision trees?


A surrogate split in decision trees is a backup split on a variable other than the primary splitting variable, chosen because it mimics the primary split as closely as possible. Surrogate splits are used mainly to handle missing values in CART-style trees.

The main idea behind surrogate splits is that when another variable is strongly associated with the primary split, it can act as a proxy: if an observation is missing the value of the primary variable, the surrogate variable decides which branch the observation follows, instead of the observation being discarded or its value imputed.

In practice, surrogate splits are found at each node by searching for the split on another variable that agrees most closely with the primary split, i.e. the one that sends the largest share of observations to the same side. The surrogates are ranked by this agreement and tried in order whenever the primary value is missing.

Surrogate splits are implemented in CART-style decision tree algorithms. They allow the tree to use all of the training data and to make predictions for observations with missing values, but they increase the cost of training, since additional candidate splits must be evaluated and stored at every node.



Q14. What are the differences between ID3, C4.5 and CART?


ID3, C4.5, and CART are all decision tree algorithms; ID3 and C4.5 are used for classification, while CART supports both classification and regression. They all use a top-down, greedy approach to build the tree by recursively splitting the data on the attribute that best separates the target. However, there are some key differences between these algorithms:

  • ID3 (Iterative Dichotomizer 3) was developed by Ross Quinlan in 1986. It uses information gain as the splitting criterion and can only handle categorical variables; it has no built-in handling of continuous values, missing data, or pruning.

  • C4.5 is an extension of ID3 that was developed by Ross Quinlan in 1993. It splits on the gain ratio, a variant of information gain that penalizes splits which create many branches, and it can handle both categorical and continuous variables as well as missing values. C4.5 also uses pruning to remove branches that do not improve the accuracy of the tree.

  • CART (Classification and Regression Trees) is another decision tree algorithm that was introduced by Leo Breiman, Jerome Friedman, Richard Olshen, and Charles Stone in 1984. It uses the Gini impurity as the splitting criterion, which measures the probability of a random sample from the dataset being classified incorrectly. It can handle both categorical and continuous variables, but unlike C4.5, it creates only binary splits, meaning it can only split the data into two subsets.

In summary, ID3 uses information gain, handles only categorical variables, and does not prune. C4.5 extends ID3 with support for continuous variables and missing values, uses the gain ratio as its splitting criterion, and prunes the grown tree to improve generalization. CART uses Gini impurity for classification (and squared error for regression), creates only binary splits, and supports both classification and regression tasks.



Q15. Explain and write pseudo code for ID3 algorithm.


ID3 (Iterative Dichotomizer 3) is a decision tree algorithm developed by Ross Quinlan in 1986. It uses information gain as the splitting criterion and can only handle categorical variables.

The algorithm starts with the entire dataset and recursively splits the dataset into subsets based on the attribute that provides the most information gain. At each node of the tree, a decision is made based on the attribute with the highest information gain. The process is repeated until the stopping criterion is met, such as the maximum depth of the tree, the minimum number of samples in a leaf, or the minimum information gain.

The algorithm creates a leaf node when all the instances in a node belong to the same class or when the stopping criterion is met.

The pseudo-code for the ID3 algorithm is as follows:

function ID3(data, target, features):
    if stopping_criterion is met:        # e.g. all samples share one class, or features is empty
        return leaf_node(majority_class(data, target))
    else:
        best_feature = select_feature(data, target, features)   # feature with the highest information gain
        tree = new_decision_node(best_feature)
        for each value of best_feature:
            subset = split_data(data, best_feature, value)
            child = ID3(subset, target, features - {best_feature})
            add_child(tree, value, child)
    return tree

Where "data" is the training data, "target" is the target variable and "features" is the set of features used for training. The "select_feature" function selects the feature that provides the most information gain. "split_data" function splits the data based on the selected feature and "add_child"



Q16. Explain the time and space complexity of training and testing in the case of a Decision Tree.


The time complexity of training a decision tree is O(nm log(n)), where n is the number of samples and m is the number of features. At each node, the algorithm evaluates candidate splits for every feature, which involves sorting the feature values and takes O(n log(n)) per feature; doing this for all m features gives O(m) * O(n log(n)) = O(nm log(n)).


The space complexity of training a decision tree is O(n) in the worst case, which occurs when every sample ends up in its own leaf. A binary tree with n leaves has n-1 internal nodes, and all of these nodes need to be stored in memory.


The time complexity of making a prediction is O(d), where d is the depth of the tree, because only a single root-to-leaf path is traversed. For a reasonably balanced tree this is O(log(n)); in the worst case of a degenerate, chain-like tree it can be O(n).

The space complexity of testing a decision tree is the cost of keeping the tree itself in memory, which is O(n) in the worst case of one leaf per training sample.


In conclusion, decision trees are a powerful tool for both classification and regression problems, and understanding the underlying concepts and algorithms can give you a significant advantage in your machine learning and data science interviews. We've covered some of the most common interview questions related to decision trees, including the Gini index, entropy, and information gain. We hope that this blog has helped you to become more familiar with these concepts and that you feel better prepared to tackle any decision tree-related questions that come your way. Remember that decision trees are widely used in industry and understanding how they work and how they can be applied to solve problems is a valuable skill.

