Algorithms For ML
1. For the input, make the following assumptions: training data T consisting of feature vectors and corresponding class labels; max_depth, the maximum depth of the tree; min_samples_split, the minimum number of samples required to split an internal node; and min_samples_leaf, the minimum number of samples required to be at a leaf node.
2. Create a Decision Tree using the training data T with the specified hyperparameters (max_depth,
min_samples_split, and min_samples_leaf).
3. Initialize a Decision Tree model D.
4. Train the Decision Tree model D:
a. Start by considering the entire training data T at the root node.
b. At each internal node of the tree, evaluate potential splits for each feature by computing a splitting criterion, e.g., Gini impurity or information gain.
c. Select the feature and split point that optimizes the splitting criterion (minimum impurity or maximum information gain).
d. Create child nodes for the selected feature and split point.
e. Recursively repeat steps b to d for each child node until a stopping condition is met: the node reaches max_depth, it contains fewer than min_samples_split samples, a split would leave a child with fewer than min_samples_leaf samples, or the node is pure.
f. Assign the class label to each leaf node based on the majority class of the samples in that leaf node.
5. Return the trained Decision Tree model D.
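As a concrete illustration of the steps above, here is a minimal sketch that fits a tree with scikit-learn's DecisionTreeClassifier, whose max_depth, min_samples_split, and min_samples_leaf parameters correspond to the hyperparameters named in step 1; the toy arrays X and y stand in for the training data T.

```python
# Minimal sketch: training a decision tree with the hyperparameters described above.
# X and y are toy stand-ins for the training data T (feature vectors and class labels).
from sklearn.tree import DecisionTreeClassifier

X = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]   # toy feature vectors
y = [0, 0, 1, 1]                                        # corresponding class labels

# criterion="gini" selects Gini impurity as the splitting criterion (step 4b);
# the three hyperparameters implement the stopping conditions in step 4e.
D = DecisionTreeClassifier(
    criterion="gini",
    max_depth=3,
    min_samples_split=2,
    min_samples_leaf=1,
)
D.fit(X, y)                      # steps 4a-4f: grow the tree on T
print(D.predict([[6.0, 3.0]]))   # each leaf predicts its majority class (step 4f)
```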
1. Make the following assumptions: Training data T consisting of feature vectors and corresponding class
labels, and a set of class labels C.
2. Calculate the prior probabilities P(C_i) for each class C_i in C.
3. Count the number of training examples in T belonging to each class C_i.
4. Divide the count by the total number of training examples to compute P(C_i).
5. For each feature f_j in the feature vector:
a. Calculate the class-conditional probability P(f_j | C_i) for each class C_i:
b. Count the number of training examples in class C_i where the feature f_j occurs.
c. Divide the count by the total number of training examples in class C_i to compute
P(f_j | C_i).
6. Store the calculated prior probabilities P(C_i) and class-conditional probabilities P(f_j | C_i).
7. For a given input feature vector x:
a. For each class C_i in C:
b. Calculate the posterior probability P(C_i | x) using Bayes' theorem:
c. P(C_i | x) = (P(C_i) * ∏_j P(f_j | C_i)) / P(x)
8. Store the posterior probability for each class.
9. Assign the class label C_i with the highest posterior probability as the predicted class for the input
feature vector x.
10. Return the trained Naive Bayes classifier, which includes the prior probabilities P(C_i) and class-
conditional probabilities P(f_j | C_i) for each class C_i in C.
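The counting steps above can be sketched directly; the following minimal, from-scratch example assumes discrete feature values and uses illustrative names (the toy dataset, the conditional() helper, and predict() are not part of the algorithm statement, and no smoothing is applied to zero counts).

```python
# Minimal from-scratch sketch of the Naive Bayes steps above for categorical features.
from collections import Counter, defaultdict

# Toy training data T: (feature vector, class label) pairs.
T = [(("sunny", "hot"), "no"), (("sunny", "mild"), "no"),
     (("rainy", "mild"), "yes"), (("overcast", "hot"), "yes")]

# Steps 2-4: prior probabilities P(C_i).
class_counts = Counter(label for _, label in T)
priors = {c: n / len(T) for c, n in class_counts.items()}

# Step 5: class-conditional probabilities P(f_j | C_i) for each feature position j.
cond_counts = defaultdict(Counter)          # (class, j) -> counts of feature values
for features, label in T:
    for j, value in enumerate(features):
        cond_counts[(label, j)][value] += 1

def conditional(value, c, j):
    return cond_counts[(c, j)][value] / class_counts[c]

# Steps 7-9: posterior score (P(x) is the same for every class, so it is dropped).
def predict(x):
    scores = {}
    for c in priors:
        score = priors[c]
        for j, value in enumerate(x):
            score *= conditional(value, c, j)
        scores[c] = score
    return max(scores, key=scores.get)

print(predict(("rainy", "hot")))            # predicted class for a new feature vector
```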
1. For the input, make the following assumptions: Training data T (consisting of input features and
corresponding target labels), number of hidden layers L, number of neurons per hidden layer N,
learning rate α, and number of epochs E.
2. Initialize the weights and biases of each layer randomly.
3. For each epoch in 1 to E, and for each data point (x, y) in T:
a. Set the input layer values to the features of data point x.
b. For each layer l in 1 to L, compute the weighted sum, apply the activation function for layer l, and pass the activations on to the next layer.
c. Compute the error between the predicted output and the actual target y.
d. Update the weights and biases using gradient descent: for each layer l from L down to 1, compute the gradient of the loss with respect to the weights and biases of layer l, then update them using the gradient and the learning rate α.
4. Return the trained neural network model.
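A compact NumPy sketch of this training loop is shown below, under some illustrative assumptions not fixed by the steps above: a single hidden layer (L = 1), sigmoid activations, squared-error loss, and a tiny XOR-style dataset standing in for T.

```python
# Minimal NumPy sketch of the training loop above: one hidden layer, sigmoid
# activations, squared-error loss, per-sample gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)   # training features
Y = np.array([[0], [1], [1], [0]], dtype=float)               # corresponding targets
N, alpha, E = 4, 0.5, 5000          # neurons per hidden layer, learning rate, epochs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Step 2: random initialization of weights and biases for each layer.
W1, b1 = rng.normal(size=(2, N)), np.zeros(N)
W2, b2 = rng.normal(size=(N, 1)), np.zeros(1)

for epoch in range(E):                       # for each epoch in 1 to E
    for x, y in zip(X, Y):                   # for each data point (x, y) in T
        # Forward pass (steps 3a-3b): weighted sums + activations, layer by layer.
        h = sigmoid(x @ W1 + b1)
        y_hat = sigmoid(h @ W2 + b2)
        # Error (step 3c) and backward pass (step 3d): gradients via the chain rule.
        delta2 = (y_hat - y) * y_hat * (1 - y_hat)     # output-layer error signal
        delta1 = (delta2 @ W2.T) * h * (1 - h)         # hidden-layer error signal
        # Gradient-descent updates, from the output layer back to the input layer.
        W2 -= alpha * np.outer(h, delta2); b2 -= alpha * delta2
        W1 -= alpha * np.outer(x, delta1); b1 -= alpha * delta1

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))   # predictions after training
```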
3. Return the trained Random Forest classifier, which is an ensemble of the decision trees in the `forest` list.
When making predictions, each tree in the forest votes on the class label, and the majority class label is assigned
as the final prediction.
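The majority vote can be sketched as follows; it assumes `forest` is the list of already-trained trees from the preceding steps and that each tree exposes a predict(x) method returning a class label for a single sample (an illustrative assumption).

```python
# Sketch of the prediction step: majority vote over the trees in the `forest` list.
from collections import Counter

def forest_predict(forest, x):
    votes = [tree.predict(x) for tree in forest]   # each tree votes on the class label
    return Counter(votes).most_common(1)[0][0]     # majority class is the prediction
```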
w = ∑_i α_i y_i x_i
b = (1/|S|) ∑_{i∈S} (y_i − w^T x_i), where S is the set of support vectors.
6. Given a new input sample x, predict the class label:
y_pred = sign(w^T x + b)
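A short NumPy sketch of these closing formulas is given below; the dual coefficients alpha, the training arrays X and y (with labels in {-1, +1}), and the threshold used to pick the support-vector set S are illustrative assumptions, standing in for the output of an already-solved SVM.

```python
# Sketch of the SVM weight, bias, and prediction formulas above, given an
# already-solved dual: alpha (dual coefficients), X (training samples), y (labels).
import numpy as np

def svm_predict(alpha, X, y, x_new):
    w = (alpha * y) @ X                 # w = sum_i alpha_i y_i x_i
    S = np.where(alpha > 1e-8)[0]       # S: indices of the support vectors
    b = np.mean(y[S] - X[S] @ w)        # b = (1/|S|) sum_{i in S} (y_i - w^T x_i)
    return np.sign(w @ x_new + b)       # y_pred = sign(w^T x + b)
```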