
# TREES

In computer science, a tree is a non-linear data structure that consists of nodes connected by edges. A tree has a root node, which is the topmost node in the hierarchy, and it can have any number of child nodes. Each node in the tree may have its own set of child nodes, creating a hierarchical structure that resembles a tree.

In a tree, the nodes are connected by edges, which represent the relationship between the nodes. The edges are directed, meaning that they point from parent nodes to child nodes. The edges also represent the path between nodes, which can be used to traverse the tree.

Trees are used to represent hierarchical relationships in data, such as the organizational structure of a company, the classification of species in biology, or the structure of a file system on a computer. They can be used to perform various operations, such as searching for a particular node, inserting or deleting nodes, and traversing the tree to perform some action on each node.

## BINARY TREES

A binary tree is a tree-like data structure where each node can have at most two children, referred to as the left child and the right child. A binary tree is composed of nodes, where each node contains a value and a reference to its left and right child nodes.

The topmost node in a binary tree is called the root, and each node that does not have any children is called a leaf. The maximum number of nodes at level l of a binary tree is 2^l (counting the root as level 0), so a binary tree of height h contains at most 2^(h+1) - 1 nodes in total.

Binary trees can be used to solve a variety of problems, such as searching, sorting, and parsing. One common type of binary tree is the binary search tree, which is a binary tree where the value of each node is greater than every value in its left subtree and less than every value in its right subtree.

Here is an example implementation of a binary tree node in Java:

```
public class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    public TreeNode(int val) {
        this.val = val;
    }
}
```

### PATTERNS

#### LEVEL ORDER TRAVERSAL

The Level Order Traversal pattern is used to traverse a binary tree in a breadth-first manner, i.e., level by level. It can be used to solve many problems, such as finding the minimum depth of a tree, finding the maximum depth of a tree, collecting the node values level by level, etc.

The algorithm for level order traversal is as follows:

1. Create a queue and add the root of the binary tree to it.
2. While the queue is not empty, do the following:
    1. Dequeue the next node from the queue.
    2. Process the node.
    3. Enqueue the left child of the node (if it exists).
    4. Enqueue the right child of the node (if it exists).

Here is an example implementation of the Level Order Traversal pattern in Java:

```
public List<List<Integer>> levelOrder(TreeNode root) {
    List<List<Integer>> result = new ArrayList<>();
    if (root == null) {
        return result;
    }
    Queue<TreeNode> queue = new LinkedList<>();
    queue.offer(root);
    while (!queue.isEmpty()) {
        int levelSize = queue.size();
        List<Integer> currentLevel = new ArrayList<>(levelSize);
        for (int i = 0; i < levelSize; i++) {
            TreeNode currentNode = queue.poll();
            currentLevel.add(currentNode.val);
            if (currentNode.left != null) {
                queue.offer(currentNode.left);
            }
            if (currentNode.right != null) {
                queue.offer(currentNode.right);
            }
        }
        result.add(currentLevel);
    }
    return result;
}
```

This implementation uses a queue to keep track of the nodes that need to be processed. We start by adding the root of the binary tree to the queue, and then we loop through the queue until it is empty. For each node that we dequeue from the queue, we process the node by adding its value to the current level list. We then enqueue its left and right children (if they exist) so that they can be processed in the next level. Finally, we add the current level list to the result list, and then continue with the next level.
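The same breadth-first skeleton solves the minimum-depth problem mentioned above: the first leaf the traversal reaches necessarily sits on the shallowest level. Here is a minimal, self-contained sketch (the class and method names are illustrative, not part of the code above):

```
import java.util.LinkedList;
import java.util.Queue;

class MinDepthExample {
    static class TreeNode {
        int val;
        TreeNode left;
        TreeNode right;
        TreeNode(int val) { this.val = val; }
    }

    // Returns the number of nodes on the shortest root-to-leaf path.
    static int minDepth(TreeNode root) {
        if (root == null) {
            return 0;
        }
        Queue<TreeNode> queue = new LinkedList<>();
        queue.offer(root);
        int depth = 1;
        while (!queue.isEmpty()) {
            int levelSize = queue.size();
            for (int i = 0; i < levelSize; i++) {
                TreeNode node = queue.poll();
                // The first leaf encountered by BFS is on the shallowest level.
                if (node.left == null && node.right == null) {
                    return depth;
                }
                if (node.left != null) {
                    queue.offer(node.left);
                }
                if (node.right != null) {
                    queue.offer(node.right);
                }
            }
            depth++;
        }
        return depth;
    }
}
```

Because the search stops at the first leaf, skewed trees with a shallow leaf are handled without visiting the deeper levels at all.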

#### IN-ORDER TRAVERSAL WITH PREVIOUS NODE

The in-order traversal with the previous node is a pattern that involves traversing a binary tree in-order while keeping track of the previous node that was processed. It is often used when we need to compare adjacent nodes in a binary search tree or find the kth smallest element in the tree.

The basic idea is to recursively traverse the left subtree, process the current node, and then recursively traverse the right subtree. While processing a node, we compare it with the previous node to perform some operation. For example, we can find the difference between adjacent nodes, check if the tree is a binary search tree, or find the kth smallest element.

Here's an implementation of this pattern in Java:

```
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    public TreeNode(int val) {
        this.val = val;
    }
}

public class InorderTraversalWithPrevNode {
    private TreeNode prev;

    public void inorder(TreeNode root) {
        if (root == null) {
            return;
        }
        inorder(root.left);
        if (prev != null) {
            // Compare the current node with the previous node and do some operation
            System.out.println(root.val - prev.val);
        }
        prev = root;
        inorder(root.right);
    }
}
```

In this example, the inorder method takes the root of a binary tree as input and traverses the tree in-order while keeping track of the previous node in the prev variable. When processing a node, it compares it with the previous node (if it exists) and performs some operation. In this case, it prints the difference between adjacent nodes.
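One of the classic uses of this pattern is validating a binary search tree: an in-order traversal of a valid BST visits values in strictly ascending order, so it suffices to check each node against the previously visited one. A minimal self-contained sketch (class and method names here are illustrative):

```
class ValidateBstExample {
    static class TreeNode {
        int val;
        TreeNode left;
        TreeNode right;
        TreeNode(int val) { this.val = val; }
    }

    private TreeNode prev;

    // In-order traversal of a BST yields values in ascending order,
    // so the tree is valid iff every visited value exceeds the previous one.
    boolean isValidBst(TreeNode root) {
        if (root == null) {
            return true;
        }
        if (!isValidBst(root.left)) {
            return false;
        }
        if (prev != null && prev.val >= root.val) {
            return false;  // out of order: not a BST
        }
        prev = root;
        return isValidBst(root.right);
    }
}
```

Since prev is instance state, a fresh ValidateBstExample object should be used for each validation.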

#### PRE ORDER TRAVERSAL

The pre-order traversal is a pattern used to visit all nodes of a binary tree. This traversal algorithm starts by visiting the root node, then recursively visiting the left subtree, and finally the right subtree.

Here is an implementation of pre-order traversal in Java:

```
public class Node {
    int val;
    Node left;
    Node right;

    Node(int val) {
        this.val = val;
        this.left = null;
        this.right = null;
    }
}

public void preOrderTraversal(Node root) {
    if (root == null) {
        return;
    }
    System.out.println(root.val);   // visit the root node
    preOrderTraversal(root.left);   // recursively visit the left subtree
    preOrderTraversal(root.right);  // recursively visit the right subtree
}
```

In this implementation, we define a Node class to represent each node in the binary tree. The preOrderTraversal method takes the root node as an argument and recursively visits each node in pre-order. We first check if the root node is null, and if so, we simply return. If the root node is not null, we visit the root node by printing its value to the console, and then recursively visit the left subtree and the right subtree.
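A common application of the root-first visiting order is serializing a tree: because the root is emitted before its subtrees, the output can later be rebuilt front-to-back. Here is a small, self-contained sketch (the "#" marker and class name are illustrative choices):

```
class PreorderSerializeExample {
    static class Node {
        int val;
        Node left;
        Node right;
        Node(int val) { this.val = val; }
    }

    // Pre-order emits the root before its subtrees, so the serialized
    // string can be reconstructed front-to-back; "#" marks a null child.
    static String serialize(Node root) {
        if (root == null) {
            return "#";
        }
        return root.val + "," + serialize(root.left) + "," + serialize(root.right);
    }
}
```

For a root 1 with children 2 and 3, this produces "1,2,#,#,3,#,#".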

#### POST ORDER TRAVERSAL

The post-order traversal pattern is a popular tree traversal pattern that is often used to solve tree problems. In this pattern, the left and right child nodes of a node are visited before the node itself is visited. The algorithm continues traversing the tree until it has visited all of the nodes in the tree.

The post-order traversal pattern is commonly used in tree problems that require processing the nodes of the tree in a bottom-up manner. One example of a problem that can be solved using the post-order traversal pattern is to compute the height of a binary tree.

Here is an example implementation of the post-order traversal pattern in Java:

```
class TreeNode {
    int val;
    TreeNode left;
    TreeNode right;

    TreeNode(int x) { val = x; }
}

public void postOrderTraversal(TreeNode node) {
    if (node == null) {
        return;
    }
    postOrderTraversal(node.left);
    postOrderTraversal(node.right);
    System.out.print(node.val + " ");
}
```

This implementation performs a post-order traversal of a binary tree by recursively visiting the left and right child nodes of each node before visiting the node itself. The System.out.print(node.val + " ") statement prints the value of the node to the console.
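The height computation mentioned earlier is a direct instance of this bottom-up pattern: each subtree's height is computed first, and the results are combined at the node. A minimal self-contained sketch (names are illustrative):

```
class TreeHeightExample {
    static class TreeNode {
        int val;
        TreeNode left;
        TreeNode right;
        TreeNode(int val) { this.val = val; }
    }

    // Post-order style: compute both subtree heights first,
    // then combine them at the current node (bottom-up).
    static int height(TreeNode node) {
        if (node == null) {
            return 0;  // height measured in nodes; an empty tree has height 0
        }
        int leftHeight = height(node.left);
        int rightHeight = height(node.right);
        return 1 + Math.max(leftHeight, rightHeight);
    }
}
```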

#### MORRIS TRAVERSAL

The Morris traversal pattern is a space-optimized way to traverse a binary tree without using additional space, other than the output list. It allows us to visit the nodes of the tree in the inorder or preorder sequence (a postorder variant exists but is more involved), without using a stack or a recursive call.

The idea behind the Morris traversal is to modify the structure of the tree during the traversal. We connect a thread from the rightmost node of the left subtree to its inorder successor, which allows us to come back to the current node after visiting its left subtree. This technique replaces the null right pointer of the current node, which is not being used, with the pointer to the inorder successor.

The Morris traversal is implemented in two steps:

1. Initialize the current node as the root of the tree.
2. While the current node is not null, repeat the following steps:
    1. If the current node has no left child, add the current node to the output list and move to its right child.
    2. If the current node has a left child, find the rightmost node of the left subtree. If its right pointer is null, make it point to the current node (so we can come back later) and move to the left child.
    3. If that rightmost node already points back to the current node, the left subtree has been fully visited. Revert the change (set the right pointer back to null), add the current node to the output list, and move to the right child.

Here is an example implementation of the Morris traversal in Java for the inorder sequence:

```
public List<Integer> inorderTraversal(TreeNode root) {
    List<Integer> res = new ArrayList<>();
    TreeNode cur = root;
    while (cur != null) {
        if (cur.left == null) {
            res.add(cur.val);
            cur = cur.right;
        } else {
            TreeNode prev = cur.left;
            while (prev.right != null && prev.right != cur) {
                prev = prev.right;
            }
            if (prev.right == null) {
                prev.right = cur;
                cur = cur.left;
            } else {
                prev.right = null;
                res.add(cur.val);
                cur = cur.right;
            }
        }
    }
    return res;
}
```

This implementation has a time complexity of O(n) and an auxiliary space complexity of O(1) (excluding the output list), because the threads reuse the tree's existing null pointers instead of a stack.
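The same threading idea yields the pre-order sequence with one change: a node that has a left child is emitted when the thread is created, before descending, rather than when the thread is removed. A self-contained sketch (the class name and TreeNode definition are repeated here for completeness):

```
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class MorrisPreorderExample {
    static class TreeNode {
        int val;
        TreeNode left;
        TreeNode right;
        TreeNode(int val) { this.val = val; }
    }

    static List<Integer> preorderTraversal(TreeNode root) {
        List<Integer> res = new ArrayList<>();
        TreeNode cur = root;
        while (cur != null) {
            if (cur.left == null) {
                res.add(cur.val);
                cur = cur.right;
            } else {
                TreeNode prev = cur.left;
                while (prev.right != null && prev.right != cur) {
                    prev = prev.right;
                }
                if (prev.right == null) {
                    res.add(cur.val);   // emit before descending: pre-order
                    prev.right = cur;   // thread back to the current node
                    cur = cur.left;
                } else {
                    prev.right = null;  // remove the thread; subtree is done
                    cur = cur.right;
                }
            }
        }
        return res;
    }
}
```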

#### PATH TRACING

The Path tracing pattern in binary trees is a technique that is used to traverse and search for nodes or paths in a binary tree. The pattern involves iterating through the binary tree in a recursive manner, building up the path as the traversal progresses.

The path tracing pattern can be used to find a specific path in a binary tree, or to collect data from the tree along a certain path. It can be used to solve a wide range of problems, from finding the maximum path sum in a binary tree to determining whether a binary tree is a valid binary search tree.

To implement the path tracing pattern, a recursive function is used to traverse the binary tree. The function takes in the current node being visited and a list that represents the current path. On entering a node, the node's value is appended to the path. If the node is a leaf, the function can perform any necessary calculations or actions based on the path that has been built up.

If the node is not a leaf, the function recursively visits its left child and then its right child, with the node's value still on the path. Once both recursive calls return, the node's value is removed from the end of the path, so that the list always mirrors the path from the root to the node currently being visited.

This discipline of adding a node on entry and removing it after its subtrees have been explored is what allows the path tracing pattern to be used to find specific paths or to collect data along a path.

Here is an example implementation of the path tracing pattern in Java:

```
public void findPath(TreeNode root, List<Integer> path) {
    if (root == null) {
        return;
    }

    // Add the current node to the path
    path.add(root.val);

    // If the current node is a leaf, do something with the path
    if (root.left == null && root.right == null) {
        // Do something with the path
        System.out.println(path);
    }

    // Recursively visit the left and right children
    findPath(root.left, path);
    findPath(root.right, path);

    // Remove the current node from the path
    path.remove(path.size() - 1);
}
```

In this example, the findPath function takes in a TreeNode object representing the current node being visited and a List of Integer objects representing the path that has been built up so far. The function first checks to see if the current node is null and returns if it is.

The function then adds the value of the current node to the path and checks if the current node is a leaf node. If it is, some action can be performed based on the path that has been built up. In this case, the path is printed to the console.

The function then makes recursive calls to visit the left and right children of the current node. After both calls return, the current node's value is removed from the end of the path, restoring the list to the state it had when the function was entered.

By using the path tracing pattern in a binary tree, it is possible to find specific paths or to collect data along a path.

#### BACKTRACKING

Backtracking is a general algorithm for finding all (or some) solutions to some computational problems, notably constraint satisfaction problems, such as crosswords, puzzles, and mathematical problems. It incrementally builds candidates to the solutions, and abandons a candidate ("backtracks") as soon as it determines that the candidate cannot possibly be completed to a valid solution.

In the context of binary trees, backtracking is often used to find paths in the tree that meet certain criteria. For example, if we want to find all paths from the root of a binary tree to the leaves that have a certain sum, we can use backtracking to try out different paths and backtrack when we find one that does not meet the criteria.

The basic idea of backtracking is to use recursion to explore all possible paths through the tree. At each node, we check whether it meets our criteria. If it does, we add it to the current path, and recursively explore its children. If it does not, we backtrack to the previous node and try another child. Once we have explored all possible paths through the tree, we have found all the solutions that meet our criteria.

Here is an example implementation of the backtracking pattern in Java for finding all paths from the root of a binary tree to the leaves that have a certain sum:

```
public List<List<Integer>> findPaths(TreeNode root, int targetSum) {
    List<List<Integer>> result = new ArrayList<>();
    List<Integer> path = new ArrayList<>();
    findPathsHelper(root, targetSum, path, result);
    return result;
}

private void findPathsHelper(TreeNode node, int targetSum, List<Integer> path, List<List<Integer>> result) {
    if (node == null) {
        return;
    }
    path.add(node.val);
    if (node.left == null && node.right == null && node.val == targetSum) {
        result.add(new ArrayList<>(path));
    } else {
        findPathsHelper(node.left, targetSum - node.val, path, result);
        findPathsHelper(node.right, targetSum - node.val, path, result);
    }
    path.remove(path.size() - 1);
}
```

In this example, we use a helper function to do the actual backtracking. The findPaths() function sets up the initial state of the recursion, and then calls the helper function with the root node, an empty path, and an empty result list. The helper function checks whether the current node is null, and if not, adds its value to the current path. If the current node is a leaf node and its value equals the remaining target sum, we add a copy of the current path to the result list. Otherwise, we recursively explore the left and right children of the current node, subtracting the value of the current node from the target sum. After both subtrees have been explored, we remove the last element from the current path and return to the previous node.
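To make the pattern concrete, here is a self-contained usage sketch: the method above inlined into a small class (with static methods for convenience) and run against an illustrative tree:

```
import java.util.ArrayList;
import java.util.List;

class PathSumExample {
    static class TreeNode {
        int val;
        TreeNode left;
        TreeNode right;
        TreeNode(int val) { this.val = val; }
    }

    static List<List<Integer>> findPaths(TreeNode root, int targetSum) {
        List<List<Integer>> result = new ArrayList<>();
        findPathsHelper(root, targetSum, new ArrayList<>(), result);
        return result;
    }

    private static void findPathsHelper(TreeNode node, int targetSum,
                                        List<Integer> path, List<List<Integer>> result) {
        if (node == null) {
            return;
        }
        path.add(node.val);
        if (node.left == null && node.right == null && node.val == targetSum) {
            result.add(new ArrayList<>(path));  // copy: the path keeps mutating
        } else {
            findPathsHelper(node.left, targetSum - node.val, path, result);
            findPathsHelper(node.right, targetSum - node.val, path, result);
        }
        path.remove(path.size() - 1);  // backtrack
    }
}
```

For a root 5 with a left leaf 4 and a right child 3 that has a left leaf 1, a target sum of 9 yields the two paths [5, 4] and [5, 3, 1].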


## GENERIC TREES

A generic tree is a tree-like data structure in which each node can have an arbitrary number of child nodes. In a generic tree, each node can have zero or more children, and there is no restriction on the number of children that a node can have.

Generic trees are often used to represent hierarchical data, such as the file system on a computer or the organization chart of a company. They can also be used to represent the structure of an HTML document or the nesting of tags in XML.

Here's an example implementation of a generic tree node in Java:

```
public class TreeNode<T> {
    T data;
    List<TreeNode<T>> children;

    public TreeNode(T data) {
        this.data = data;
        children = new ArrayList<>();
    }

    public void addChild(TreeNode<T> child) {
        children.add(child);
    }
}
```

This code defines a TreeNode class with a generic type parameter T, which represents the type of data that is stored in the node. The class has a data field that stores the node's data and a children field that is a list of child nodes.

The addChild() method is used to add a new child node to the current node's list of children. This implementation can be used to construct and manipulate generic trees in Java.

### PATTERNS

The same patterns as for the binary tree apply; the only difference is that, instead of recursing on a fixed left and right child, each traversal iterates over the node's list of children.
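For instance, the level order traversal from the binary-tree section carries over by enqueuing every child instead of just left and right. A self-contained sketch (the TreeNode<T> class from above is reproduced in compact form; names are illustrative):

```
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Queue;

class GenericTreeTraversalExample {
    static class TreeNode<T> {
        T data;
        List<TreeNode<T>> children = new ArrayList<>();
        TreeNode(T data) { this.data = data; }
        void addChild(TreeNode<T> child) { children.add(child); }
    }

    // Breadth-first traversal: enqueue all children instead of left/right.
    static <T> List<T> levelOrder(TreeNode<T> root) {
        List<T> result = new ArrayList<>();
        if (root == null) {
            return result;
        }
        Queue<TreeNode<T>> queue = new LinkedList<>();
        queue.offer(root);
        while (!queue.isEmpty()) {
            TreeNode<T> node = queue.poll();
            result.add(node.data);
            for (TreeNode<T> child : node.children) {
                queue.offer(child);
            }
        }
        return result;
    }
}
```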


## BINARY SEARCH TREES

A binary search tree (BST) is a binary tree where each node has a value, and for any node in the tree, all the values in its left subtree are less than the node's value, and all the values in its right subtree are greater than the node's value.

In a binary search tree, we can efficiently search for a value, insert a value, or delete a value, because the tree structure allows us to quickly narrow down the range of values that we need to search through. When searching for a value, we can start at the root node and compare the value we are searching for to the value at the current node. If the value is less than the current node, we continue the search in the left subtree, and if it is greater than the current node, we continue the search in the right subtree. We repeat this process until we find the value we are searching for, or we reach a null node, indicating that the value is not in the tree.

When inserting a value into a binary search tree, we first search for the correct position to insert the value. We start at the root node and compare the value we are inserting to the value at the current node. If the value is less than the current node, we continue the search in the left subtree, and if it is greater than the current node, we continue the search in the right subtree. We repeat this process until we reach a null node, at which point we create a new node with the value we are inserting and attach it to the parent node.

When deleting a value from a binary search tree, we first search for the node containing the value we want to delete. We then consider three cases:

• If the node has no children, we simply remove it from the tree.
• If the node has one child, we replace it with its child.
• If the node has two children, we find the successor (i.e., the smallest node in the right subtree) or the predecessor (i.e., the largest node in the left subtree) of the node we want to delete. We then replace the node we want to delete with the successor or predecessor, and recursively delete the successor or predecessor.

Binary search trees are commonly used in computer science because of their efficient search, insert, and delete operations. However, their performance can degrade if the tree becomes unbalanced, with one subtree much larger than the other. To avoid this, various balancing techniques have been developed, such as AVL trees, red-black trees, and B-trees.

Here is an example implementation of a binary search tree in Java:

```
public class BinarySearchTree<T extends Comparable<T>> {

    private Node root;

    private class Node {
        T value;
        Node left;
        Node right;

        public Node(T value) {
            this.value = value;
        }
    }

    public void insert(T value) {
        root = insert(root, value);
    }

    private Node insert(Node node, T value) {
        if (node == null) {
            return new Node(value);
        }

        if (value.compareTo(node.value) < 0) {
            node.left = insert(node.left, value);
        } else if (value.compareTo(node.value) > 0) {
            node.right = insert(node.right, value);
        }

        return node;
    }

    public boolean search(T value) {
        return search(root, value);
    }

    private boolean search(Node node, T value) {
        if (node == null) {
            return false;
        }

        if (value.compareTo(node.value) == 0) {
            return true;
        } else if (value.compareTo(node.value) < 0) {
            return search(node.left, value);
        } else {
            return search(node.right, value);
        }
    }

    public void delete(T value) {
        root = delete(root, value);
    }

    private Node delete(Node node, T value) {
        if (node == null) {
            return null;
        }

        if (value.compareTo(node.value) < 0) {
            node.left = delete(node.left, value);
        } else if (value.compareTo(node.value) > 0) {
            node.right = delete(node.right, value);
        } else {
            if (node.left == null) {
                return node.right;
            } else if (node.right == null) {
                return node.left;
            }

            Node minNode = findMinNode(node.right);
            node.value = minNode.value;
            node.right = delete(node.right, minNode.value);
        }

        return node;
    }

    private Node findMinNode(Node node) {
        while (node.left != null) {
            node = node.left;
        }

        return node;
    }
}
```

In this implementation, the BinarySearchTree class is a generic class that can store any type of object that implements the Comparable interface, which allows the tree to compare objects and maintain its order.

The tree is composed of Node objects, each of which has a value and left and right child nodes. The insert method inserts a new value into the tree in its correct position based on its comparison to the values of the existing nodes. The search method returns true if the value is found in the tree and false otherwise. The delete method removes a value from the tree and rearranges its child nodes to maintain the tree's order. The findMinNode method is used to find the minimum node in the right subtree when deleting a node with two children.

Note that this implementation is not self-balancing, so in some cases it may become unbalanced and lose its optimal performance.

### PATTERNS

Same as for the binary tree.


## BALANCED BINARY SEARCH TREES

Balanced search trees, also known as self-balancing binary search trees, are a special type of binary search tree where the height of the tree is always maintained to be close to log(n), where n is the number of nodes in the tree. This property ensures that the worst-case time complexity of most of the operations on the tree remains O(log n).

The most popular balanced search trees are:

AVL Trees: AVL trees were the first balanced binary search trees to be introduced. The balance factor of a node in an AVL tree is the difference between the height of its left subtree and the height of its right subtree. An AVL tree is a height-balanced binary search tree, where the balance factor of each node is either -1, 0, or 1. If the balance factor of a node is outside of this range, then the tree is rebalanced using one of the AVL rotations.

Red-Black Trees: Red-black trees are binary search trees with one extra bit of storage per node, which is used to ensure that the tree remains approximately balanced. Red-black trees guarantee that no path in the tree is more than twice as long as any other path, so the height of the tree is always O(log n). In a red-black tree, each node is either red or black, and certain rules are followed to ensure that the tree is balanced.

B-Trees: B-trees are a type of self-balancing search tree, where each node can have multiple keys and children. B-trees are commonly used in databases and file systems, as they allow efficient searching and inserting of large amounts of data. The balance of a B-tree is achieved by controlling the number of keys in each node, and by keeping the tree height as small as possible.

Balanced search trees are very useful in situations where a lot of searching, inserting, or deleting of data needs to be done, as their worst-case time complexity is always O(log n).

### AVL TREES

AVL trees are self-balancing binary search trees, named after their inventors Adelson-Velsky and Landis. An AVL tree maintains a balance condition such that the height of the left and right subtrees of any node differ by at most one.

The balance factor of a node in an AVL tree is defined as the difference between the height of its left and right subtrees. The balance factor of any node in an AVL tree can be -1, 0, or 1. If the balance factor of a node is outside this range, then the tree is unbalanced and it needs to be balanced by applying rotations.

AVL tree rotations are of four types: left-rotation, right-rotation, left-right rotation, and right-left rotation. These rotations maintain the balance condition of the AVL tree. The balance factor of a node is updated after each rotation. The rotation is chosen based on the balance factor of the node and its children.

Insertion and deletion operations in AVL trees can be performed in O(log n) time, where n is the number of nodes in the tree. After an insertion or deletion operation, the balance of the tree may be disturbed. To restore the balance of the AVL tree, one or more rotations may be performed.

AVL trees are widely used for data storage and retrieval in computer science. They are used in database management systems, compilers, computer algebra systems, and in many other applications where efficient searching, insertion, and deletion operations are required.

Here is an implementation of an AVL tree in Java:

```
class AVLNode {
    int key;
    int height;
    AVLNode left;
    AVLNode right;

    public AVLNode(int key) {
        this.key = key;
        this.height = 1;
    }
}

public class AVLTree {
    private AVLNode root;

    public void insert(int key) {
        root = insert(root, key);
    }

    private AVLNode insert(AVLNode node, int key) {
        if (node == null) {
            return new AVLNode(key);
        }
        if (key < node.key) {
            node.left = insert(node.left, key);
        } else if (key > node.key) {
            node.right = insert(node.right, key);
        } else {
            return node;
        }
        node.height = 1 + Math.max(height(node.left), height(node.right));
        int balance = getBalance(node);
        if (balance > 1 && key < node.left.key) {
            return rightRotate(node);
        }
        if (balance < -1 && key > node.right.key) {
            return leftRotate(node);
        }
        if (balance > 1 && key > node.left.key) {
            node.left = leftRotate(node.left);
            return rightRotate(node);
        }
        if (balance < -1 && key < node.right.key) {
            node.right = rightRotate(node.right);
            return leftRotate(node);
        }
        return node;
    }

    private int height(AVLNode node) {
        if (node == null) {
            return 0;
        }
        return node.height;
    }

    private int getBalance(AVLNode node) {
        if (node == null) {
            return 0;
        }
        return height(node.left) - height(node.right);
    }

    private AVLNode leftRotate(AVLNode node) {
        AVLNode newRoot = node.right;
        AVLNode temp = newRoot.left;
        newRoot.left = node;
        node.right = temp;
        node.height = 1 + Math.max(height(node.left), height(node.right));
        newRoot.height = 1 + Math.max(height(newRoot.left), height(newRoot.right));
        return newRoot;
    }

    private AVLNode rightRotate(AVLNode node) {
        AVLNode newRoot = node.left;
        AVLNode temp = newRoot.right;
        newRoot.right = node;
        node.left = temp;
        node.height = 1 + Math.max(height(node.left), height(node.right));
        newRoot.height = 1 + Math.max(height(newRoot.left), height(newRoot.right));
        return newRoot;
    }

    public void inorder() {
        inorder(root);
    }

    private void inorder(AVLNode node) {
        if (node != null) {
            inorder(node.left);
            System.out.print(node.key + " ");
            inorder(node.right);
        }
    }
}
```

This implementation includes the methods for inserting nodes, rotating the tree, and printing the tree in order. Note that this implementation assumes that the AVL tree stores integers as keys, but this can be easily modified to store other types of data.

### RED BLACK TREES

Red-black trees are a type of balanced binary search tree that store data in sorted order. They are self-balancing binary search trees, meaning that the tree is guaranteed to remain balanced even as new nodes are inserted and deleted. This makes operations such as searching, inserting, and deleting data very efficient.

In a red-black tree, each node is colored either red or black. The tree is always constructed in such a way that the root is black and all the leaves (the null children) are considered black. There are two important rules that need to be followed while constructing a red-black tree:

1. No red node can have a red parent.
2. Every path from the root to a null leaf must contain the same number of black nodes.

These two rules guarantee that the tree is balanced, as the longest path from the root to a leaf is no more than twice as long as the shortest path.

To maintain the balance of the tree while inserting or deleting a node, the tree undergoes a series of recolorings and rotations, which rearrange the nodes so that the properties above continue to hold.

Here's an example of a simple implementation of a red-black tree in Java (the left-leaning variant, which keeps red links on the left and simplifies the insertion and deletion code):

```
public class RedBlackTree<T extends Comparable<T>> {

    private static final int RED = 1;
    private static final int BLACK = 0;

    private Node<T> root;

    private static class Node<T extends Comparable<T>> {
        T value;
        Node<T> left, right;
        int color;

        Node(T value) {
            this.value = value;
            this.color = RED; // new nodes are always red
        }
    }

    private boolean isRed(Node<T> node) {
        return node != null && node.color == RED;
    }

    // Left rotation: the red right child moves up to take node's place.
    private Node<T> rotateLeft(Node<T> node) {
        Node<T> x = node.right;
        node.right = x.left;
        x.left = node;
        x.color = node.color;
        node.color = RED;
        return x;
    }

    // Right rotation: mirror image of rotateLeft.
    private Node<T> rotateRight(Node<T> node) {
        Node<T> x = node.left;
        node.left = x.right;
        x.right = node;
        x.color = node.color;
        node.color = RED;
        return x;
    }

    // Invert the colors of a node and its two children.
    private void flipColors(Node<T> node) {
        node.color = 1 - node.color;
        node.left.color = 1 - node.left.color;
        node.right.color = 1 - node.right.color;
    }

    public void insert(T value) {
        root = insert(root, value);
        root.color = BLACK; // the root is always black
    }

    private Node<T> insert(Node<T> node, T value) {
        if (node == null) {
            return new Node<>(value);
        }
        if (value.compareTo(node.value) < 0) {
            node.left = insert(node.left, value);
        } else if (value.compareTo(node.value) > 0) {
            node.right = insert(node.right, value);
        } else {
            return node; // duplicate: nothing to do
        }
        // Restore the left-leaning red-black invariants on the way up.
        if (isRed(node.right) && !isRed(node.left)) {
            node = rotateLeft(node);
        }
        if (isRed(node.left) && isRed(node.left.left)) {
            node = rotateRight(node);
        }
        if (isRed(node.left) && isRed(node.right)) {
            flipColors(node);
        }
        return node;
    }

    public boolean contains(T value) {
        Node<T> current = root;
        while (current != null) {
            int cmp = value.compareTo(current.value);
            if (cmp < 0) {
                current = current.left;
            } else if (cmp > 0) {
                current = current.right;
            } else {
                return true;
            }
        }
        return false;
    }

    public void remove(T value) {
        if (!contains(value)) {
            return;
        }
        root = remove(root, value);
        if (root != null) {
            root.color = BLACK; // restore the black root after deletion
        }
    }

    private Node<T> remove(Node<T> node, T value) {
        int cmp = value.compareTo(node.value);
        if (cmp < 0) {
            if (!isRed(node.left) && !isRed(node.left.left)) {
                node = moveRedLeft(node);
            }
            node.left = remove(node.left, value);
        } else {
            if (isRed(node.left)) {
                node = rotateRight(node);
                cmp = value.compareTo(node.value); // node changed, recompare
            }
            if (cmp == 0 && node.right == null) {
                return null;
            }
            if (!isRed(node.right) && !isRed(node.right.left)) {
                node = moveRedRight(node);
            }
            if (cmp == 0) {
                // Replace the value with its in-order successor,
                // then delete the successor from the right subtree.
                Node<T> min = getMin(node.right);
                node.value = min.value;
                node.right = deleteMin(node.right);
            } else {
                node.right = remove(node.right, value);
            }
        }
        return balance(node);
    }

    private Node<T> moveRedLeft(Node<T> node) {
        flipColors(node);
        if (isRed(node.right.left)) {
            node.right = rotateRight(node.right);
            node = rotateLeft(node);
            flipColors(node);
        }
        return node;
    }

    private Node<T> moveRedRight(Node<T> node) {
        flipColors(node);
        if (isRed(node.left.left)) {
            node = rotateRight(node);
            flipColors(node);
        }
        return node;
    }

    private Node<T> getMin(Node<T> node) {
        if (node.left == null) {
            return node;
        }
        return getMin(node.left);
    }

    private Node<T> deleteMin(Node<T> node) {
        if (node.left == null) {
            return null;
        }
        if (!isRed(node.left) && !isRed(node.left.left)) {
            node = moveRedLeft(node);
        }
        node.left = deleteMin(node.left);
        return balance(node);
    }

    private Node<T> balance(Node<T> node) {
        if (isRed(node.right)) {
            node = rotateLeft(node);
        }
        if (isRed(node.left) && isRed(node.left.left)) {
            node = rotateRight(node);
        }
        if (isRed(node.left) && isRed(node.right)) {
            flipColors(node);
        }
        return node;
    }
}
```
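In practice, a hand-rolled red-black tree is rarely needed in Java: the JDK's TreeMap and TreeSet are themselves backed by red-black trees, so they give the same guaranteed O(log n) insert, lookup, and delete behavior out of the box. A minimal sketch:

```java
import java.util.TreeSet;

public class TreeSetDemo {
    public static void main(String[] args) {
        // TreeSet is backed by a red-black tree, so these operations are O(log n).
        TreeSet<Integer> set = new TreeSet<>();
        set.add(10);
        set.add(3);
        set.add(25);
        set.remove(10);
        System.out.println(set.contains(3)); // true
        System.out.println(set.first());     // 3: smallest element
        System.out.println(set.last());      // 25: largest element
    }
}
```

Because the tree keeps its keys in sorted order, TreeSet also supports ordered queries such as first, last, floor, and ceiling, which a hash-based set cannot offer.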

### PATTERNS

The same patterns apply as for binary trees.


## HEAPS

A heap is a specialized tree-based data structure that satisfies the heap property, which orders each parent node's value relative to its children's values. If every parent is greater than its children, the heap is called a max heap; if every parent is less than its children, it's called a min heap.

In other words, in a max heap, the value of each node is greater than or equal to its children, and in a min heap, the value of each node is less than or equal to its children. The root node is the maximum or minimum value in the tree.

Heaps are often used to implement priority queues, which are data structures that allow for efficient access to the highest or lowest priority item in the queue. They are also used in algorithms such as heap sort, which is an efficient in-place sorting algorithm.

The most common type of heap is the binary heap, which is a complete binary tree that satisfies the heap property. A complete binary tree is a binary tree where all levels except the last are completely filled, and the last level has all nodes as far left as possible.
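Because a binary heap is a complete binary tree, it is usually stored implicitly in an array rather than with node objects: the node at index i has its children at indices 2i + 1 and 2i + 2, and its parent at (i - 1) / 2. A small sketch of these index relations:

```java
public class HeapIndex {
    // Index of the left child of the node at index i.
    static int left(int i) { return 2 * i + 1; }

    // Index of the right child of the node at index i.
    static int right(int i) { return 2 * i + 2; }

    // Index of the parent of the node at index i (for i > 0).
    static int parent(int i) { return (i - 1) / 2; }

    public static void main(String[] args) {
        // The max heap [50, 30, 40, 10, 20] laid out level by level:
        //        50
        //       /  \
        //     30    40
        //    /  \
        //  10    20
        int[] heap = {50, 30, 40, 10, 20};
        System.out.println(heap[left(0)]);   // 30
        System.out.println(heap[right(0)]);  // 40
        System.out.println(heap[parent(4)]); // 30
    }
}
```

The array layout works precisely because the tree is complete: there are no gaps between elements, so parent and child positions can be computed by arithmetic alone.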

Here's an example implementation of a max heap in Java. The complete binary tree is stored implicitly in an array-backed list, with the children of the node at index i at indices 2i + 1 and 2i + 2, which keeps the tree complete by construction:

```
import java.util.ArrayList;
import java.util.List;

public class MaxHeap<T extends Comparable<T>> {

    // The complete binary tree, stored level by level:
    // the children of index i are at 2*i + 1 and 2*i + 2.
    private final List<T> items = new ArrayList<>();

    public void insert(T value) {
        items.add(value);         // append at the next free leaf position
        siftUp(items.size() - 1); // restore the heap property bottom-up
    }

    public T extractMax() {
        if (items.isEmpty()) {
            return null;
        }
        T max = items.get(0);
        T last = items.remove(items.size() - 1);
        if (!items.isEmpty()) {
            items.set(0, last); // move the last leaf to the root...
            siftDown(0);        // ...and sink it to its proper place
        }
        return max;
    }

    private void siftUp(int i) {
        while (i > 0) {
            int parent = (i - 1) / 2;
            if (items.get(i).compareTo(items.get(parent)) <= 0) {
                break;
            }
            swap(i, parent);
            i = parent;
        }
    }

    private void siftDown(int i) {
        int n = items.size();
        while (true) {
            int left = 2 * i + 1, right = 2 * i + 2, largest = i;
            if (left < n && items.get(left).compareTo(items.get(largest)) > 0) {
                largest = left;
            }
            if (right < n && items.get(right).compareTo(items.get(largest)) > 0) {
                largest = right;
            }
            if (largest == i) {
                break;
            }
            swap(i, largest);
            i = largest;
        }
    }

    private void swap(int i, int j) {
        T tmp = items.get(i);
        items.set(i, items.get(j));
        items.set(j, tmp);
    }
}
```

### PATTERNS

Look at the problems below

### PROBLEMS

#### MERGE K SORTED ARRAYS

The Merge K Sorted Arrays pattern is used to merge k sorted arrays into a single sorted array efficiently. This problem can be solved using a min heap. The basic idea is to create a min heap of size k, with the smallest element from each of the k sorted arrays. We then repeatedly extract the minimum element from the min heap and add it to the output array, and replace the extracted element in the min heap with the next element from the array it came from.

Here are the steps to implement the Merge K Sorted Arrays pattern using a min heap:

1. Create a min heap of size k and insert the first element from each of the k sorted arrays into the min heap.
2. While the min heap is not empty:
   1. Extract the minimum element from the min heap and add it to the output array.
   2. If there are more elements in the array the extracted element came from, insert the next element from that array into the min heap.
3. Return the output array.

Here's an implementation of this pattern in Java:

```
public int[] mergeKSortedArrays(int[][] arrays) {
    // Heap entries are {value, array index, element index}.
    PriorityQueue<int[]> minHeap =
            new PriorityQueue<>((a, b) -> Integer.compare(a[0], b[0]));
    int total = 0;
    for (int[] array : arrays) {
        total += array.length;
    }
    int[] result = new int[total];
    int index = 0;
    // Insert the first element from each non-empty array into the min heap.
    for (int i = 0; i < arrays.length; i++) {
        if (arrays[i].length > 0) {
            minHeap.offer(new int[] { arrays[i][0], i, 0 });
        }
    }
    // Repeatedly extract the minimum, add it to the output array, and
    // insert the next element from the same array into the min heap.
    while (!minHeap.isEmpty()) {
        int[] min = minHeap.poll();
        result[index++] = min[0];
        int i = min[1];
        int j = min[2];
        if (j < arrays[i].length - 1) {
            minHeap.offer(new int[] { arrays[i][j + 1], i, j + 1 });
        }
    }
    return result;
}
```

In this implementation, we use a PriorityQueue (which is implemented as a min heap) to store the elements from the k sorted arrays. We use a custom comparator to compare the first element of each array in the min heap. We also store the index of the array and the index of the element within the array as part of the min heap entry, so that we can insert the next element from the same array into the min heap when we extract an element.

#### KTH SELECTION

The Kth Selection pattern in heaps is used to find the Kth smallest or largest element in an unsorted list or array. It involves maintaining a heap of at most K elements while iterating through the list or array, removing the root (the smallest or largest element) whenever the size of the heap exceeds K.

Once all the elements have been processed, the Kth smallest or largest element is at the root of the heap. This pattern is useful when we need an element at a given rank, such as the median, without sorting the entire array or list.

The time complexity of the Kth Selection pattern in heaps is O(n log k), where n is the size of the input array or list and k is the value of K. The space complexity is O(k), since we need to create a heap of size K.

Here's an example implementation of the Kth Selection pattern in heaps in Java to find the Kth smallest element in an unsorted array:

```
public static int findKthSmallest(int[] nums, int k) {
    // Max heap holding the k smallest elements seen so far.
    PriorityQueue<Integer> maxHeap = new PriorityQueue<>(Collections.reverseOrder());
    for (int num : nums) {
        maxHeap.offer(num);
        if (maxHeap.size() > k) {
            maxHeap.poll(); // discard the largest of the k + 1 candidates
        }
    }
    return maxHeap.poll(); // the root is the kth smallest
}
```

In this example, we create a max heap of size K and add each element of the input array to the heap. If the size of the heap exceeds K, we remove the root (the largest element) from the heap. Finally, we return the root of the heap as the Kth smallest element.
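The mirror problem, finding the Kth largest element, works the same way with the heap types swapped: keep a min heap of the K largest elements seen so far, and evict its root whenever it grows past K. A sketch:

```java
import java.util.PriorityQueue;

public class KthLargest {
    public static int findKthLargest(int[] nums, int k) {
        // Min heap holding the k largest elements seen so far.
        PriorityQueue<Integer> minHeap = new PriorityQueue<>();
        for (int num : nums) {
            minHeap.offer(num);
            if (minHeap.size() > k) {
                minHeap.poll(); // discard the smallest of the k + 1 candidates
            }
        }
        return minHeap.peek(); // the root is the kth largest
    }

    public static void main(String[] args) {
        System.out.println(findKthLargest(new int[]{3, 2, 1, 5, 6, 4}, 2)); // 5
    }
}
```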


## TRIE

A trie, also called a prefix tree, is a tree-like data structure that is used for efficient retrieval of keys from a large set of strings. A trie represents a collection of strings, where each node represents a prefix of one or more strings. The root of the trie represents an empty string, and the children of each node represent possible continuations of the string represented by the parent node.

The basic idea behind a trie is to store each character of a string in a separate node. This results in a tree-like structure where each path from the root to a leaf node represents a complete word or prefix. Each node may have several children, which correspond to the possible next characters in the words that can be formed from the prefix represented by the parent node.

Tries are typically used for applications where fast string lookups are required, such as spell-checking or autocomplete in text editors or search engines.

The advantages of a trie over a hash table or a binary search tree include:

1. It can perform string lookups and insertions in O(L) time, where L is the length of the string.
2. It can efficiently find all strings that have a given prefix.
3. It can use less space than a hash table when many keys share common prefixes, since each shared prefix is stored only once.

The main disadvantage of a trie is that it requires more memory than other data structures due to the large number of nodes required to represent all the possible substrings of the keys.

Here's an implementation of a trie in Java:

```
public class Trie {
    private TrieNode root;

    public Trie() {
        root = new TrieNode();
    }

    public void insert(String word) {
        TrieNode node = root;
        for (int i = 0; i < word.length(); i++) {
            char ch = word.charAt(i);
            if (!node.containsKey(ch)) {
                node.put(ch, new TrieNode());
            }
            node = node.get(ch);
        }
        node.setEnd();
    }

    public boolean search(String word) {
        TrieNode node = searchPrefix(word);
        return node != null && node.isEnd();
    }

    public boolean startsWith(String prefix) {
        TrieNode node = searchPrefix(prefix);
        return node != null;
    }

    // Returns the node at the end of the path for word, or null.
    private TrieNode searchPrefix(String word) {
        TrieNode node = root;
        for (int i = 0; i < word.length(); i++) {
            char ch = word.charAt(i);
            if (node.containsKey(ch)) {
                node = node.get(ch);
            } else {
                return null;
            }
        }
        return node;
    }

    // Assumes keys consist of the lowercase letters 'a' to 'z'.
    private static class TrieNode {
        private TrieNode[] links;
        private boolean isEnd;

        public TrieNode() {
            links = new TrieNode[26];
        }

        public boolean containsKey(char ch) {
            return links[ch - 'a'] != null;
        }

        public TrieNode get(char ch) {
            return links[ch - 'a'];
        }

        public void put(char ch, TrieNode node) {
            links[ch - 'a'] = node;
        }

        public void setEnd() {
            isEnd = true;
        }

        public boolean isEnd() {
            return isEnd;
        }
    }
}
```

This implementation uses an inner class TrieNode to represent each node in the trie. The TrieNode class has an array of TrieNode references, called links, which represent the children of the node. It also has a boolean flag isEnd that is set to true when the node represents the end of a word.

The Trie class has a single instance variable, the root node of the trie. It has three public methods:

• insert(String word): Inserts the given word into the trie.
• search(String word): Searches the trie for the given word. Returns true if the word is found, false otherwise.
• startsWith(String prefix): Searches the trie for words that start with the given prefix. Returns true if there is at least one word that starts with the prefix, false otherwise.

The Trie class also has a private method searchPrefix(String word) that returns the node corresponding to the given word. This method is used by the search and startsWith methods to traverse the trie to find the appropriate node.

The worst-case time complexity of insert, search, and delete operations in a trie is O(k), where k is the maximum length of the keys. The space complexity is O(nk), where n is the number of keys in the trie: each of the n keys can contribute up to k nodes, one per character.

The space complexity of a trie can be reduced by compressing the trie using techniques like path compression, radix trees, or compressed tries. These techniques can reduce the space complexity of the trie to O(n) or O(n log n), depending on the implementation.

### PATTERNS

#### AUTOCOMPLETE

The autocomplete pattern is a design pattern used to implement the autocomplete feature in many software applications, such as search engines, text editors, and other tools. The idea is to use a trie data structure to store a dictionary of words, and then use this structure to suggest the possible completions of a partial input string.

A trie is a tree-like data structure that is used to store a collection of strings. Each node in the trie represents a single character in a string, and a path from the root to a leaf node represents a complete string. To implement the autocomplete pattern, we can use a trie to store a dictionary of words and their frequency counts.

The key idea behind the autocomplete pattern is to find the node in the trie that corresponds to the last character in the input string, and then traverse the subtree rooted at that node to find all the words that have that prefix. The words can then be sorted by their frequency counts and presented to the user as suggestions.

To implement the autocomplete pattern, we can use a trie data structure with some modifications. Each node in the trie should contain a frequency count of how many times the corresponding word has been added to the trie. We can use a priority queue to store the top-k most frequent words that have the given prefix.

Here's an example implementation in Java. It assumes a trie whose nodes carry a children map, a frequency count (set on word-ending nodes), and the complete word, as described above:

```
// Assumes each TrieNode has: Map<Character, TrieNode> children,
// int frequency (> 0 only on word-ending nodes), and String word.
public List<String> autocomplete(String prefix, int k) {
    List<String> result = new ArrayList<>();
    TrieNode node = root;
    // Walk down to the node that represents the prefix.
    for (char c : prefix.toCharArray()) {
        if (!node.children.containsKey(c)) {
            return result; // no word has this prefix
        }
        node = node.children.get(c);
    }
    // Order candidates by descending frequency, breaking ties alphabetically.
    PriorityQueue<TrieNode> pq = new PriorityQueue<>((a, b) ->
            a.frequency == b.frequency ? a.word.compareTo(b.word)
                                       : b.frequency - a.frequency);
    dfs(node, pq);
    while (result.size() < k && !pq.isEmpty()) {
        result.add(pq.poll().word);
    }
    return result;
}

// Collect every word-ending node in the subtree rooted at node.
private void dfs(TrieNode node, PriorityQueue<TrieNode> pq) {
    if (node.frequency > 0) {
        pq.offer(node);
    }
    for (TrieNode child : node.children.values()) {
        dfs(child, pq);
    }
}
```

This implementation uses a priority queue to keep track of the top-k most frequent words that have the given prefix. The autocomplete method first finds the node in the trie that corresponds to the last character in the prefix, and then does a depth-first search to find all the words that have that prefix. The words are then added to the priority queue, and the top-k most frequent words are returned as suggestions.

#### COUNTING WORDS

The counting words pattern in tries is a technique to count the number of occurrences of a word, or of words sharing a prefix, in a given text. It involves constructing a trie data structure where each node represents a prefix of one or more words in the text, and each word-ending node stores a count of how many times that word was inserted.

To count the number of occurrences of a word in the text, we start at the root of the trie and follow the path corresponding to the letters of the word. If we reach the node at the end of that path, we return the count stored there. If we cannot follow the path corresponding to some letter of the word, then the word does not occur in the text and the count is zero.

To count the words that share a given prefix, we follow the path for the prefix in the same way, and then traverse the subtree rooted at the node we reach, summing the counts of all word-ending nodes in it.

The time complexity of constructing the trie is O(n), where n is the total length of all words in the text. The time complexity of counting the number of occurrences of a word or a substring is O(k), where k is the length of the word or substring.

Here is an example implementation in Java:

```
class TrieNode {
    Map<Character, TrieNode> children;
    int count;

    public TrieNode() {
        children = new HashMap<>();
        count = 0;
    }
}

class Trie {
    TrieNode root;

    public Trie() {
        root = new TrieNode();
    }

    public void insert(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            node.children.putIfAbsent(c, new TrieNode());
            node = node.children.get(c);
        }
        node.count++; // one more occurrence of this word
    }

    public int countWords(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            if (!node.children.containsKey(c)) {
                return 0;
            }
            node = node.children.get(c);
        }
        return node.count;
    }

    // Counts the inserted words that start with the given prefix.
    public int countSubstrings(String word) {
        TrieNode node = root;
        for (char c : word.toCharArray()) {
            if (!node.children.containsKey(c)) {
                return 0;
            }
            node = node.children.get(c);
        }
        return countLeaves(node);
    }

    // Sums the occurrence counts of every word in this subtree.
    private int countLeaves(TrieNode node) {
        if (node == null) {
            return 0;
        }
        int count = node.count;
        for (TrieNode child : node.children.values()) {
            count += countLeaves(child);
        }
        return count;
    }
}
```

#### LONGEST COMMON PREFIX

The longest common prefix pattern in tries is used to find the longest common prefix among a set of strings. It involves building a trie with the input strings and traversing the trie to find the longest common prefix.

The algorithm for finding the longest common prefix is as follows:

1. Insert all the input strings into a trie.
2. Starting at the root, follow the unique child as long as the current node has exactly one child and does not mark the end of a word. The characters collected along this path form the longest common prefix; if the root itself has more than one child, the common prefix is empty.

The time complexity of this algorithm is O(nm), where n is the number of strings and m is the length of the longest string. The space complexity is also O(nm), since we need to store all the strings in the trie.

Here's an example implementation of the longest common prefix pattern in Java:

```
class TrieNode {
    char val;
    boolean isWord;
    TrieNode[] children;

    public TrieNode(char val) {
        this.val = val;
        this.isWord = false;
        this.children = new TrieNode[26]; // lowercase 'a' to 'z'
    }
}

class Trie {
    TrieNode root;

    public Trie() {
        this.root = new TrieNode(' ');
    }

    public void insert(String word) {
        TrieNode curr = root;
        for (int i = 0; i < word.length(); i++) {
            char c = word.charAt(i);
            if (curr.children[c - 'a'] == null) {
                curr.children[c - 'a'] = new TrieNode(c);
            }
            curr = curr.children[c - 'a'];
        }
        curr.isWord = true;
    }

    public String longestCommonPrefix() {
        StringBuilder sb = new StringBuilder();
        TrieNode curr = root;
        // Follow the unique path while no word ends and there is exactly one child.
        while (curr != null && !curr.isWord && countChildren(curr) == 1) {
            curr = getChild(curr);
            sb.append(curr.val);
        }
        return sb.toString();
    }

    private int countChildren(TrieNode node) {
        int count = 0;
        for (TrieNode child : node.children) {
            if (child != null) {
                count++;
            }
        }
        return count;
    }

    private TrieNode getChild(TrieNode node) {
        for (TrieNode child : node.children) {
            if (child != null) {
                return child;
            }
        }
        return null;
    }
}
```

In this implementation, we define a TrieNode class that represents a node in the trie, with a value, a boolean flag to indicate whether the node represents a complete word, and an array of children nodes. We also define a Trie class that represents the trie, with a root node and methods to insert a word into the trie, find the longest common prefix, count the number of children of a node, and get the first child of a node.
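For a one-off query, the trie is not strictly required: a plain character-by-character scan finds the longest common prefix in the same O(nm) time without the O(nm) trie storage. A sketch of that simpler alternative, for comparison:

```java
public class LongestCommonPrefix {
    public static String longestCommonPrefix(String[] strs) {
        if (strs == null || strs.length == 0) {
            return "";
        }
        // Shrink the candidate prefix until every string starts with it.
        String prefix = strs[0];
        for (int i = 1; i < strs.length; i++) {
            while (!strs[i].startsWith(prefix)) {
                prefix = prefix.substring(0, prefix.length() - 1);
                if (prefix.isEmpty()) {
                    return "";
                }
            }
        }
        return prefix;
    }

    public static void main(String[] args) {
        System.out.println(longestCommonPrefix(new String[]{"flower", "flow", "flight"})); // fl
    }
}
```

The trie version pays off when the set of strings is queried repeatedly or changes incrementally; for a single query on a fixed array, the scan above is simpler and uses O(1) extra space.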

#### MAXIMUM XOR

The MAXIMUM XOR pattern is a problem-solving technique that involves using a trie data structure to solve problems related to bitwise XOR operations. The goal of the pattern is to find the maximum value that can be obtained by performing an XOR operation between two elements in a given array.

To solve this problem, we can create a trie data structure with all the binary representations of the numbers in the given array. We start with an empty trie, and for each number in the array, we insert its binary representation into the trie. Once we have constructed the trie, we can find the maximum XOR value by traversing the trie for each number in the array, and at each step, we choose the path that gives the maximum XOR value.

For example, suppose we have the array [3, 10, 5, 25, 2, 8]. We insert the binary representation of each number (e.g. 25 = 11001) into the trie, one bit per level, starting from the most significant bit.

To find the maximum XOR value, we take each number in turn and walk down the trie, at each level preferring the branch whose bit is the opposite of the current number's bit, since opposite bits contribute a 1 to the XOR. For 5 (00101) this walk leads to 25 (11001), giving 5 XOR 25 = 28, which is the maximum for this array. We repeat this process for each number in the array and keep track of the largest value we find.

The time complexity of the MAXIMUM XOR pattern is O(n log m), where n is the number of elements in the array, and m is the maximum number of bits in any element of the array. The space complexity is also O(n log m), since we need to construct a trie to store the binary representations of the numbers in the array.

Here's a compact Java implementation of the maximum XOR pattern. Instead of building an explicit trie, it keeps a set of bit prefixes, which plays the same role level by level:

```
public class MaximumXOR {
    public int findMaximumXOR(int[] nums) {
        int max = 0, mask = 0;
        // Fix the answer one bit at a time, from the most significant bit down.
        for (int i = 31; i >= 0; i--) {
            mask |= (1 << i);
            Set<Integer> set = new HashSet<>();
            for (int num : nums) {
                set.add(num & mask); // keep only the bits considered so far
            }
            // Tentatively set the current bit in the answer and check whether
            // two stored prefixes XOR to the candidate.
            int tmp = max | (1 << i);
            for (int prefix : set) {
                if (set.contains(prefix ^ tmp)) {
                    max = tmp;
                    break;
                }
            }
        }
        return max;
    }
}
```

In this implementation, we use a set to store the prefixes of the binary representation of each number in the input array. We then iterate through the bits of the numbers, starting with the most significant bit, and consider all possible prefixes of that length.

For each bit, we tentatively set it in the candidate answer and check, for every stored prefix, whether another prefix equal to the XOR of that prefix and the candidate is also in the set. If it is, two numbers in the array XOR to at least the candidate, so the bit stays set. We repeat this for every bit, from most to least significant, and return the resulting maximum value.

This implementation has a time complexity of O(32n), or O(n), where n is the length of the input array. The space complexity is also O(n), since we store all prefixes in a set.
