Blogs On Programming

Trees

by Vibrant Publishers on May 22, 2022
In this blog, we will discuss the tree as a part of graphs, along with tree traversal algorithms and special types of trees. The tree data structure is used to represent a hierarchical relationship between data. The elements of the tree are called nodes. Starting from the root (initial) node, each node has a certain number of children. A node higher in the hierarchy is called a parent node and its children are called child nodes. Nodes having the same parent are called siblings, and a node with no child node is called a leaf node. The level of a node is its depth, that is, the length of the path from the root node to that node. Since the tree is a hierarchical structure, a child node at one level can act as the parent node for nodes at the next level.

A tree in which each node has at most 2 child nodes is called a binary tree. Nodes of a binary tree can be represented using a structure or a class consisting of the node's value and one pointer each for the left and the right child node.

The process of accessing the elements of a tree is called tree traversal. There are 4 types of tree traversal algorithms:

1. Inorder traversal: The left subtree of a node is visited first, followed by the value of the node and then the right subtree of the node. For example, consider the tree in Fig 1. Its inorder traversal is 8, 4, 9, 2, 10, 5, 11, 1, 12, 6, 13, 3, 14, 7, 15.

2. Preorder traversal: The value of the node is visited first, followed by the left subtree of the node and then the right subtree of the node. The preorder traversal of the tree in Fig 1 is 1, 2, 4, 8, 9, 5, 10, 11, 3, 6, 12, 13, 7, 14, 15.

3. Postorder traversal: The left subtree of the node is visited first, followed by the right subtree of the node, and then the value of the node. The postorder traversal of the tree in Fig 1 is 8, 9, 4, 10, 11, 5, 2, 12, 13, 6, 14, 15, 7, 3, 1.

4. Level order traversal: Starting from the root node, the nodes at a single level are visited before moving to the next level. Level order traversal is also called breadth-first traversal (BFS). The level order traversal of the tree in Fig 1 is 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15.

A Binary Search Tree can also be converted to a doubly linked list such that the nodes of the doubly linked list are placed according to the inorder traversal of the tree.
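To make the four traversal orders concrete, here is a minimal Java sketch. The Node class, the buildSampleTree helper, and the method names are illustrative choices rather than code from the original post; the tree built in main is a complete 15-node tree like the one in Fig 1, with node i having children 2i and 2i+1.

import java.util.ArrayDeque;
import java.util.Queue;

public class TreeTraversal {

    // A simple binary tree node: a value plus left and right children.
    static class Node {
        int value;
        Node left, right;
        Node(int value) { this.value = value; }
    }

    // Inorder: left subtree, node, right subtree.
    static void inorder(Node n) {
        if (n == null) return;
        inorder(n.left);
        System.out.print(n.value + " ");
        inorder(n.right);
    }

    // Preorder: node, left subtree, right subtree.
    static void preorder(Node n) {
        if (n == null) return;
        System.out.print(n.value + " ");
        preorder(n.left);
        preorder(n.right);
    }

    // Postorder: left subtree, right subtree, node.
    static void postorder(Node n) {
        if (n == null) return;
        postorder(n.left);
        postorder(n.right);
        System.out.print(n.value + " ");
    }

    // Level order (breadth-first): visit each level before the next, using a queue.
    static void levelOrder(Node root) {
        Queue<Node> queue = new ArrayDeque<>();
        if (root != null) queue.add(root);
        while (!queue.isEmpty()) {
            Node n = queue.poll();
            System.out.print(n.value + " ");
            if (n.left != null) queue.add(n.left);
            if (n.right != null) queue.add(n.right);
        }
    }

    // Builds the complete tree of Fig 1: node i has children 2i and 2i+1, values 1..15.
    static Node buildSampleTree(int value, int max) {
        if (value > max) return null;
        Node n = new Node(value);
        n.left = buildSampleTree(2 * value, max);
        n.right = buildSampleTree(2 * value + 1, max);
        return n;
    }

    public static void main(String[] args) {
        Node root = buildSampleTree(1, 15);
        inorder(root);    System.out.println();   // 8 4 9 2 10 5 11 1 12 6 13 3 14 7 15
        preorder(root);   System.out.println();   // 1 2 4 8 9 5 10 11 3 6 12 13 7 14 15
        postorder(root);  System.out.println();   // 8 9 4 10 11 5 2 12 13 6 14 15 7 3 1
        levelOrder(root); System.out.println();   // 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
    }
}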
Special types of trees:

1. Binary Search Trees: A binary search tree (BST) is a special type of binary tree in which the nodes are arranged so that an inorder traversal visits them in sorted order. The average search time complexity in a balanced BST is O(log n). Insertion in a BST is achieved by moving to the left subtree if the value of the node to be inserted is lower than the current node, or by moving to the right subtree if it is greater than the current node. This process is repeated until an empty (null) child position is reached, where the new node is inserted as a leaf.

2. AVL Trees: AVL trees are BSTs in which, for each node, the difference between the height of the left subtree and the height of the right subtree is not more than 1. AVL trees are also called self-balancing BSTs.

3. Red Black Trees: Red Black trees are BSTs in which the nodes are colored either black or red, with the root node always being black. No two adjacent nodes in a Red Black tree can be red, and for each node, every path to a leaf node has the same number of black nodes. Like AVL trees, Red Black trees are also self-balancing BSTs.

4. Trie: A Trie is an independent tree-shaped data structure generally used for string processing. In a basic implementation, each node in a Trie has 26 child pointers, one for each English letter, plus a Boolean flag marking the end of a string. The structure of a Trie can differ to accommodate various use cases.

5. Threaded Binary Trees: Threaded binary trees are used to make an iterative algorithm for the inorder traversal of a tree. The idea is to point the null right child of any node to its inorder successor. There are two types of threaded binary trees. a. Single-threaded: only the null right child of each node points to the inorder successor of the node. b. Double-threaded: the null left and right children of each node point to the inorder predecessor and inorder successor of the node, respectively.

6. Expression Trees: An expression tree is a special type of tree used to evaluate mathematical expressions. Each non-leaf node in an expression tree represents an operator and each leaf node is an operand.

Ending note: In this blog, we discussed the tree data structure, tree traversal algorithms, and special types of trees. Special trees help solve different problems in optimized time and space. Overall, trees are advantageous as they depict a structural relationship between the data. Moreover, trees also provide flexible insertion and sorting advantages.

Get one step closer to your dream job! Check out the books we have, which are designed to help you clear your interview with flying colors, no matter which field you are in. These include HR Interview Questions You'll Most Likely Be Asked (Third Edition) and Innovative Interview Questions You'll Most Likely Be Asked.
Sorting Algorithms

by Vibrant Publishers on May 22, 2022
Algorithms are sequences of instructions that propose a generalized solution to a problem, and they largely determine the efficiency of a coding solution. They are divided into different categories depending on their nature of implementation. In this blog, we will discuss sorting algorithms, focusing on what they are, how they work, and some common implementations.

Sorting Algorithm: As the name describes, a sorting algorithm is used to arrange an array in a particular order. An array sorted in ascending order means that every element is greater than the one before it. A sorting algorithm takes an array as input, performs sorting operations on it, and outputs a permutation of the array that is now sorted. The array {a, b, c, d} is alphabetically sorted, and the array {1, 2, 3, 4} is sorted in ascending order. Generally, sorting algorithms are divided into two types: comparison sorts and integer sorts.

Comparison Sort: Comparison sort algorithms compare elements at each step to determine their position in the array. Such algorithms are easy to implement but are slower. They are bounded below by Ω(n log n), which means that, on average, comparison sorts cannot be faster than O(n log n).

Integer Sort: Integer sort algorithms are also known as counting algorithms. An integer sort checks, for each element x, how many elements are smaller than x and places x at that location in the array. For element x, if 10 elements are less than x, then x belongs at the 11th position (index 10 in a zero-based array). Such algorithms do not perform comparisons and are thus not bound by Ω(n log n). The efficiency of the selected sorting algorithm is determined by its run time complexity and space complexity.

Stability of a Sorting Algorithm: A sorting algorithm is said to be stable if it preserves the relative order of equal keys from the input array in the output array. Keys are the values on which the algorithm sorts the array. Below is an example of stable sorting, followed by an unstable sorting in which the order of equal keys is not preserved in the output.

Next, let's discuss some commonly used sorting algorithms.

Insertion Sort: This is a comparison-based algorithm. It takes one element at a time, finds its place in the sorted part of the array, and places it there; doing this for every element sorts the whole array. For an array of size n, insertion sort treats the first element on the left as a sorted array and the remaining n-1 elements on the right as an unsorted array. It then picks the first unsorted element (the second element of the array) and places it at its correct position in the sorted part, shifting elements if necessary. Now there is a sorted part of size 2 and an unsorted part of size n-2. The process continues, working from the left, until the whole array is sorted. The best case of insertion sort is O(n) and the worst case is O(n^2).

Selection Sort: Selection sort is quite an easy algorithm in terms of implementation. It selects the smallest element in the array and swaps it with the first element. It then scans the remaining n-1 elements for the smallest element and swaps it with the second element of the array (the first element of the unsorted part). This process of selecting the smallest element and swapping it into place continues until the whole array is sorted. The selection sort algorithm has a best and worst case of O(n^2).
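As a concrete illustration of the two algorithms just described, here is a minimal Java sketch; the class, method, and variable names are illustrative choices, not code from the original post.

import java.util.Arrays;

public class SimpleSortsDemo {

    // Insertion sort: grow a sorted prefix on the left, inserting one
    // element of the unsorted part at a time into its correct position.
    static void insertionSort(int[] a) {
        for (int i = 1; i < a.length; i++) {
            int key = a[i];              // first element of the unsorted part
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];         // shift larger sorted elements one slot right
                j--;
            }
            a[j + 1] = key;              // drop the key into the gap
        }
    }

    // Selection sort: repeatedly find the smallest remaining element and
    // swap it to the front of the unsorted part.
    static void selectionSort(int[] a) {
        for (int i = 0; i < a.length - 1; i++) {
            int smallest = i;
            for (int j = i + 1; j < a.length; j++) {
                if (a[j] < a[smallest]) smallest = j;
            }
            int tmp = a[i];
            a[i] = a[smallest];
            a[smallest] = tmp;
        }
    }

    public static void main(String[] args) {
        int[] first = {5, 2, 9, 1, 6};
        int[] second = first.clone();
        insertionSort(first);
        selectionSort(second);
        System.out.println(Arrays.toString(first));  // [1, 2, 5, 6, 9]
        System.out.println(Arrays.toString(second)); // [1, 2, 5, 6, 9]
    }
}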
Merge Sort: Merge sort is a comparison-based algorithm built around merging two sorted arrays. It uses the divide and conquer technique: it divides the array into two subarrays, sorts them separately, either recursively or iteratively, and then merges the two sorted subarrays. The result is a sorted array. Merge sort runs in O(n log n) time.

Heap Sort: The comparison-based heap sort algorithm uses a binary heap data structure to sort an array. A max-heap is formed from the unsorted array. Because it is a max-heap, the root holds the largest value, so this maximum value is moved to the end of the array; the heap shrinks by one element and the sorted portion of the array grows by one element. The same process is then applied to the remaining heap: restore the max-heap property and move the root (maximum) element to the end. The process is repeated until the heap has shrunk to 0 elements and the array is sorted. The run time of heap sort is O(n log n).

Quick Sort: Quicksort works on the divide and conquer strategy. It selects a pivot element and forms two subarrays around this pivot. Suppose the pivot element is A[y]. The subarrays A[x … y-1] and A[y+1 … z] are arranged such that all elements less than the pivot are in one subarray and all elements greater than the pivot are in the other. The subarrays are then sorted recursively or iteratively, and the outcome is a sorted array. The average run time complexity of quicksort is O(n log n).
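Here is a minimal Java sketch of quicksort as described above. It uses the common Lomuto partition scheme with the last element as the pivot; that choice, along with the class and method names, is an illustrative assumption rather than something specified in the original post.

import java.util.Arrays;

public class QuickSortDemo {

    // Quicksort: pick a pivot, partition the array around it, then
    // recursively sort the two partitions (divide and conquer).
    static void quickSort(int[] a, int low, int high) {
        if (low >= high) return;
        int p = partition(a, low, high);
        quickSort(a, low, p - 1);   // elements smaller than the pivot
        quickSort(a, p + 1, high);  // elements greater than or equal to the pivot
    }

    // Lomuto partition: uses the last element as the pivot and
    // returns the pivot's final index.
    static int partition(int[] a, int low, int high) {
        int pivot = a[high];
        int i = low - 1;
        for (int j = low; j < high; j++) {
            if (a[j] < pivot) {
                i++;
                swap(a, i, j);
            }
        }
        swap(a, i + 1, high);
        return i + 1;
    }

    static void swap(int[] a, int i, int j) {
        int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
    }

    public static void main(String[] args) {
        int[] data = {7, 3, 9, 1, 5, 3};
        quickSort(data, 0, data.length - 1);
        System.out.println(Arrays.toString(data)); // [1, 3, 3, 5, 7, 9]
    }
}

Other pivot choices (first element, random element, median-of-three) work the same way and mainly affect how likely the worst case is.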
Bubble Sort: This comparison-based sorting algorithm compares elements of the array in pairs. The algorithm 'bubbles' through the entire array from left to right, considering two elements at a time and swapping them when the greater element comes before the smaller one. For an array A, element A[0] is compared with element A[1]; if A[0] > A[1], they are swapped. Next, elements A[1] and A[2] are compared and swapped if required. These steps are repeated over the entire array, and the passes are repeated until the array is sorted. The average run time complexity of bubble sort is O(n^2), so it is considered an inefficient algorithm.

Shell Sort: The shell sort algorithm is, in a way, built on insertion sort, yet it is considered faster than insertion sort itself. It starts by sorting subsets of the entire array and gradually increases the size of those subsets until the complete array is sorted. In other words, shell sort partially sorts the array elements and then applies insertion sort on the entire array. Shell sort is generally optimized using different methods (gap sequences) for growing the subsets; the most commonly used is Knuth's method. The worst-case run time of shell sort is O(n^(3/2)) using Knuth's method.

Distribution Sort Algorithms: Sorting algorithms in which the input is distributed into substructures, sorted, and then combined at the output are distribution sort algorithms. Many merge sort algorithms fit this description, and radix sort is a classic example of distribution sorting.

Counting Sort: Counting sort is an integer-based sorting algorithm rather than a comparison-based one. The algorithm works on the assumption that every element in the input list has a key value ranging from 0 to k, where k is an integer. For each element, the algorithm counts how many elements are smaller than it and uses that count to place the element at the appropriate location in the output list. For this, the algorithm maintains three lists: the input list, a temporary list of key counts, and the output list. Counting sort is considered an efficient sorting algorithm, with a run time of Θ(n + k); it is effectively linear when the largest key value k is not much larger than the size of the input list n.

Radix Sort: Radix sort works digit by digit and uses the counting sort algorithm at each step. It groups keys by the digits that share the same place value, sorting from the least significant digit to the most significant digit. In base ten, radix sort first sorts digits in the 1's place, then the 10's place, and so on, using counting sort for each pass. Counting sort on its own handles one place value at a time; in base 10, for example, it sorts keys 0 to 9, and sorting 2-digit numbers directly would require a key range of 100. Radix sort, on the other hand, handles multi-digit numbers without dealing with such a large range of keys. The list [46, 23, 51, 59] will be ordered as [51, 23, 46, 59] after the 1's place pass, since 1 < 3 < 6 < 9; sorting on the second place value then gives [23, 46, 51, 59]. Another important property of radix sort is that it is a stable sorting algorithm. The run time of radix sort is O(n) for keys with a fixed number of digits, which means it sorts in linear time.

Ending Note: Sorting algorithms are quite important in computer science as they help reduce the complexity of a problem. The sorting algorithms discussed above have useful applications in databases, search algorithms, data structures, and many other fields of computer science.

Get one step closer to your dream job! Prepare for your interview by supplementing your technical knowledge. Our Job Interview Questions Book Series is designed for this very purpose, aiming to prepare you for HR questions you'll most likely be asked. Check out the book here!
Future Job Market for Java Professionals

by Vibrant Publishers on May 22, 2022
When contemplating opportunities for Java professionals, many dismiss it as an outdated technology option, wondering whether Java will remain in demand. Let us dispel this illusion for you with some illuminating facts. According to CODEGYM, the Java job market had more opportunities in the year 2020 than ever before, regardless of the global pandemic crisis, and the market witnessed numerous Java job vacancies. The job switch rate for Java professionals is less than 8%, compared to 27% for the software development profession as a whole and 35% for database administrators. Also, even when offered a higher-level administrative role, a significant number of Java developers simply prefer to retain their position. If this isn't the best proof that Java programming is the ideal career choice for the vast majority of professionals, we don't know what is. We can state that the future for Java professionals is as bright as the rising sun. Now that it's pretty clear the Java job market is not going anywhere, Java job vacancies will show an increasing trend in the coming decade. So, let us begin with what this beloved programming language has in store for all Java job seekers out there.

What can Java professionals expect from the Java job market in the near future? According to reports, the IT industry will witness significant job shifts, resulting in numerous Java job vacancies. With the advancement of the tech industry, the Java job market will experience a favorable turn, as many corporations rely on this programming language to create their core offerings. Java job vacancies rank high season after season. The above facts back up the Java obsession: fresh graduates and experienced professionals alike remain inclined towards Java. Presently, there are over 7 million Java developers, with around 0.5 million new coders joining the Java community each year. This showcases the level of aggressive competition in the field for Java developers.

How can Java professionals achieve lucrative jobs in the cut-throat Java job market? Java is unquestionably an excellent way to begin for people who are new to coding. Even the Java professionals who have sustained themselves in the market till now need to stay updated with market needs to scale their careers to new heights. However, getting past the screenings and interviews for a Java job is no cakewalk. Coders have to go through various technical rounds of tests and interviews, and adequate preparation of commonly asked questions in a Java job interview is crucial for selection. One can unearth smart tactics from specially compiled books that put the candidate in charge and push them to do their best. Acquiring additional skills in Java-related technologies can also give you an edge in the Java job market.

What is the scope for other Java-related technologies/platforms? Java professionals can multiply their prospects for a high-paying job by upgrading their skills and mastering related tools and frameworks. JavaScript, for example, has several frameworks (Vue.js, jQuery, Angular.js, and React.js) with a strong presence in the market and is continually expanding. As per the DevSkiller report, JavaScript stands as the most sought-after IT skill among developers worldwide, and at least 80 percent of websites rely on a third-party JavaScript web framework or library to perform client-side scripting tasks. To understand the potential of Java in the job market, it is also necessary to know how it fares in contrast to other popular languages.
So, before we move further, let's take a peek at Java's performance in contrast to other competing languages such as Python and C++.

Where does Java stand in comparison to other eminent languages like Python and C++? The most common programming languages used besides Java are C++ and Python, and they have been the foundation for almost all prominent development projects worldwide. C++ has grown in popularity as a fast, compiled programming language, and it is among the first languages that a newbie programmer learns. In contrast to Java, however, the language has lower portability. C++ programmers are anticipated to have a satisfactory number of opportunities at least until 2022, and candidates with substantial expertise will have bright prospects and multiple avenues in C++ programming. Python, on the other hand, is an interpreted language that is modern and quick to write; Python code is often 3-4 times shorter than the equivalent Java code. There has always been a tussle between Python and Java in terms of demand. Python programmers have a very strong job market for web development projects, so there is always business for a Python coder. Java remains popular because of its platform independence. Programmers have used the language to give life to many popular apps and software systems, and Java is also being employed to create solutions for machine learning, genetic programming, and multi-robotic systems, all of which have a promising future. No wonder the Java job market continues to thrive.

Conquer greater heights in the Java job market with robust preparedness! Some of the world's biggest companies, such as Twitter, Google, LinkedIn, eBay, and Amazon, use Java to establish a coherent architecture between their web applications and backend systems. This fact emphasizes the scope and appeal of Java programmers. As a result, the salaries of Java programmers in many countries are among the highest in the computer and internet networking industries. Artificial Intelligence, Big Data, and Blockchain are some of the new technologies where the scope of Java is likely to expand. Therefore, staying on your feet and educating yourself to meet industry standards improves your chances of landing a successful job in Java programming. A great deal of preparation with assistance from Vibrant Publishers will help you accomplish this goal. Remember, friends, the best way to predict your future is to create it! We wish you success!

The new edition of the book, Core Java Interview Questions You'll Most Likely Be Asked, released in Sep 2021, has all the goods of the old edition plus additions like the latest Java interview questions, scenario-based questions, and a tutorial on building an ATS-compliant resume. The book is ideal for job seekers with zero to five years of experience.
Searching Algorithms

by Vibrant Publishers on May 22, 2022
Searching algorithms belong to a category of widely used algorithms in computer science. They are mainly used to look up an entry in data structures such as linked lists, trees, graphs, etc. In this blog, we will discuss the working, types, and a few implementations of searching algorithms.

Searching Algorithm: A searching algorithm is used to locate and retrieve an element, say x, from a data structure where it is stored, say a list. The algorithm returns the location of the searched element or indicates that it is not present in the data structure. Searching algorithms are categorized into two types depending on their search strategy.

Sequential Search: These algorithms linearly search the whole data structure by visiting every element. They are, therefore, comparatively slower. An example is the linear search algorithm.

Interval Search: These algorithms search in a sorted data structure. The idea is to target the center of the data structure and then continue the search in one of the two halves. These algorithms are efficient. An example is the binary search algorithm.

Next, let's discuss some commonly used searching algorithms.

Linear Search: This very basic searching algorithm traverses the data structure sequentially, visiting every element one by one until a match is found. If the element is not present in the data structure, -1 is returned. Linear search applies to both sorted and unsorted data structures. Although the implementation is quite easy, linear search has a time complexity of O(n), where n is the size of the data structure; its running time grows linearly with the size, so it is quite slow in terms of efficiency.

Binary Search: Binary search works on a sorted data structure. It finds the middle of the list. If the middle element is the required element, it is returned as the result. Otherwise, if the middle element is smaller than the required element, the right half is considered for further search; else the left half is considered. The middle of the selected half is then examined and compared with the required value, and the process is repeated until the required value is found or the search range is empty. Binary search, with a time complexity of O(log n), is an efficient search algorithm, especially for large, sorted data records.

Interpolation Search: Interpolation search, sometimes called extrapolation search, is a refinement of binary search for sorted data. Instead of always probing the middle, it estimates the probable position of the required element from its value, which saves repeatedly halving the data structure as binary search does. For an array arr[], let x be the element to be searched, 'start' the starting index, and 'end' the last index. The probe position for element x is calculated by the formula:

position = start + [ (x - arr[start]) * (end - start) / (arr[end] - arr[start]) ]

The element at the computed position is returned if it is a match. If it is less than the required element, the search continues in the right sub-array; else it continues in the left sub-array. The process is repeated until the element is found or the search range shrinks to zero. Interpolation search works only when the data structure is sorted, and when the elements are evenly distributed, its average run time is O(log(log n)).
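Here is a minimal Java sketch of the iterative binary search described above; the class and method names are illustrative choices, not code from the original post.

public class BinarySearchDemo {

    // Iterative binary search on a sorted array.
    // Returns the index of x, or -1 if x is not present.
    static int binarySearch(int[] a, int x) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high)
            if (a[mid] == x) return mid;
            if (a[mid] < x) low = mid + 1;     // search the right half
            else high = mid - 1;               // search the left half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        System.out.println(binarySearch(sorted, 23));  // 5
        System.out.println(binarySearch(sorted, 7));   // -1
    }
}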
Breadth-First Search Algorithm: BFS is used to search for a value in a graph or tree data structure. It traverses the vertices level by level and maintains two lists, one of visited vertices and one of non-visited vertices; it is implemented using a queue data structure. The idea is to visit every node at one level (breadth-wise) before moving to the next level of the graph. The algorithm takes the first element from the queue, adds it to the visited list, and traverses all the adjacent vertices (neighbors) of this element. If a neighbor is not present in the visited list, it has not been traversed yet, so it is placed at the back of the queue and in the visited list. If a neighbor is already in the visited list, it has been traversed and is ignored. The process is repeated until the required element is found or the queue is empty. The following example will make the concept of breadth-first traversal clear.

Depth First Search Algorithm: This is another search algorithm used for graphs and trees. In contrast to the BFS algorithm, DFS works on the strategy of backtracking: the algorithm keeps moving to deeper levels (depth-wise) until there are no more unvisited nodes, then backtracks to the upper level and repeats the process for the other nodes. The DFS algorithm is implemented using a stack data structure (or recursion). The algorithm pops an element from the top of the stack and places it in the visited list, then examines all the nodes adjacent to this element. If an adjacent node is not present in the visited list, it has not been traversed yet and is pushed onto the top of the stack. These steps are repeated until the element is found or the stack is empty. See the following example for a better understanding of depth-first traversal.

Next, let's discuss some common algorithm solutions for searching problems.

Problem 1: Find the largest element in an array of unique elements.
Algorithm: We can use linear search to find the largest element in an array of size n; the time complexity is O(n). Initialize an integer max with the first element of the array. In a loop, visit each remaining element and compare it with max; if the element is greater than max, update max to that element. When the loop ends, max contains the largest element of the array.

Problem 2: Find the position of an element in an infinite sorted array.
Algorithm: A sorted array indicates that a binary search algorithm can work here. Another point to note is that the array is infinite, which means we don't know the maximum index or upper bound of the array. To solve such a problem, we first find an upper index and then apply binary search within those bounds, as in the sketch below. To find the upper bound of the infinite array: initialize integers min, max, and value as 0, 1, and arr[0]. Run a loop while value is less than the element we are looking for: set min to max, double max, and store arr[max] in value. At the end of the loop, min and max hold the lower and upper bounds of the search range, and we can call binary search on these values. For the binary search: find the mid index as min + (max - min) / 2. If arr[mid] is the value we are looking for, return mid. If arr[mid] is greater than the search value, continue the binary search on the left half, taking mid - 1 as the maximum bound; else continue on the right half, taking mid + 1 as the minimum bound.
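The bound-doubling idea of Problem 2 can be sketched in Java as follows. A plain, finite array stands in for the "infinite" input, and the class and method names are illustrative assumptions.

public class UnboundedSearchDemo {

    // Search a sorted array whose upper bound is treated as unknown:
    // grow the window by doubling the upper index, then run a bounded
    // binary search inside that window.
    static int searchUnbounded(int[] a, int x) {
        int min = 0, max = 1;
        // Double max until a[max] is at least x (or we run off the real array).
        while (max < a.length && a[max] < x) {
            min = max;
            max = 2 * max;
        }
        return binarySearch(a, x, min, Math.min(max, a.length - 1));
    }

    static int binarySearch(int[] a, int x, int min, int max) {
        while (min <= max) {
            int mid = min + (max - min) / 2;
            if (a[mid] == x) return mid;
            if (a[mid] < x) min = mid + 1;
            else max = mid - 1;
        }
        return -1; // not found
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11, 13, 15, 17, 19, 21};
        System.out.println(searchUnbounded(sorted, 13)); // 6
        System.out.println(searchUnbounded(sorted, 4));  // -1
    }
}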
Problem 3: Find the number of occurrences of an element in a sorted array.
Algorithm: To find how many times an element appears in a sorted array, we can use either linear or binary search. With linear search, maintain two integer variables, result and key. Loop over the entire array and, on every occurrence of the key, increment result by 1. At the end of the loop, result contains the number of occurrences of the key.

Problem 4: Find if a given array is a subset of another array.
Algorithm: For this problem, we don't know whether the arrays are sorted. The linear-search approach runs two loops over array 1 and array 2: the outer loop selects elements of array 2 one by one, and the inner loop performs a linear search in array 1 for that element. The complexity of this algorithm is O(m x n), where m and n are the sizes of array 1 and array 2, respectively. The problem can also be solved using binary search after sorting. First sort array 1; the time complexity of sorting is O(m log m). Then, for each element of array 2, call binary search on array 1, which takes O(n log m) in total. The overall time complexity of this approach is O(m log m + n log m), where m and n are the sizes of arrays 1 and 2, respectively.

Ending Note: In this blog on searching algorithms, we discussed a very basic type of algorithm. Search algorithms are among the most commonly used algorithms, as the concept of searching data and records is quite common in software development. Records from databases, servers, files, and other storage locations are searched and retrieved frequently, and searching algorithms with efficient time and storage complexity play a key role here.

Get one step closer to your dream job! Prepare for your programming interview with Data Structures & Algorithms Interview Questions You'll Most Likely Be Asked. This book has a whopping 200 technical interview questions on lists, queues, stacks, and algorithms and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.
Huffman Coding

by Vibrant Publishers on May 22, 2022
The concept of data compression is very common in computer applications. Information is transmitted as a bit-stream of 0s and 1s over the network, and the goal is to transmit it with the minimum number of bits; such transmission is faster in terms of speed and bandwidth. Huffman coding is one such data compression technique. This blog discusses Huffman coding, how it works, its implementation, and some applications of the method.

What is Huffman Coding? This data compression method assigns codes to unique pieces of data for transmission. The data byte that occurs most often is assigned the shortest code, whereas bytes that occur less frequently are assigned longer codes. In this way, the whole data is encoded into a shorter stream of bits. The two types of encoding used by Huffman coding are static Huffman encoding and dynamic Huffman encoding. Static Huffman encoding encodes text using a fixed code table, while dynamic Huffman coding adapts the codes according to the frequency of the data characters; this approach is often known as variable encoding.

Huffman Coding Implementation: Huffman coding is done in two steps. The first step is to build a binary tree of data characters based on the character frequencies. The second step is to encode the characters into bits of 0s and 1s using this tree.

Building a Huffman Tree: The binary tree consists of two types of nodes: leaf nodes and parent (internal) nodes. Each node stores the number of occurrences of the character(s) it covers. A parent or internal node has two children; the left child is indicated by bit 0 and the right child by bit 1. The algorithm for building a Huffman binary tree using a priority queue includes the following steps:
1. Each character is added to the priority queue as a leaf node, keyed by its frequency.
2. While more than one node remains in the queue, remove the two lowest-frequency nodes from the front of the queue and create a parent node with these two as child nodes. The frequency of the parent node is the sum of the two frequencies. Add this parent node back to the priority queue.
Consider the following example for a better understanding. Suppose we have characters p, q, r, s, and t with frequencies 4, 6, 7, 7, and 16. Now, build the binary tree.

Encoding the Huffman binary tree: As discussed above, left edges are assigned bit 0 and right edges bit 1. In this way, a code is generated for every path from the root node to a leaf node.

Huffman Compression Technique: While building the Huffman tree, we have applied the technique of Huffman compression, also called Huffman encoding: the data characters are encoded into bits of 0s and 1s. This method reduces the overall bit size of the data, which is why it is called a compression technique. Let's see how our data characters p, q, r, s, and t are compressed. Considering that each character is represented by 8 bits (1 byte) in computer language, the total bit size of the data characters (4 p, 6 q, 7 r, 7 s, and 16 t) is 40 * 8 = 320 bits. After the Huffman compression algorithm, the bit size of the data reduces to 40 + 40 + 88 = 168 bits.

Huffman Decompression Technique: For decompression, we need the codes (or, equivalently, the Huffman tree). Reading the encoded bits one by one, we traverse the Huffman tree starting from the root node, moving to the left child on a 0 bit and to the right child on a 1 bit. When a leaf node is encountered, its character is output and we start again from the root. For example, to decode character t we start from the root node and follow t's code bits until we reach the leaf node containing t.
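Here is a minimal Java sketch of the tree-building and code-generation steps described above, using the post's example frequencies. The class, field, and method names are illustrative assumptions, and when two nodes have equal frequency the exact bit patterns depend on how ties are broken, so the printed codes may differ from the post's figure while still being valid Huffman codes.

import java.util.HashMap;
import java.util.Map;
import java.util.PriorityQueue;

public class HuffmanDemo {

    // A Huffman tree node: leaves hold a character, internal nodes hold
    // only the combined frequency of their two children.
    static class Node {
        final char symbol;
        final int frequency;
        final Node left, right;
        Node(char symbol, int frequency) { this(symbol, frequency, null, null); }
        Node(char symbol, int frequency, Node left, Node right) {
            this.symbol = symbol;
            this.frequency = frequency;
            this.left = left;
            this.right = right;
        }
        boolean isLeaf() { return left == null && right == null; }
    }

    // Build the Huffman tree with a min-priority queue ordered by frequency:
    // repeatedly remove the two lowest-frequency nodes and merge them under
    // a new parent whose frequency is their sum.
    static Node buildTree(Map<Character, Integer> frequencies) {
        PriorityQueue<Node> queue =
                new PriorityQueue<>((a, b) -> Integer.compare(a.frequency, b.frequency));
        for (Map.Entry<Character, Integer> e : frequencies.entrySet()) {
            queue.add(new Node(e.getKey(), e.getValue()));
        }
        while (queue.size() > 1) {
            Node left = queue.poll();
            Node right = queue.poll();
            queue.add(new Node('\0', left.frequency + right.frequency, left, right));
        }
        return queue.poll();
    }

    // Walk the tree and record the code of each leaf: 0 for a left edge,
    // 1 for a right edge.
    static void collectCodes(Node node, String prefix, Map<Character, String> codes) {
        if (node == null) return;
        if (node.isLeaf()) {
            codes.put(node.symbol, prefix.isEmpty() ? "0" : prefix);
            return;
        }
        collectCodes(node.left, prefix + "0", codes);
        collectCodes(node.right, prefix + "1", codes);
    }

    public static void main(String[] args) {
        // Frequencies from the example in the post: p=4, q=6, r=7, s=7, t=16.
        Map<Character, Integer> frequencies = new HashMap<>();
        frequencies.put('p', 4);
        frequencies.put('q', 6);
        frequencies.put('r', 7);
        frequencies.put('s', 7);
        frequencies.put('t', 16);

        Map<Character, String> codes = new HashMap<>();
        collectCodes(buildTree(frequencies), "", codes);
        // t, the most frequent character, gets the shortest code; the others get longer codes.
        codes.forEach((symbol, code) -> System.out.println(symbol + " -> " + code));
    }
}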
Time Complexity of the Huffman Coding Algorithm: As Huffman coding builds its binary tree with a priority queue, constructing the tree takes O(n log n) time, where n is the number of unique data characters being encoded.

Ending Note: In this blog on Huffman coding, we have discussed an algorithm that forms the basis of many compression techniques used in software development. Compression formats like GZIP and WinZip use Huffman coding, and image compression formats like JPEG and PNG also rely on the Huffman algorithm. Although it is sometimes deemed a slow technique, especially for digital media compression, the algorithm is still widely used due to its storage efficiency and straightforward implementation.

Get one step closer to your dream job! Check out the books we have, which are designed to help you clear your interview with flying colors, no matter which field you are in. These include HR Interview Questions You'll Most Likely Be Asked (Third Edition) and Innovative Interview Questions You'll Most Likely Be Asked.
Graphs in Data Structures

by Vibrant Publishers on May 22, 2022
In this blog on Graphs in Data Structures, we will learn about graph representation, operations, some graph-related algorithms, and their uses. The graph data structure is widely used in real-life applications. Its main implementation is seen in networking, whether on the internet or in transportation. A network of nodes is easily represented through a graph; here, nodes can be people on a social media platform, like Facebook friends or LinkedIn connections, or electric and electronic components in a circuit. Telecommunication and civil engineering networks also make use of the graph data structure.

Graphs in Data Structures: A graph consists of a finite number of nodes connected through a set of edges. Graphs visualize a relationship between objects or entities: the nodes are the entities, and the relationships between them are defined through paths called edges. Nodes are also called vertices of the graph. Edges represent a unidirectional or bi-directional relation between the vertices; an undirected edge between two nodes A and B represents a bi-directional relation between them. In a rooted graph such as a tree, the root node is the ancestor of all other nodes and has no parent itself, and there is a single root node. Leaf nodes have no child nodes, only a parent node, so there are no outgoing edges from leaf nodes.

Types of Graphs:
Undirected Graph: A graph with all bi-directional edges is an undirected graph. In the undirected graph below, we can traverse from node A to node B as well as from node B to node A, as the edges are bi-directional.
Directed Graph: This graph consists of all unidirectional edges, each pointing in a single direction. In the directed graph below, we can traverse from node A to node B but not from node B to node A, as the edges are unidirectional.

A tree is a special type of undirected graph. Any two nodes in a tree are connected by exactly one path, so there are no closed loops or cycles in a tree, whereas cycles can be present in graphs. A tree with n nodes has exactly n-1 edges. In graphs, nodes can have more than one parent node, but in trees each node has only one parent node, except for the root node, which has no parent.

Graph Representation: Usually, graphs in data structures are represented in two ways: an adjacency matrix or an adjacency list. First, what is adjacency in a graph? A vertex or node is adjacent to another vertex if they have an edge between them; otherwise, the nodes are non-adjacent. In the graph below, vertex B is adjacent to C but non-adjacent to node D.

Adjacency Matrix: In the adjacency matrix, a graph is represented as a 2-dimensional array. Both the rows and the columns represent the vertices, and the values 1 and 0 tell whether an edge is present between the two vertices or not. For example, if the value of M[A][D] is 1, there is an edge between vertices A and D of the graph. See the graph below and then its adjacency matrix representation for a better understanding. The adjacency matrix gives the benefit of an efficient edge lookup, that is, checking whether two vertices have a connecting edge. But at the same time, it requires space for every possible pair of vertices, so for fast results we have to compromise on space.
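To make the two representations concrete, here is a minimal Java sketch that builds both the adjacency matrix described above and the adjacency list described next for a small undirected graph. The vertex numbering (0-3 standing in for A-D) and the edge set are illustrative assumptions, not taken from the post's figures.

import java.util.ArrayList;
import java.util.List;

public class GraphRepresentationDemo {

    public static void main(String[] args) {
        // A small undirected graph with vertices 0..3 (think A..D) and
        // edges A-B, A-C, B-C, C-D.
        int n = 4;
        int[][] edges = {{0, 1}, {0, 2}, {1, 2}, {2, 3}};

        // Adjacency matrix: matrix[u][v] == 1 means an edge exists between u and v.
        int[][] matrix = new int[n][n];

        // Adjacency list: list.get(u) holds every vertex adjacent to u.
        List<List<Integer>> list = new ArrayList<>();
        for (int i = 0; i < n; i++) list.add(new ArrayList<>());

        for (int[] e : edges) {
            int u = e[0], v = e[1];
            matrix[u][v] = 1;
            matrix[v][u] = 1;        // undirected: mark both directions
            list.get(u).add(v);
            list.get(v).add(u);
        }

        // Edge lookup is O(1) with the matrix...
        System.out.println("Edge A-D? " + (matrix[0][3] == 1));   // false
        // ...while the list only stores the edges that actually exist.
        System.out.println("Neighbours of C: " + list.get(2));    // [0, 1, 3]
    }
}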
Adjacency Lists: In the adjacency list representation, an array of lists is used to represent a graph. Each list contains all the vertices adjacent to a particular vertex; for example, list LA will contain all the vertices adjacent to vertex A of the graph. As the adjacency list stores information only for edges that are actually present in the graph, it is a space-efficient representation. For sparse graphs, where most possible edges are absent, an adjacency list is a good option, whereas an adjacency matrix would take up a lot of unnecessary space. The adjacency list of the above graph is shown below.

Graph Operations: Some common operations that can be performed on a graph are: element search, graph traversal, insertion into the graph, and finding a path between two nodes.

Graph Traversal Algorithms: Visiting the vertices of a graph in a specific order is called graph traversal. Many graph applications require traversing the graph according to its topology. Two common algorithms for graph traversal are Breadth-First Search and Depth-First Search.

Breadth-First Search Graph Traversal: The breadth-first search approach visits the vertices breadthwise: it covers all the vertices at one level of the graph before moving on to the next level. The breadth-first search algorithm is implemented using a queue and includes three steps: pick a starting vertex (level 0 of the graph) and insert all its adjacent vertices into a queue; dequeue a vertex, mark it visited, and insert its unvisited adjacent vertices into the queue, which form the next level (if a dequeued vertex has no adjacent nodes, simply move on); repeat these steps until the whole graph is covered or the queue is empty. Consider the graph below, where we have to search for node E. First, we visit node A, then move to the next level and visit nodes B and C. When the nodes at this level are finished, we move on to the next level and visit nodes D and E. Nodes once visited can be stored in an array, whereas nodes waiting to be visited are stored in the queue. The time complexity of BFS is O(V + E), where V is the number of vertices and E is the number of edges.

Depth-First Search Graph Traversal: The depth-first search (DFS) algorithm works on the approach of backtracking: it keeps moving to deeper levels until it can go no further, then backtracks. DFS can be implemented using a stack and includes the following steps: pick a node and push all its adjacent vertices onto a stack; pop a vertex, mark it visited, and push its unvisited adjacent nodes onto the stack (if there are no unvisited adjacent vertices, simply continue with the next vertex on the stack); repeat these steps until the required result is achieved or the stack is empty. Consider the graph below: to reach node E we start from node A and move downwards, as deep as we can go, reaching node C. We then backtrack to node B to go downwards towards another adjacent vertex of B, and in this way we reach node E.
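Here is a minimal Java sketch of the breadth-first traversal just described, using the adjacency-list representation from earlier. Vertices 0-4 stand in for A-E, and the edge set and names are illustrative assumptions rather than the post's exact figure.

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class BfsTraversalDemo {

    // Breadth-first traversal over an adjacency-list graph: visit a vertex,
    // then enqueue all of its unvisited neighbours, so each level of the
    // graph is fully processed before the next one.
    static List<Integer> bfs(List<List<Integer>> adjacency, int start) {
        List<Integer> order = new ArrayList<>();
        boolean[] visited = new boolean[adjacency.size()];
        Queue<Integer> queue = new ArrayDeque<>();

        visited[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int vertex = queue.poll();
            order.add(vertex);
            for (int neighbour : adjacency.get(vertex)) {
                if (!visited[neighbour]) {      // already-visited neighbours are ignored
                    visited[neighbour] = true;
                    queue.add(neighbour);
                }
            }
        }
        return order;
    }

    public static void main(String[] args) {
        // Vertices 0..4 (think A..E); edges A-B, A-C, B-D, B-E.
        List<List<Integer>> adjacency = new ArrayList<>();
        for (int i = 0; i < 5; i++) adjacency.add(new ArrayList<>());
        int[][] edges = {{0, 1}, {0, 2}, {1, 3}, {1, 4}};
        for (int[] e : edges) {
            adjacency.get(e[0]).add(e[1]);
            adjacency.get(e[1]).add(e[0]);
        }
        System.out.println(bfs(adjacency, 0)); // [0, 1, 2, 3, 4] -> A, B, C, D, E
    }
}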
IsReachable is a common problem scenario for graphs in data structures. The problem is to find whether there is a path between two vertices, that is, whether one vertex is reachable from the other. If a path is present, the function returns true, else false. This problem can be solved with both of the traversal approaches discussed above, BFS and DFS. Consider a function with three input parameters: the graph and the two vertices we are interested in, call them u and v. Using a BFS algorithm, the solution consists of the following steps: Initialize an array to store visited vertices, of size equal to the size of the graph. Create a queue and enqueue the first vertex, in this case u = 1. Mark u as visited and store it in the array. Add all the adjacent vertices of u to the queue. Now, dequeue the front element from the queue and enqueue all its adjacent vertices. If any vertex is the required vertex v, return true; else continue visiting adjacent nodes and keep adding them to the queue. Repeat this process in a loop. Since there is a path from vertex 1 to vertex 3, we will get a true value. If there is no path, for example from node 1 to node 8, the queue will run empty and false will be returned, indicating that vertex v is not reachable from vertex u. Below is an implementation of the IsReachable function for the above graph in the Python programming language.

Weighted Graphs: Weighted graphs, whether undirected or directed, have weights or costs assigned to the edges present in the graph. Such weighted graphs are used in problems such as finding the shortest path between two nodes of the graph. These graph implementations help in several real-life scenarios; for example, a weighted graph approach is used to compute the shortest route on a map or to trace out the shortest path for a delivery service in a city. Using the cost on each edge, we can compute the fastest path. Weighted graphs lead us to an important application of graphs, namely finding the shortest path on the graph. Let's see how we can do it.

Finding the Shortest Path on the Graph: The goal is to find the path between two vertices such that the sum of the costs of its edges is minimal. If the cost or weight of each edge is one, then the shortest path can be calculated using a breadth-first search approach. If the costs differ, other algorithms are used to find the shortest path. Three algorithms are commonly used for this problem: Bellman-Ford's algorithm, Dijkstra's algorithm, and the Floyd-Warshall algorithm.

Minimum Spanning Tree: A spanning tree is a sub-graph of an undirected graph containing all the vertices of the graph with the minimum possible number of edges; if any vertex is missing, it is not a spanning tree. The total number of spanning trees that can be formed from a complete graph with n vertices is n^(n-2). A spanning tree in which the sum of the weights of the edges is minimum is called a Minimum Spanning Tree. Let's elaborate on this concept through an example. Consider the following graph. The possible spanning trees for the above graph are shown below. The Minimum Spanning Tree for the above graph is the one where the sum of the edge weights is 7.

Incidence Matrix: The incidence matrix relates the vertices and edges of a graph. It stores vertices as rows and edges as columns, in contrast to the adjacency matrix, where both rows and columns are vertices. An incidence matrix of an undirected graph is an n x m matrix A, where n is the number of vertices of the graph and m is the number of edges between them. If A[i][j] = 1, it means that vertex vi and edge ej are incident.

Incidence List: Just like the adjacency list, there is also an incidence list representation of a graph. The incidence list is implemented using an array of lists that stores, for each vertex, the edges incident to it; in the graph below, the list for vertex A holds all the edges incident to A.

Ending Note: In this blog on Graphs in Data Structures, we discussed the implementation of graphs, their operations, and real-life applications. Graphs are widely used, especially in networking, whether over the internet or for mapping out physical routes.
Hopefully, now you have enough knowledge regarding a data structure with extensive applications.   Get one step closer to your dream job! We have books designed to help you clear your interview with flying colors, no matter which field you are in. These include HR Interview Questions You’ll Most Likely Be Asked (Third Edition) and Interview Questions You’ll Most Likely Be Asked.
Swagger

by Vibrant Publishers on May 22, 2022
Introduction

In this world of constantly changing specifications and requirements, it's not an easy task to keep your web services up to date. Meticulous planning and documentation are a must to win the race. There are a few tools in the market that will help you do the job. We will discuss one such tool, Swagger, an API documentation and development toolset.

Swagger
Swagger is not just a platform but a set of tools that helps you document and develop APIs. All these APIs follow the OpenAPI Specification standard. The official website (www.swagger.io) describes three different toolsets: Swagger Editor, Swagger UI, and Swagger Codegen. Let us have a detailed look into these toolsets.

Swagger Editor
The Swagger Editor is used for designing and creating APIs based on the OpenAPI Specification. The editor can be installed on your computer, or you can use it by logging into your Swagger account. It has an intelligent auto-completion feature and a visual editor where you can interact with the API while you are still creating it. Below is a screenshot from the Swagger Editor.

Swagger UI
Swagger UI is another tool that allows you to visually inspect and interact with APIs. These APIs can be created with the Swagger Editor, and there is provision for uploading other APIs as well. Once an API is given to the tool, it automatically generates a visual schema of the API. One very useful feature of this tool is that it also creates documentation for your API, which helps when implementing the API on the back end or the client side. You can execute the API calls right there and test their functionality as well.

Swagger Codegen
The Codegen tool takes care of generating the server stubs and client-side SDKs for the API you are building, so you don't need to worry about those aspects while developing an API. It supports over 20 programming languages, and creating these stubs and SDKs is quite an easy task in Codegen. This helps you focus on developing APIs rather than thinking about the client and server implementations.

Opening an account in Swagger
Opening an account in Swagger is quite easy: just visit www.swagger.io and sign up by providing some basic details. You can also open an account using your GitHub credentials. Once you create an account, you will be taken to the SwaggerHub dashboard; below is a screenshot of it. From the dashboard, you can create or import your APIs. These APIs can be public or private according to your requirement; if you need other teams on Swagger to see and collaborate on the API development, they have to be public. Once you are working on your APIs, the view changes to something like the one shown below. Here, you have access to all the tools: the editor, the UI, API docs, and Codegen. This makes the entire process a seamless experience. Once the API is created, there is an option to download the corresponding server stub and client SDK. The default account is free, which limits use to one user per team; if you need more, paid options are available.

Summary
Swagger is a set of tools targeted at API development and documentation. The visual editor and the Codegen tool are some of the compelling features of this platform. As an API developer, it is worth trying Swagger for your use cases.
POSTMAN

by Vibrant Publishers on May 22, 2022
Introduction

APIs are the heart of every web service. When we deal with enterprise web services, there may be requirements for building thousands of APIs, and it is not a simple task for a developer to create all of them manually. To address this problem there are many API tools available that handle the complete lifecycle of an API. In this article, we will discuss one such tool, Postman.

POSTMAN
According to the official website (www.postman.com), Postman is a collaboration platform for API development. It allows multiple people to come together and work on creating APIs for their applications. Beyond that, it is a platform that takes care of all the processes involved in API creation and management.

Use of POSTMAN in the API world
As mentioned above, the platform takes care of all requirements in API development. Let us take a look at some of the features Postman offers:

Accessibility: The platform is a hosted service, and all you need to start using it is a Postman account. You can either use the browser to log in to Postman or use the desktop app and start working.

Organized: The APIs you create with Postman can be organized well with a feature called collections. They can be placed in folders and subfolders based on your projects.

Monitoring: Another exciting feature of Postman is the ability to monitor the APIs you have created. This makes your life much easier thanks to instant notification of failures in any of your API services.

Testing: Nothing is solid unless you test it thoroughly. Postman offers both manual and automated testing of your APIs. You can define test cases, discuss them with your team on the platform itself, and put the tests into action. These tests can also be integrated into your CI/CD pipeline.

Versioning: You might want to provide APIs for different customers that utilize your services, or there may be different releases of the same API whenever your application releases new versions. In these cases, versioning your API is important. The Postman platform offers tagging and versioning provisions that you and your team can utilize effectively.

Getting started with POSTMAN
In order to use Postman, you need to sign up for an account first. Below is the login/signup screen of Postman. Once you have an account, you can download the Postman application and start building your APIs, test cases, etc. In Postman you have the choice of selecting different plans based on your requirements and pricing. Initially you will be provided with a Free plan, which places certain limits on the number of APIs you can create and manage. When your requirements grow and there is a team working with you, it's good to choose a Team, Business, or Enterprise plan as per your convenience. Below is a screenshot of the different plans that Postman offers.

Using POSTMAN
After successfully creating an account in Postman, you can download the client application on your desktop and start working with it. Once you log in to the Postman application, the dashboard will look like the one below. Here you can create your APIs, organize them into collections, and perform monitoring, testing, etc. on them. The UI is very intuitive and can be learned quite quickly. You can create APIs based on JSON or YAML schema formats, and there is also support for OpenAPI, GraphQL, and RAML schemas.
There is a very detailed tutorial on how to use Postman on the official website: https://learning.postman.com. It is worth a visit if you want to learn about the platform in detail.

Summary
Postman is a platform to create and manage the lifecycle of APIs. It is versatile and has a lot of features, including team collaboration and monitoring. Using Postman for your project will not only save time but also provide a clean and neat way of managing your APIs.
Java Collections – List, Set, Queue & Map

by Vibrant Publishers on May 22, 2022
Introduction: Most real-world applications work with collections such as files, variables, records from files, or database result sets. The Java language provides a collections framework that you can use to create and manage various types of object collections. This blog describes the most common collection classes and how to start using them.

List: The list is an ordered collection, also known as a sequence. Because the list is ordered, you have complete control over where list items are placed in it. One thing to note here is that a Java list collection can only contain objects.

Declare a List:

Using the raw type:

List listOfStrings = new ArrayList();

Using the diamond operator:

List<String> listOfStrings = new ArrayList<>();

Here the element type is not repeated during the ArrayList instantiation, because the compiler infers it from the type on the left of the expression. Note that the ArrayList object is assigned to a variable of the List type: in Java, you can assign an object of one type to a variable of another type as long as the variable's type is a superclass of, or an interface implemented by, the object's class.

List Methods:

To place an item in a list, use the add() method:

List<Integer> listOfIntegers = new ArrayList<>();
listOfIntegers.add(Integer.valueOf(100));

Note that the add() method adds elements to the end of the list. To ask how big the list is, call size(). In the above example, to get the current list size we call:

listOfIntegers.size();

To retrieve an item from the list, call get() and pass it the index of the item you want. For example, if you want to get the first item from listOfIntegers, you specify it like this:

listOfIntegers.get(0);

To go through all the records in a list, you iterate through the collection. You can do that easily because List implements java.lang.Iterable. When java.lang.Iterable is implemented by a collection, the collection is iterable: you begin at one end and work through it, item by item, until you've processed all the items. To visit each item in a list, you can do the following:

for (Integer i : listOfIntegers) {
  System.out.print("Integer value is : " + i);
}

Set: A Java Set is a collection construct that, by definition, contains unique elements, that is, no duplicates. The Java Set collection can only contain objects, and it is not an ordered list, which means it does not care about the order of the elements. Because Set is an interface, you cannot instantiate it directly.

Types of Java Set: In Java, there are three commonly used implementations of Set: HashSet, LinkedHashSet, and TreeSet.

Declare a Set:

Set<Integer> setOfNumbers = new HashSet<>();

HashSet is the most widely used Set implementation; it stores unique elements but does not guarantee any particular ordering.

Set<String> setOfNames = new LinkedHashSet<>();

The difference between LinkedHashSet and HashSet is that LinkedHashSet keeps the elements in insertion order.

Set<Integer> setOfNumbers = new TreeSet<>();

In a TreeSet, the ordering is based on the values of the inserted elements; it follows their natural ordering by default.
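As a small sketch of the ordering behaviour of the three Set implementations described above (the class and variable names are illustrative, and the HashSet's printed order may vary between runs):

import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Set;
import java.util.TreeSet;

public class SetDemo {

    public static void main(String[] args) {
        Integer[] values = {42, 7, 19, 7, 3};   // note the duplicate 7

        Set<Integer> hashSet = new HashSet<>();              // unique, no ordering guarantee
        Set<Integer> linkedHashSet = new LinkedHashSet<>();  // unique, keeps insertion order
        Set<Integer> treeSet = new TreeSet<>();              // unique, sorted by natural order

        for (Integer v : values) {
            hashSet.add(v);
            linkedHashSet.add(v);
            treeSet.add(v);
        }

        System.out.println("HashSet:       " + hashSet);        // order not guaranteed
        System.out.println("LinkedHashSet: " + linkedHashSet);  // [42, 7, 19, 3]
        System.out.println("TreeSet:       " + treeSet);        // [3, 7, 19, 42]
    }
}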
Queue: A Java Queue is a collection that works on the FIFO (First In, First Out) principle: the elements that are added first are removed first. LinkedList and PriorityQueue are the most common Queue implementations in Java.

Basic Queue Operations: The common operations that can be performed on a queue are addition, deletion, and iteration. Like other collections, we can also find out the size of the queue. In Java's Queue interface, the add() or offer() method inserts an element at the back of the queue, and the remove() or poll() method removes the item at the front of the queue (the classic enqueue and dequeue operations).

Map: The Map is a convenient collection construct that you can use to associate one object (the key) with another object (the value). As you can imagine, the keys of a Map must be unique and can be used later to retrieve the values. Different implementations of Map are HashMap, TreeMap, LinkedHashMap, etc. HashMap is the Map type most commonly used by programmers.

Declare a Map: A Map can be declared using the diamond operator as given below:

Map<Integer, String> sampleMap = new HashMap<>();

Here, the Integer will be the 'Key' and the String will be the 'Value'.

Basic Map Operations: The basic operations that can be performed on a Map are: put content into the Map, get content from the Map, and get the key set of the Map, which can be used to iterate over it.

Common Operations in Java 8 Collections: The collections framework in Java has many common operations that apply to all types of collections. They are summarized here.

End note: We have seen different types of collections in Java, their usage, and the main operations that each collection can perform. When dealing with large amounts of data, collections are the most commonly used data structures in the Java programming world, so it is important to understand their usage thoroughly to become a good Java developer. A Java developer will also need to understand wrapper classes, Streams, Enumeration, autoboxing, and threads.

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, queues, stacks, and Declarations and access control in Java, Exception Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels. We've updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on April 3rd, 2020, and has been updated on January 8th, 2022.
Java Streams

Java Streams

by Vibrant Publishers on May 22, 2022
Introduction: This blog talks about Java streams. The Stream API is a new addition in Java 8 that gives Java much more power when processing data. One thing to note here is that the stream discussed here has nothing to do with Java I/O streams. This tutorial focuses on Java 8 streams in detail.

Streams make processing a sequence of elements of the same datatype a cakewalk. A stream can take its input from any Java collection, from arrays, from Java I/O sources, and so on. The interesting fact here is that you don't need to iterate over these inputs yourself to process each element; iteration is built into the stream. In this blog, we will learn to create a stream and look at how stream operations work in Java. We will also take a look at the improvements Java 9 made to the Stream interface.

Creating a Stream: Let's see how a stream can be declared in Java 8. First, import the java.util.stream package to use the Java 8 Stream feature. Now, assuming we have an array arrayOfCars as the input source of the stream:

private static Car[] arrayOfCars = {
    new Car(1, "Mercedes Benz", 6000000.0),
    new Car(2, "Porsche", 6500000.0),
    new Car(3, "Jaguar", 7000000.0)
};

The stream can be created as given below:

Stream.of(arrayOfCars);

If we have a List, a stream can be created by calling the stream() method of the list itself:

sampleList.stream();

We can also create a stream from individual objects of a collection by calling the Stream.of() method:

Stream.of(arrayOfCars[0], arrayOfCars[1], arrayOfCars[2]);

Or by using the Stream.builder() method:

Stream.Builder<Car> carStreamBuilder = Stream.builder();
carStreamBuilder.accept(arrayOfCars[0]);
carStreamBuilder.accept(arrayOfCars[1]);
carStreamBuilder.accept(arrayOfCars[2]);
Stream<Car> carStream = carStreamBuilder.build();

Stream Operations: We have seen the different ways of creating a stream. Now let us look at the various stream operations. The main use of a stream is to perform a series of computational operations on its elements until a terminal operation is reached.

Let's look at an example:

List<String> flowers = Arrays.asList("Rose", "Lilly", "Aster", "Buttercup", "Clover");
flowers.stream().sorted().map(String::toUpperCase).forEach(System.out::println);

Here the stream performs sorting and mapping operations before iterating over and printing each element. Stream operations are therefore either intermediate or terminal: an intermediate operation returns another stream, so operations can be chained, while a terminal operation produces a result or a side effect and ends the pipeline.

Intermediate stream operations:

Terminal stream operations:

Improvements made by Java 9 on the Stream interface: Java 9 added the Stream.ofNullable() method to the Stream interface. It helps to create a stream from a value that may be null: if a non-null value is passed, it creates a single-element stream with that value; otherwise, it creates an empty stream. The methods takeWhile() and dropWhile() were also added to the Stream interface; they are used to obtain a subset of the input stream. In addition, Java 9 added an overloaded version of Stream.iterate() that accepts a predicate and keeps generating elements only while that predicate is true; the stream terminates as soon as the condition becomes false.
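To make the Java 9 additions concrete, here is a small hedged sketch (the values and class name are ours) showing Stream.ofNullable(), takeWhile(), dropWhile(), and the predicate-based Stream.iterate():

import java.util.stream.Stream;

public class Java9StreamDemo {
    public static void main(String[] args) {
        // ofNullable(): empty stream for null, single-element stream otherwise
        System.out.println(Stream.ofNullable(null).count());      // 0
        System.out.println(Stream.ofNullable("Jaguar").count());  // 1

        // takeWhile(): take elements while the predicate holds, then stop
        Stream.of(1, 2, 3, 7, 4, 5)
              .takeWhile(n -> n < 5)
              .forEach(System.out::println);                      // 1 2 3

        // dropWhile(): drop elements while the predicate holds, keep the rest
        Stream.of(1, 2, 3, 7, 4, 5)
              .dropWhile(n -> n < 5)
              .forEach(System.out::println);                      // 7 4 5

        // iterate(seed, hasNext, next): runs only while the predicate stays true
        Stream.iterate(1, n -> n <= 16, n -> n * 2)
              .forEach(System.out::println);                      // 1 2 4 8 16
    }
}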
End note: In this blog, we studied the Java 8 Stream features in detail, including how to create a stream, the different intermediate and terminal stream operations, and their uses, and we learned about the improvements made by Java 9 to the Stream interface. We also briefly saw how streams bring a functional style of programming to Java alongside its object-oriented style. It is important to learn and understand this newly-added feature while studying Java programming.

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, queues, stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels. We've updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on April 2nd, 2020, and has been updated on January 7th, 2022.
Deep Dive into Java Operators

Deep Dive into Java Operators

by Vibrant Publishers on May 21, 2022
Introduction: One of the most basic requirements of a programming language is to perform mathematical operations. Java is an object-oriented programming language, and it provides a rich set of operators to manipulate variables. In this blog, let's take a look at the different types of Java operators along with a few samples of code.

We can classify the Java operators into the following groups:
Arithmetic Operators
Relational Operators
Bit Operators
Logical Operators
Assignment Operator

Arithmetic Operators: Arithmetic operators are used in mathematical expressions. In the following table, you will find a list of all arithmetic operators, as well as their descriptions and example usage. The examples in the table given below assume that the value of the integer variable X is 20 and the value of variable Y is 40.

One thing to note here is the number of operands involved in each case. While the addition, subtraction, multiplication, division and modulo operators require two operands, the increment and decrement operators operate on a single operand.

The sample code given below shows the use of the above-mentioned operators in detail:

public class TestArithmeticOperators {
    public static void main(String[] args) {
        // operand variable declaration and initialization
        int a = 5; int b = 22; int c = 16; int d = 33;
        System.out.println("a + b = " + (a + b));
        System.out.println("a - b = " + (a - b));
        System.out.println("a * b = " + (a * b));
        System.out.println("b / a = " + (b / a));
        System.out.println("b % a = " + (b % a));
        System.out.println("c % a = " + (c % a));
        System.out.println("a++ = " + (a++));
        System.out.println("a-- = " + (a--));
        // Check the difference between d++ and ++d
        System.out.println("d++ = " + (d++));
        System.out.println("++d = " + (++d));
    }
}

The following output is produced:

$ javac TestArithmeticOperators.java
$ java -Xmx128M -Xms16M TestArithmeticOperators
a + b = 27
a - b = -17
a * b = 110
b / a = 4
b % a = 2
c % a = 1
a++ = 5
a-- = 6
d++ = 33
++d = 35

Relational Operators: When we want to relate or compare two values, we use relational operators in our Java code. The operator expects two operands as inputs. The table given below provides a detailed description of the relational operators in Java, assuming the values of the variables are X = 20 and Y = 40.

Let's look at a sample code that uses the above operators:

public class TestRelationalOperators {
    public static void main(String[] args) {
        // operand variable declaration and initialization
        int a = 5; int b = 22; int c = 16;
        System.out.println("a == b = " + (a == b));
        System.out.println("a != b = " + (a != b));
        System.out.println("a < b = " + (a < b));
        System.out.println("a > a = " + (a > a));
        System.out.println("b <= a = " + (b <= a));
        System.out.println("c >= a = " + (c >= a));
    }
}

After successful compilation, the above code will produce the following result:

Output:
a == b = false
a != b = true
a < b = true
a > a = false
b <= a = false
c >= a = true

Bit Operators: Java defines bit operators for the integer types: int, long, short, char, and byte. Bit operators work on the individual bits of their operands and perform bit-level operations. There are four bitwise operations and three bit-shift operations that bit operators perform.
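As a hedged companion to the worked example that follows (the class and variable names are ours, chosen to match the X = 60 and Y = 13 values used next), this sketch runs each of the four bitwise operators and the three shift operators on small int values:

public class TestBitOperators {
    public static void main(String[] args) {
        int x = 60;   // 0011 1100
        int y = 13;   // 0000 1101

        System.out.println("x & y  = " + (x & y));        // 12  (0000 1100)
        System.out.println("x | y  = " + (x | y));        // 61  (0011 1101)
        System.out.println("x ^ y  = " + (x ^ y));        // 49  (0011 0001)
        System.out.println("~x     = " + (~x));           // -61 (two's complement)

        System.out.println("x << 2 = " + (x << 2));       // 240
        System.out.println("x >> 2 = " + (x >> 2));       // 15
        System.out.println("-x >>> 28 = " + (-x >>> 28)); // 15 (unsigned shift fills with zeros)
    }
}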
For example, if we take Bitwise AND operator which is a bitwise operator, the operation will be as follows:   Suppose X = 60 and Y = 13; their binary (bit) representation would be as follows:       X = 0011 1100 Y = 0000 1101     Now when we perform a Bitwise AND operation, ie. if both bits are 1, the result is 1 else it’s 0. So here the result will be:       X & Y = 0000 1100     The following table gives a summary of the available bitwise operators and bit shift operators in Java:       Logical Operators: The following table lists the basic operations of logical operators, assuming Boolean variable X is true and variable Y is false.         Assignment Operator: We have discussed this section in detail here: Java Assignment Operator     Other Operators:   Instanceof Operator: This operator is used to manipulate an object instance and check whether the object is a class type or interface type. The instanceof Java operator uses the following format:       (Object reference-variable) instanceof (class / interface type)     If the object pointed to by the variable on the left side of the operator is an object of the class or interface on the right side of the operator, the result is true. Below is an example:         Conditional Operator (?:): The conditional operator in Java is also called ternary operator. The ternary Java operator has 3 operands. The conditional operator is used to determine the boolean values. The main purpose of this operator is to finalize which value among the two should be assigned to a variable.   The syntax would be:       variable test_variable = (expression)? value if true : value if false;       Java operator precedence: When multiple operators are used in one Java statement, it is important to understand which one will act first. To solve this confusion, the Java language has something known as ‘operator precedence.’ This will allow the highest priority operators to act first and then the other operators act, following the precedence order. It’s important to understand that in a multi-operator expression, different operator precedence can lead to very different results.     End Note: We have learned about the different types of operators in Java language that are used for performing various mathematical operations. We also saw examples of  their syntax.   Besides Java Operators, you must also be well-versed with the concepts of Flow control statements and Assertion, Java collections and Stream, Threading in Java, etc. You can take a look at our blogs on these topics for in-depth understanding.     Get one step closer to your dream job!   Prepare for your Java programming interview with Core Java Questions You’ll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, Queues, Stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels. We’ve updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 31th, 2020, and has been updated on January 12th, 2022.
Java Inner Classes and String Handling

Java Inner Classes and String Handling

by Vibrant Publishers on May 20, 2022
Introduction This Java tutorial deals with Java Inner Classes & String handling. Java Inner Classes are an extraordinary feature in Java. In simple terms, an inner class is a class declared inside a class. In a broad sense, inner classes generally include these four types:   Member inner class Local inner class Anonymous inner class Static inner class     Creation of an Inner class: You will create an inner class just like how you do it for a general class. The only difference is that the inner class is always declared inside a class. An inner class can have private access, protected access, public access, and package access.   The code given below will explain the concept in detail:     Here we declared the inner class as ‘private’ which makes it accessible only inside the outer class. The java main class will access the inner class via an instance of its outer class.     Types of Inner class 1. Member inner class:   The member inner class is the most common inner class, and its definition will be inside another class, as given below:     class Circle {   double radius = 0;   public Circle(double radius) {     this.radius = radius;   }   class Draw {     public void drawSahpe() {       System.out.println(“draw shape”);     }   } }     In this way, the class Draw looks like a member of the class Circle, which is called the outer class. Member inner class has unconditional access to all member properties and member methods of outer classes, including private members and static members.   However, we must note that when a member’s inner class owns a member variable or method with the same name as an outer class, a hidden phenomenon occurs, i.e. a member of the member’s inner class is accessed by default. If you want to access the same member of an outer class, you need to access it in the following form:       OuterClass.this.MemberVariable; OuterClass.this.MemberMethod;     2. Local inner class:   A local inner class is a class that is defined in a method or scope, and the difference between it and the member inner class is that access to the local inner class is limited to the method or within the scope.       class Animals{   public Animals() {   } } class Mammals{   public Mammals(){ }     public Animals getHerbivorous(){       class Herbivorous extends Animals{         int number =0;       }     return new Herbivorous();   } }     The local inner classes, like a local variable within a method, cannot have public, protected, private, and static modifiers.     3. Anonymous inner class:   Anonymous inner classes as it sounds don’t have a name. Using anonymous inner classes when writing code for purposes like event monitoring etc. is not only convenient but also makes code easier to maintain.   For a normal class, any number of constructors are possible. But in the case of anonymous inner classes, we can’t have any constructors just because they don’t have any name which is a must for defining a constructor. Due to this most of the anonymous inner classes are used for interface callbacks.   
Let's see an example:

interface Weight {
    int x = 81;
    void getWeight();
}

class AnonymousInnerClassDemo {
    public static void main(String[] args) {
        // WeightClass is an implementation class of the Weight interface
        WeightClass obj = new WeightClass();
        // calling the getWeight() method implemented in WeightClass
        obj.getWeight();
    }
}

// WeightClass implements the methods of the Weight interface
class WeightClass implements Weight {
    @Override
    public void getWeight() {
        // printing the weight
        System.out.print("Weight is " + x);
    }
}

Here we had to create a separate class, WeightClass, just to override the method of the interface. If we use the anonymous inner class feature instead, we can achieve the same result by including the following code block inside the class AnonymousInnerClassDemo:

Weight obj = new Weight() {
    @Override
    public void getWeight() {
        System.out.println("Weight is " + x);
    }
};

Here no separately named class is needed: the expression new Weight() { ... } creates an object of an unnamed (anonymous) class that implements the Weight interface on the spot.

4. Static inner class:

Static inner classes are also classes defined within another class, except that the keyword static appears in front of the class. A static inner class does not rely on an instance of the outer class, somewhat like the static member properties of a class, and, understandably, it cannot use the non-static member variables or methods of the outer class. This is because an object of a static inner class can be created without any object of the outer class; allowing access to non-static members would create a contradiction, since non-static members of the outer class must be attached to a specific object.

public class Test {
    public static void main(String[] args) {
        Outer.Inner inner = new Outer.Inner();
    }
}

class Outer {
    public Outer() {
    }

    static class Inner {
        public Inner() {
        }
    }
}

String Handling in Java: All the string handling operations in Java are powered by the Java String class. The String class represents an immutable character sequence. It is located in the java.lang package, and a Java program imports all classes under the java.lang package by default.

Java strings are Unicode character sequences; for example, the string "Java" is composed of the four Unicode characters 'J', 'a', 'v', 'a'. We can use the Scanner class in Java to parse an input string.

There are multiple operations that we can perform on a string with the Java String class.
The following code will provide an idea of string handling operations that we can do in Java:       public class StringHandlingDemo {   public static void main(String[] args) {       String stringX = “core Java”;     String stringY = “Core Java”;       //Extract the character with subscript 3     System.out.println( stringX.charAt(3));       //Find the length of a string     System.out.println( stringY.length());       //Compares two strings for equality     System.out.println( stringX.equals(stringY));       //Compare two strings (ignore case)     System.out.println( stringX.equalsIgnoreCase(stringY));       //Whether the string stringX contains word Java     System.out.println( stringX.indexOf(“Java”));       //Whether the string stringX contains word apple     System.out.println( stringX.indexOf(“apple”));       //Replace spaces in stringX with &     String result = stringX.replace(‘ ‘, ‘&’);     System.out.println(“the result is:” + result);       //Whether the string start with ‘core’     System.out.println( stringX.startsWith(“core”));       //Whether the string end with ‘Java’     System.out.println( stringX.endsWith(“Java”));       //Extract substring: from the beginning of the subscript 4 to the end of the string     result = stringX.substring(4);     System.out.println(result);       //Extract substring: subscript [4, 7) does not include 7     result = stringX.substring(4, 7);     System.out.println(result);       //String to Lowercase     result = stringX.toLowerCase();     System.out.println(result);       //String to uppercase     result = stringX.toUpperCase();     System.out.println(result);     String stringZ = ” How old are you!! “;       //Strip spaces from the beginning and end of a string.     Note: the space in the middle cannot be removed     result = stringZ.trim();     System.out.println(result);       //Because String is an immutable string, stringZ is unchanged     System.out.println(stringZ);       String stringZ1 = “Olympics”;     String stringZ2 = “Olympics”;     String stringZ3 = new String(“Winner”);       //String comparison operations     System.out.println( stringZ1 == stringZ2);     System.out.println( stringZ1 == stringZ3);     System.out.println( stringZ1.equals(stringZ3));       //String concatenation operation     System.out.println(stringZ1+” “+stringZ3);     } }     The output of the above code will be:       Output: e 9 false true 5 -1 the result is:core&Java true true Java Ja core java CORE JAVA How old are you!! How old are you!! true false false Olympics Winner     End note: We learned about inner classes and string handling in Java in this blog. In Java, inner classes are very useful while implementing features such as event listening, code abstraction, etc. It also makes the Java code more concise and reusable. String handling operations like java string split etc. are a must for any Java programmer and are the most commonly used feature in Java language.     Get one step closer to your dream job! Prepare for your Java programming interview with Core Java Questions You’ll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, queues, stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.     We’ve updated our blogs to reflect the latest changes in Java technologies. 
This blog was previously uploaded on April 1st, 2020, and has been updated on January 6th, 2022.                  
All About Java Assignment Operators

All About Java Assignment Operators

by Vibrant Publishers on May 20, 2022
Like every other programming language, the Java language also provides a feature called operators. There are a few types of operators in Java. In this Java tutorial, we will focus on assignment operators. Assignment operators exist for assigning a value to a variable in Java. The right side of the operator is a value and the left side is the variable that we created. So the general syntax is as follows:

VARIABLE assignment_operator VALUE ;

They are also known as binary operators because they require two operands to work. Since there are many data types in Java, one important thing to remember is that the data type of the variable must match the value that we assign. The variable can be a Java array variable, an object reference, or any variable of a primitive data type. Now let's look at the assignment operators that exist in the Java language.

Simple Assignment Operator: As the name suggests, this operator assigns values to variables.

Syntax:

variable = value ;

Example:

String name = "This is awesome!";

In a simple Java class, say SimpleAssignment, the int variable number can be assigned the value 10 in the same way, and the String variable name can be assigned the value "This is awesome!". Apart from simple assignments, Java developers also use this operator combined with other operators, which gives the compound assignment operators. Let's see those operators here:

+= Assignment Operator: This is a combination of the + and = operators. So instead of writing

number = number + 10;

we can write

number += 10;

For example, the two code blocks given below are equivalent for the compiler and produce the same output, i.e. 13.2:

double value = 1.00;
value += 12.2;

double value = 1.00;
value = value + 12.2;

(The example uses double rather than float because the literal 12.2 is a double; with a float variable, the compound form would insert an implicit narrowing cast that the long form would not compile without.)

-= Assignment Operator: Similar to the compound operator we saw above, here we combine the - and = operators. The value on the right side is subtracted from the current value of the variable, and the result is assigned back to the variable.

Syntax:

numberA -= numberB ;

Example:

numberA -= 10;   // this means numberA = numberA - 10;

*= Assignment Operator: In this case, we combine the * and = operators. The value on the right side is multiplied by the current value of the variable, and the result is assigned back to the variable.

Syntax:

numberA *= numberB ;

Example:

numberA *= 10;   // this means numberA = numberA * 10;

/= Assignment Operator: Just like the above cases, here we combine the / and = operators. The current value of the variable on the left side is divided by the value on the right side, and the resulting quotient is assigned back to the left-side variable.

Syntax:

numberA /= numberB ;

Example:

numberA /= 10;   // this means numberA = numberA / 10;

Summary: The following table provides a summary of the assignment operators in the Java language.

For a Java developer, it's important to understand the usage of operators. If you want to learn Java programming, then you must know these concepts well.
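To close the post, here is a small hedged recap sketch (the class and variable names are ours) that exercises each of the assignment operators summarized above:

public class AssignmentOperatorsDemo {
    public static void main(String[] args) {
        int number = 10;        // simple assignment

        number += 5;            // number = number + 5  -> 15
        System.out.println(number);

        number -= 3;            // number = number - 3  -> 12
        System.out.println(number);

        number *= 2;            // number = number * 2  -> 24
        System.out.println(number);

        number /= 4;            // number = number / 4  -> 6
        System.out.println(number);
    }
}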
Declarations & Access Control in Java

Declarations & Access Control in Java

by Vibrant Publishers on May 20, 2022
Introduction: In Java, creating a variable is also referred to as declaring a variable. To create a variable in Java, you must specify its type and name. This blog talks about how declarations work in Java and the steps to create (declare) a variable and its different kinds, such as local variables and instance variables. We will also learn about constructors and Java access modifiers.

Source File Declaration: A Java file can have only one public class. If a source file contains a public class, the Java filename should be the public class name. Also, a Java source file can have only one package statement but unlimited import statements. The package statement (if any) must be the first non-comment line in the source file, and the import statements (if any) must come after the package statement and before the class declaration.

Identifiers Declaration: Identifiers in Java can begin with a letter, an underscore, or a currency character; other starting characters are not allowed. They can be of any length. JavaBeans methods, specifically, must be named using camelCase and, depending on the method's purpose, must start with set, get, is, add, or remove. In Java, we have variables, methods, classes, packages, and interfaces as identifiers.

Local Variables: The scope of a local variable is only the method (or block) in which it is declared. Local variables receive no default value, so they must be initialized before they are used. Access modifiers cannot be applied to local variables. A local variable declaration looks like this:

public static void main(String[] args) {
    String helloMessage;
    helloMessage = "Hello, World!";
    System.out.println(helloMessage);
}

Here String helloMessage; is a local variable declaration, and its initialization follows on the next line.

Instance Variables: Instance variables are values defined inside the class but outside its methods, and they come to life when an object of the class is created. Unlike local variables, we don't need to assign them initial values. They can be declared public, private, or protected, and they can be final. They cannot be abstract or static (a static field is a class variable, not an instance variable). An example is shown below:

class Page {
    public String pageName;    // instance variable with public access
    private int pageNumber;    // instance variable with private access
}

Here the declaration String pageName is an instance variable.

Constructors: We use constructors to create new objects. The compiler supplies a default constructor for a class even if we do not define one ourselves. Constructors can take arguments, including variable arguments (varargs). They must have the same name as the class in which they are defined. They can be declared public, protected, or private. A constructor cannot be static, because its responsibility is to create objects, and since it cannot be overridden it cannot be final or abstract either. Once a constructor is defined (an overloaded one, for example), the compiler no longer supplies the default constructor, so if we still need a no-argument constructor we have to define it ourselves. Constructor calls are chained from the bottom of the inheritance tree to the top: each constructor first invokes its superclass constructor, so the superclass part of the object is always initialized first.

Static: The static keyword allows a variable or method to be used without creating any object of its class. Abstract and static cannot be combined: a static method can be called directly, without creating an object, and must have a body, while an abstract method has no body and exists only to be overridden in a subclass. Because static serves this different purpose, the two modifiers cannot be used together.
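A small hedged sketch tying these pieces together (the Book class below is ours, not from the original post): an instance variable, a constructor that implicitly chains to its superclass, a static counter, and a local variable.

public class Book {
    // instance variable: one per object, given a default value if not assigned
    private String title;

    // static (class) variable: shared by all Book objects
    private static int booksCreated = 0;

    // constructor: same name as the class; implicitly calls super() first
    public Book(String title) {
        this.title = title;
        booksCreated++;
    }

    // static method: callable without any Book object
    public static int getBooksCreated() {
        return booksCreated;
    }

    public static void main(String[] args) {
        // local variable: must be initialized before use
        Book favourite = new Book("Core Java Questions");
        System.out.println(favourite.title);
        System.out.println(Book.getBooksCreated());   // 1
    }
}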
ENUM: An enum is a type that constrains a variable to one of a predefined set of values. With the compiler-generated values() method, we can obtain all the constants of an enum. This is the most effective way to define and use constants in a Java program.

Features of Java Class Modifiers (non-access): Classes can be declared final, abstract, or strictfp. A class cannot be both final and abstract. Subclasses of a final class cannot be created, and instances of an abstract class cannot be created. Even if a class has just one abstract method, the class itself must be declared abstract. An abstract class can contain both non-abstract and abstract methods, or it may contain no abstract methods at all. All abstract methods must be overridden by the first concrete (non-abstract) class that extends the abstract class.

Java Class Access Modifiers: Access modifiers are an important part of a declaration; they determine whether it can be accessed outside the class or package in which it is made. Access modifiers let you decide whether a declaration is limited to a particular class, to a class and its subclasses, to a package, or is freely accessible. Java has four access levels: public, protected, private, and the default (package) access that applies when no modifier is written.

Public: Enables a class or interface to be accessed from outside its package. It also permits a variable, method, or constructor to be accessed from anywhere its class can be accessed.
Protected: Enables a variable, method, or constructor to be accessed by classes or interfaces of the same package, or by subclasses of the class in which it is declared.
Private: Restricts access to a variable, method, or constructor to the class in which it is declared.
Default: Default access applies when none of the above access specifiers is written. In that case, the member is accessible within its package but not from outside the package.

End Note: In this blog, we talked about declarations of variables, constructors, and Java class modifiers. We also looked at the features of class modifiers and their types.

Do you want to know more about topics like Java collections, Java streams, Java inner classes, and many more aspects of Java? For in-depth information on these topics, you can check out our series of blogs here.

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, queues, stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels. We've updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 29th, 2020, and has been updated on January 5th, 2022.
Object Oriented Concepts in Java

Object Oriented Concepts in Java

by Vibrant Publishers on May 20, 2022
Object-oriented programming is all about using the real-world concept of an object in the programming world. Here concepts such as inheritance, polymorphism, and binding come into the picture. We will be covering the following OOP concepts in this article: polymorphism, inheritance, encapsulation, and abstraction. Before going into the details, let's find out some of the benefits of using OOP concepts in programming.

Benefits of Object-Oriented Programming in Java:
Reusability: OOP principles like inheritance, composition, and polymorphism help in reusing existing code. Rather than coding the same block again in your program, you reuse the existing block.
Extensibility: Code written using OOP principles like inheritance is easier to extend.
Security: OOP principles like encapsulation help keep the data, and the code operating on that data, secure.
Simplicity: Java classes represent real-world objects, which makes the code very easy to understand. For example, in real life a bus is an object with attributes like color, weight, and height, and methods such as drive and brake.
Maintainability: Code written using OOP concepts is easier to maintain.

Now, let's look into the main OOP concepts:

Polymorphism: Polymorphism is the ability to use the same interface for different underlying implementations. Java achieves polymorphism via method overloading and method overriding; the language can efficiently differentiate between entities having the same name. Consider the following example:

class MultiplicationFunction {
    // method with 2 parameters
    static int multiply(int a, int b) {
        return a * b;
    }

    // method with the same name but 3 parameters
    static int multiply(int a, int b, int c) {
        return a * b * c;
    }
}

class Main {
    public static void main(String[] args) {
        System.out.println(MultiplicationFunction.multiply(2, 6));
        System.out.println(MultiplicationFunction.multiply(6, 4, 2));
    }
}

Though the methods have the same name, the program compiles successfully and prints 12 and 48.

There are two kinds of polymorphism in Java:
Compile-time polymorphism, also known as static binding or method overloading
Run-time polymorphism, also known as dynamic binding or method overriding

Inheritance: Just as we see inheritance in the real world, Java classes can also share, or inherit, the properties and methods of other classes. This way, we can reuse code once written in many other places in a program. The class that inherits properties from another class is called the derived class or child class. The class that shares its properties with another class is called the base class or parent class. In Java, we use this feature with the extends keyword:

public class Vehicle {
    String vehicleType;
    String vehicleModel;
    void mileage() { }
}

public class Car extends Vehicle {
}

The derived Car class can use all the non-private variables and methods of the Vehicle class.

Encapsulation: Encapsulation refers to keeping an object's data and the methods that operate on that data together in one place. It also protects the integrity of the information: it prevents the data from being needlessly altered by restricting access to it, preferably by hiding it from outside elements. Encapsulation is usually confused with data abstraction, but they are different concepts; data hiding, or data abstraction, has more to do with access specifiers. A programmer must first encapsulate the information; only then can they take steps to hide it. Procedural programs, for example, are not encapsulated: the procedures are grouped separately from the data. In OOP, the data is grouped together with the methods that operate upon it. In Java, encapsulation is built in, and whenever you create a class this principle is followed naturally, as the short sketch below illustrates.
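As a minimal hedged sketch of encapsulation (the BankAccount class below is ours, not from the original post): the balance field is hidden behind private access and can only be changed through methods that enforce the rules.

public class BankAccount {
    // the data is hidden: no outside code can set the balance directly
    private double balance;

    public double getBalance() {
        return balance;
    }

    // the only way to change the state is through methods that validate input
    public void deposit(double amount) {
        if (amount <= 0) {
            throw new IllegalArgumentException("Deposit must be positive");
        }
        balance += amount;
    }

    public void withdraw(double amount) {
        if (amount <= 0 || amount > balance) {
            throw new IllegalArgumentException("Invalid withdrawal amount");
        }
        balance -= amount;
    }
}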
Abstraction: This feature of OOP aims to hide complexity from users and provide them with relevant information only. Java offers abstract classes and interfaces through which we can expose only the required information to users, hiding all the unwanted detail.

There are two types of abstraction commonly discussed in Java: data abstraction and control abstraction.

Consider the following code block:

abstract class Animal {
    public abstract void animalSound();

    public void sleep() {
        System.out.println("Zzz");
    }
}

Here you won't be able to create an object of the Animal class; Animal animalObj = new Animal(); will generate a compile-time error. If we want to use an abstract class, it must be inherited by another class. So, in the above example:

class Lion extends Animal {
    public void animalSound() {
    }
}

We create a class Lion that extends the Animal class, and now we can create an object of the Lion class: Lion lionObject = new Lion();

End Note: In this blog, we looked at Object-Oriented Programming (OOP) concepts. We also saw the benefits of this style of programming in Java. Some of the main OOP concepts, such as polymorphism, encapsulation, and inheritance, were briefly touched upon. It's important to understand these concepts in depth to make use of the power of Object-Oriented Programming. Happy Learning!!

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, queues, stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels. We've updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 28th, 2020, and has been updated on January 19th, 2022.
Multithreading in Java

Multithreading in Java

by Vibrant Publishers on May 20, 2022
Introduction: In this blog, we will look at threading in Java. We will cover the following topics in detail:
Introduction to Java threads
The Java thread model
Creating and using a Java thread
Multithreading in Java

Introduction to Java Thread: Java was designed with built-in support for multithreading. In a multithreaded programming environment, two or more parts of a program can run concurrently. Each of these parts is called a thread and is considered the smallest unit of processing. At run time, the threads of a given program exist in a common memory space and therefore share both data and code; however, each thread maintains its own execution path. This enables the language to perform multitasking without the heaviness of multiprocessing.

Java achieves threading through the java.lang.Thread class. Threads are lightweight in nature and can run concurrently in synchronous or asynchronous mode.

We can classify threads into user-defined threads and daemon threads. (1) User-defined threads are created programmatically by the user; the Java Virtual Machine (JVM) waits for these threads to finish before it exits. (2) Daemon threads are low-priority background threads, typically created by the JVM itself, and the JVM does not wait for them.

The Java Thread Model: The thread model in Java has a life cycle with different stages, as shown below:

New: A thread is in the new state just after we create an instance of a thread class.
Runnable: A thread becomes runnable after it is started.
Running: When a Java thread is executing its task, it is said to be in the running state.
Suspended: When a thread is in this state, its activity is temporarily suspended; it can be resumed from the point where it left off.
Blocked: When a Java thread is waiting for a resource, it is kept in the blocked state.
Terminated: The execution of the thread has been halted. Once a thread is terminated, it cannot be resumed.

The following are some of the important Thread methods used when managing a thread object:

public void start(): Starts the thread in a separate path of execution and then invokes the run() method on this thread object.
public void run(): Contains the code the thread executes; if the thread was constructed with a separate Runnable target, run() invokes that target's run() method.
public static void yield(): Used by the currently running thread to yield to other threads of the same priority that are waiting to be scheduled.
public final void join(long milliseconds): Invoked by one thread on a second thread, causing the first thread to block until the second thread terminates (or the given time elapses).
public static void sleep(long milliseconds): Causes the currently running thread to block for the specified number of milliseconds.
public final void setName(String name): Sets the name of the thread to the specified value.
public final boolean isAlive(): Returns a boolean value that indicates whether the thread is still alive.
public final void setPriority(int priority): Changes the priority of the thread to the specified value.

Creating a Java Thread: In Java, there are two ways you can create a thread: by implementing the Runnable interface, or by extending the Thread class.

1. Runnable Interface: A thread created by implementing the Runnable interface will execute the code defined in its public run() method.
Sample code will look like the one given below:       class SampleRunnable implements Runnable {   private String threadName = “Runnable Interface   Demo Thread”;   public void run() {     System.out.println(“Running ” + threadName );   } }     Given code will create a thread that will print Runnable Interface Demo Thread during execution. Before that, we need to start the created thread, and for that, we pass an instance of our SampleRunnable Class into the constructor of the Thread class.     The code will look like this:     Thread sampleThread = new Thread(new SampleRunnable ()); sampleThread.start();     2. Extending Thread Class In this method we create a class that extends the Thread class and we override the existing run() method in it. Here in order to run the thread, we will create an instance of the class that extended the Thread Class.     public class SampleThreadClass extends Thread {   private String threadName = “Thread Class Demo”;   public void run() {     System.out.println(“Running ” + threadName );   } }     Inorder to run the above thread, we will do the following:     SampleThreadClass threadClass = new SampleThreadClass (); threadClass.start();     This code when it gets executed will call run() and will print Thread Class Demo.     Multithreading in Java When we have more than one thread in a Java program, it becomes a multithreaded program and at this point, there are some more things we have to be aware of. There may be a scenario where multiple threads try to access the same resource and finally they might produce a hang. In this kind of scenario, there is a need to synchronize the action of multiple threads and make sure that only one thread can access the resource at a given point in time. This is implemented using a concept called monitors. Java has a provision for creating threads and synchronizing their tasks by using synchronized blocks. This way, we can efficiently manage the resource allocation to multiple threads during execution. But at the same time, synchronization sometimes produces a case called dead-lock. It’s a situation where one thread waits for the second one to finish object lock, but the latter is waiting to release the object lock of the first thread. By structuring the code properly we can avoid such dead-lock situations.     End note: We studied threads in Java. We also looked at two types of Java thread models and understood how the Java Thread Model (JVM) works. Additionally, we learned how to create the Java thread by using two methods, i.e. runnable interface and through extending the Thread class. Overall, we learned how the concept of thread is implemented in Java, and last but not the least, how multithreading works in Java.     Get one step closer to your dream job!     Prepare for your Java programming interview with Core Java Questions You’ll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, Queues, Stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.     We’ve updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 27th, 2020, and has been updated on January 4th,  2022.
Wrapper Classes, Garbage Collection and Exception Handling in Java

Wrapper Classes, Garbage Collection and Exception Handling in Java

by Vibrant Publishers on May 20, 2022
Introduction: In this blog, we will look at some features of wrapper classes and how garbage collection works in Java. Java is an object-oriented language, but the primitive data types in Java are not objects. Sometimes we need object equivalents of the primitive types; an example is when we use collections, which can hold only objects, not primitive values. To solve this, Java provides wrapper classes that wrap primitive types into objects; each primitive has a corresponding wrapper class. Garbage collection is the process of freeing memory allocated to objects that are no longer used. At run time, when objects are created, the Java Virtual Machine (JVM) allocates memory to hold them. The JVM uses the mark-and-sweep algorithm internally for garbage collection.

Wrapper Classes: As the name suggests, to wrap means to cover something, and a wrapper class in Java does exactly that: wrapper classes are objects encapsulating the Java primitive data types. Each primitive data type has a corresponding wrapper class, as shown below:

boolean  ->  Boolean
byte     ->  Byte
short    ->  Short
char     ->  Character
int      ->  Integer
long     ->  Long
float    ->  Float
double   ->  Double

Since all these classes are part of the java.lang package, we don't need to import them explicitly. If you are wondering about the use case: in Java, generic classes work only with objects and do not support primitives, so when we have to use primitive values with them we have to convert those values into wrapper objects. That's where wrapper classes come in handy. The conversion can be done either by using a constructor or by using the static factory methods of the wrapper class (the constructors are deprecated in recent Java versions, so the factory methods are preferred). The example below shows how to convert an int value to an Integer object:

int intValue = 1;
Integer object = Integer.valueOf(intValue);

Here, the valueOf() method returns an Integer object holding the specified intValue. Similarly, all other conversions can be done with the corresponding wrapper classes.

Garbage Collection: Garbage collection is a memory management technique in which the Java runtime scans the heap memory (where Java objects are created) for unused objects and deletes them to free up heap space. Garbage collection is an automatic process that each JVM implements through threads called garbage collectors in order to eliminate memory leaks. There are four main garbage collectors in Java, namely the Serial, Parallel, CMS, and G1 collectors, which differ in how, and with how many threads, they scan and compact the heap; we can choose among them based on our requirements. The basic process of garbage collection is to first identify and mark the unreferenced objects that are ready for collection, and then delete the marked objects. Sometimes memory compaction is also performed to arrange the remaining objects in a contiguous block at the start of the heap. Before deleting an object, the garbage collection thread invokes the finalize() method of that object, which gives it a chance to perform cleanup. A garbage collection request can be made programmatically as shown below:

public static void gc() {
    Runtime.getRuntime().gc();
}

One important thing to note here is that garbage collection in Java is non-deterministic: there is no way to predict when it will occur at run time. The JVM triggers garbage collection automatically based on the heap usage. We can give the JVM a hint using the System.gc() or Runtime.getRuntime().gc() methods, but we cannot force the garbage collection process.
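Before moving on to exceptions, here is a small hedged sketch (ours, not from the original post) recapping the wrapper conversions described above, including the autoboxing and unboxing that the compiler performs for us:

import java.util.ArrayList;
import java.util.List;

public class WrapperDemo {
    public static void main(String[] args) {
        // explicit boxing via the static factory method
        int intValue = 1;
        Integer boxed = Integer.valueOf(intValue);

        // explicit unboxing back to the primitive
        int unboxed = boxed.intValue();

        // autoboxing: the compiler converts the int literal to an Integer
        List<Integer> numbers = new ArrayList<>();
        numbers.add(100);

        // auto-unboxing: the Integer element is converted back to int
        int first = numbers.get(0);

        System.out.println(boxed + " " + unboxed + " " + first);

        // parsing a String into a primitive via the wrapper class
        int parsed = Integer.parseInt("42");
        System.out.println(parsed);
    }
}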
Exception Handling: An exception is an event that disrupts the normal execution of a program. Java uses exceptions to handle errors and other abnormal events that occur while a program runs. The root class of this hierarchy is java.lang.Throwable. When we discuss exceptions, we must understand that they are different from errors: errors are generally impossible to recover from, while exceptions are recoverable. Errors are raised by the Java runtime environment itself, while exceptions are caused by the application. Exceptions that the compiler forces you to handle or declare are called checked exceptions; the others, which surface only at run time, are called unchecked exceptions. A basic example of exception handling is shown below:

class SampleException {
    public static void main(String args[]) {
        try {
            // code that may raise an exception
        } catch (Exception e) {
            // handle the exception; the rest of the program continues
        }
    }
}

There are many built-in exceptions in Java. Users can also define their own exceptions by extending the Exception class. A new exception is raised using the throw keyword and declared using the throws keyword. If there is a single exception to raise, we use the throw keyword:

public static void findRecord() throws IOException {
    throw new IOException("Unable to find record");
}

The main differences between throw and throws are:

throw:
- Used to explicitly throw an exception.
- Is followed by an instance of an exception.
- Used inside a method body.
- Cannot throw multiple exceptions.
- Cannot by itself be used to propagate checked exceptions.

throws:
- Used to explicitly declare an exception.
- Is followed by one or more exception class names.
- Used with the method signature.
- Can declare multiple exceptions.
- Can be used to propagate checked exceptions.

An exception is handled using a try statement in Java. The statements to be monitored for errors are put within the try block. The code that needs to be executed in case an exception occurs is placed within the catch block. In the finally block, you put code that is executed irrespective of whether an exception is thrown or not.

try: A try block is where you put the code that may raise exceptions.
catch: The catch block is used to handle a given exception.
finally: The finally keyword defines a code block that must be executed irrespective of the occurrence of an exception.

A sample code would be:

class ExceptionSample {
    public static void main(String args[]) {
        try {
            int intData = 3 / 0;   // raises ArithmeticException
            System.out.println(intData);
        } catch (ArithmeticException e) {
            System.out.println("There was an error in the calculation.");
        } finally {
            System.out.println("We have completed the exception handling");
        }
        // remaining code
    }
}

End note: In this blog, we studied wrapper classes and why they are necessary. We also looked at garbage collection and how it functions, its types, and the area of the JVM (the heap) on which it operates. We briefly explored exception handling and the try/catch/finally statement that handles exceptional conditions in code.

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, Queues, Stacks, and Declarations and access control in Java, Exceptional Handling, Java Operators, Java 8 and Java 9, etc.
and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.   We’ve updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 26th, 2020 and has been updated on January 11th, 2022.    
Flow Control and Assertions in Java

Flow Control and Assertions in Java

by Vibrant Publishers on May 20, 2022
Introduction: This blog elaborates on the concept of flow control statements in Java. We will also take a look at the syntax of flow control statements, their flow charts, and assertions, which are used to validate the correctness of assumptions in Java programs.

Flow Control: Flow control statements govern the order in which the statements of a running program are executed. They are mainly classified into three categories:
Selection statements: if, if-else and switch
Iteration statements: while, do-while and for
Transfer statements: break, continue and return

1. Selection Statements: Selection statements are used in Java to choose between alternative actions during the execution of a program. The selection statements are:

The simple if statement:

The simple if statement follows the syntax below:

if (condition) {
    statement
}

This kind of statement is useful when we have to decide whether an action is to be performed or not, based on a condition. The condition must evaluate to a boolean value, that is, it must be either true or false; unlike in some languages, an integer or floating-point value cannot be used directly as a condition in Java. The action to be performed can be a single statement or a code block. The code flow is illustrated in the given activity diagram.

The if-else statement:

The Java if-else statement is used to decide between two actions, based on a condition. It has the following syntax:

if (condition1) {
    statement1
} else {
    statement2
}

The code flow is illustrated in the given activity diagram.

The switch statement:

The switch statement can be used to choose one action among many alternative actions, based on the value of a switch expression. In general, a switch statement is written as follows:

switch (expression) {
    case label1: ...
    case label2: ...
    ...
    case labelN: ...
    default: ...
}

2. Iteration Statements: Iteration statements, or looping structures, allow a block of code to execute repeatedly. Java provides three iteration mechanisms for loop construction. They are as follows:

The while statement:

The syntax is as follows:

while (condition) {
    statement
}

Here the condition of the while loop is evaluated first; if the resulting value is true, the loop body is executed. If the condition is false, the loop terminates and execution continues with the statement immediately following the loop block.

The do-while statement:

The syntax is as follows:

do {
    statement
} while (condition);

Example:

int i = 1;
do {
    System.out.println(i);
    i++;
} while (i <= 10);

The above code block will print the numbers 1 to 10. One thing to note here is that the body of a do-while loop is executed at least once before the condition is checked, and it keeps executing as long as the condition remains true. The activity diagram given below explains the control flow.

The for( ; ; ) statement:

It is mostly used for counter-controlled loops, where we already know the number of iterations to be made. The syntax is as given below:

for (initialization; condition; update)
    // code to be executed

The for( ; ; ) statement usually declares and initializes a loop variable that controls the execution of the loop. The condition must evaluate to a boolean value: if it is true, the loop body is executed; otherwise, execution continues with the statement following the for(;;) loop. After each iteration the update expression is executed, which usually modifies the loop variable so that the loop eventually terminates. Note that the initialization part, by contrast, is executed only once, on entry to the loop.
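A small hedged sketch (ours) putting the three parts of the for( ; ; ) statement together, namely initialization, condition, and update:

public class ForLoopDemo {
    public static void main(String[] args) {
        // the initialization runs once; the condition is checked before each pass;
        // the update (i++) runs after each pass
        for (int i = 1; i <= 5; i++) {
            System.out.println("Iteration number: " + i);
        }
    }
}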
The enhanced for(:) statement:

The for(:) statement, otherwise called the enhanced for loop, is convenient when we need to iterate over an array or a collection, especially when some operation needs to be performed on each element. The syntax for an enhanced for(:) loop is shown below:

for (data_type item : collection)

3. Transfer Statements: Transfer statements are used for transferring control in a Java program. They are:

The break statement:

The break statement terminates loops (for(;;), for(:), while, do-while) and switch statements, and transfers control to the statement immediately following the terminated block.

while (condition) {
    if (condition) {
        statement
        break;
    }
}

The given diagram explains the control flow of the break statement.

The continue statement:

We use the continue statement to skip the current iteration of a loop. The diagram given here explains the control flow of the continue statement in a code block.

The return statement:

The return statement stops the execution of a method and transfers control back to the calling code. If the method is declared void, the return statement does not carry any value; for a non-void method, the return statement always carries a return value.

Assertions: In Java, assertions are used to validate the correctness of assumptions made in the code. They are used along with boolean expressions and are mainly used for testing purposes. We use the assert keyword for assertions in Java. The feature was introduced in JDK 1.4; before that, assert was only an identifier. There are two forms of the assert statement, namely:

assert expression;
assert expression1 : expression2;

By default, assertions are disabled at run time. You enable them with the -ea (short for -enableassertions) switch:

java -ea MainClass   (enables assertions for all non-system classes)
java -ea:com.example... MainClass   (enables assertions for a particular package, or a single named class)

Example:

public void setup() {
    Connection conn = getConnection();
    assert conn != null : "Connection is null";
}

If the connection is null (and assertions are enabled), the above code throws an assertion error: Exception in thread "main" java.lang.AssertionError: Connection is null. Using assertions, we can replace an if-and-throw pair with a single assert statement.

End note: In this blog, we looked briefly at flow control statements, their syntax, and when to use them while coding. We also learned about assertions in Java.

Get one step closer to your dream job!

Prepare for your Java programming interview with Core Java Questions You'll Most Likely Be Asked. This book has a whopping 290 technical interview questions on Java Collections: List, Queues, Stacks, and Declarations and access control in Java, Exceptional Handling, Wrapper class, Java Operators, Java 8 and Java 9, etc. and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.

We've updated our blogs to reflect the latest changes in Java technologies. This blog was previously uploaded on March 25th, 2020, and has been updated on January 15th, 2022.
Collision Resolution with Hashing

by Vibrant Publishers on May 19, 2022
While many hash functions exist, new programmers are often unsure about which one to choose, and there is no single formula for picking the right hash function. Clustering, or collision, is the most common problem with hash functions and must be addressed appropriately.

Collision Resolution Techniques: A collision occurs when two or more hash values compete for a single hash table slot. One way to resolve this is to assign the next available empty slot to the colliding value. The most common methods are open addressing, chaining, probabilistic hashing, perfect hashing and coalesced hashing. Let’s understand them in more detail:

a) Chaining: This technique uses a linked list per slot and is the most popular of all the collision resolution techniques. Below is an example of the chaining process. Since one slot here has 3 elements – {50, 85, 92} – a linked list is attached to that slot to hold the other 2 items {85, 92}. With the chaining technique, insertion and deletion of items in the hash table are fairly simple and perform well. At the same time, a chained hash table inherits the pros and cons of a linked list. Alternatively, chaining can use dynamic arrays instead of linked lists. A minimal code sketch of chaining is given after this list.

b) Open Addressing: This technique stores all records in the table itself and therefore depends on the availability of empty slots. As the name says, it tries to find an available slot in which to store the record. It can be done in one of 3 ways:

Linear probing – Here, the probe interval is fixed at 1. It offers the best caching behavior but suffers badly from clustering.
Quadratic probing – The probe distance is calculated from a quadratic function. This is a considerably better option as it balances clustering and caching.
Double hashing – Here, the probing interval is determined for each record by a second hash function. This technique has poor cache performance, although it does not have any clustering issues.

Below are some further hashing techniques that can help in resolving collisions.

c) Probabilistic hashing: This is memory-based hashing that implements caching. When a collision occurs, either the old record is replaced by the new one or the new record is dropped. Although this approach risks losing data, it is still preferred in some settings for its ease of implementation and high performance.

d) Perfect hashing: When the slots are uniquely mapped, the chances of collision are minimal. However, this can only be done when there is a lot of spare memory.

e) Coalesced hashing: This technique is a combination of the open addressing and chaining methods. When there is a collision, a chain of items is stored within the table itself, and the next available table slot is used to store the colliding items.
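To make the chaining technique concrete, here is a minimal, self-contained sketch (the class name and the table size of 7 are illustrative assumptions; the size is chosen so that the keys 50, 85 and 92 from the example above all land in the same slot):

    import java.util.LinkedList;

    public class ChainedHashTable {
        private static final int SIZE = 7;            // illustrative table size
        private final LinkedList<Integer>[] slots;    // one chain of keys per slot

        @SuppressWarnings("unchecked")
        public ChainedHashTable() {
            slots = new LinkedList[SIZE];
            for (int i = 0; i < SIZE; i++) {
                slots[i] = new LinkedList<>();        // start every slot with an empty chain
            }
        }

        private int hash(int key) {
            return key % SIZE;                        // simple (key) % (size) hash, non-negative keys assumed
        }

        public void insert(int key) {
            slots[hash(key)].add(key);                // colliding keys simply share a chain
        }

        public boolean contains(int key) {
            return slots[hash(key)].contains(key);    // search only the chain for that slot
        }

        public static void main(String[] args) {
            ChainedHashTable table = new ChainedHashTable();
            table.insert(50);
            table.insert(85);
            table.insert(92);                         // 50, 85 and 92 all hash to slot 1
            System.out.println(table.contains(85));   // prints: true
        }
    }

Because each of the three keys leaves a remainder of 1 when divided by 7, they end up in the same chain, and a lookup only has to walk that one linked list.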
To ace your interview, you can explore related topics. Learn about other data structures through the following blogs on our website:

The Basics of Stack Data Structure
Trees
Basics of hash data structures
Know your Queue Data Structures in 60 seconds

These provide insights into the various types of data structures and will give you a better understanding. Pave the route to your dream job by preparing for your interview with questions from our Job Interview Questions Book Series. These provide a comprehensive list of questions for your interview regardless of your area of expertise.

These books include:

HR Interview Questions You’ll Most Likely Be Asked (Third Edition)
Innovative Interview Questions You’ll Most Likely Be Asked
Leadership Interview Questions You’ll Most Likely Be Asked

You can find them on our website!
Basics of Hash Data Structures

by Vibrant Publishers on May 19, 2022
A Hash Table is an abstract data structure that stores data in key-value form. Here the key is used to compute an index, and the value is the actual data. Irrespective of the size of the data, accessing it is relatively fast, which is why hashing is used in search algorithms.

Hash Table: A hash table uses an array to store data, and a hashing technique is used for generating an index. A simple formula for hashing is (key) % (size), where key is the element’s key from the key-value pair and size is the hash table size. This mapping decides where an item is stored in the hash table. The illustration below shows how hashing works: we calculate the slot where the item can be stored by taking the key modulo the hash table size. A minimal code sketch of this mapping is given at the end of this post.

Applications: Hash tables make indexing faster, which is why they are used to store and search large volumes of data. A hash table has 4 major operations: put (insert a key-value pair), get, contains, and remove. Caching is a real-world example of where hashing is used. Hash tables are also used to store relational data.

Issues with Hashing: Although hash tables are used in major search programs, they can sometimes take a lot of time, especially when many hash function evaluations are needed. Because the data is distributed across the table, the index has to be computed before an item can be found. Many hash functions are complex and hence may be prone to errors, and a poorly coded hash function causes unwarranted collisions.

Types of Hashing:

Probabilistic hashing: Caching is implemented by hashing in memory. Whenever there is a collision, either the old record is replaced by the new record or the new record is ignored.

Perfect hashing: When the number of items and the storage space are constant, the slots can be uniquely mapped. This method is desirable but not always guaranteed. The ideal scenario can be achieved by mapping into a large hash table with unique slots, provided there is a lot of free memory. This technique is easy to implement and has a lower collision probability, but it may adversely affect performance when the entire hash table needs to be traversed during a search.

Coalesced hashing: Whenever a collision occurs, the colliding objects are stored as a chain in the same table rather than being stored in a separate structure.

Extensible hashing: Here the table buckets can be resized depending on the number of items, which makes this approach suitable for time-sensitive applications. Whenever a resize is needed, either a bucket is recreated or a new bit is added to the current index, the additional items are placed accordingly, and the mapping is updated.

Linear hashing: Whenever an overflow occurs, the hash table is resized and rehashed. The overflow items are added to a bucket, and when the number of overflow items changes, the table is rehashed and a new slot is allocated to each item.
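As a small illustration of the (key) % (size) mapping described above, here is a minimal sketch (the class name, the table size of 10, and the sample key are illustrative assumptions; it stores plain integer keys rather than full key-value pairs and ignores collisions, which are covered in the collision resolution post above):

    public class SimpleHashTable {
        private static final int SIZE = 10;              // illustrative hash table size
        private final Integer[] table = new Integer[SIZE];

        private int hash(int key) {
            return key % SIZE;                           // the (key) % (size) formula, non-negative keys assumed
        }

        public void put(int key) {
            table[hash(key)] = key;                      // store the item at its computed slot
        }

        public boolean contains(int key) {
            Integer stored = table[hash(key)];
            return stored != null && stored == key;      // check only the computed slot
        }

        public void remove(int key) {
            if (contains(key)) {
                table[hash(key)] = null;                 // clear the slot
            }
        }

        public static void main(String[] args) {
            SimpleHashTable t = new SimpleHashTable();
            t.put(42);                                   // 42 % 10 = 2, so the item goes to slot 2
            System.out.println(t.contains(42));          // prints: true
            t.remove(42);
            System.out.println(t.contains(42));          // prints: false
        }
    }

In a real hash table each slot would hold a key-value pair (or a chain of them), but the slot computation itself works as shown.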
To ace your interview, you can explore related topics. Learn about other data structures through the following blogs on our website:

The Basics of Stack Data Structure
Trees
Know your Queue Data Structures in 60 seconds
Basics of hash data structures

These provide insights into the various types of data structures and will give you a better understanding.

Get one step closer to your dream job!

Under our Job Interview Questions book series, we have books designed to help you clear your interview with flying colors, no matter which field you are in. These include:

HR Interview Questions You’ll Most Likely Be Asked (Third Edition)
Innovative Interview Questions You’ll Most Likely Be Asked
Leadership Interview Questions You’ll Most Likely Be Asked

Check them out on our website!