# Blockchain Essentials You Always Wanted To Know



## Blog posts

Blogs On Programming

by Vibrant Publishers on May 22, 2022

In this blog, we will discuss the tree as a special kind of graph, tree traversal algorithms, and special types of trees.

The tree data structure represents a hierarchical relationship between data. The elements of the tree are called nodes. Starting from the root (initial) node, each node has a certain number of children. A node higher in the hierarchy is called the parent of the nodes directly below it, which are its child nodes. Nodes that share a parent are called siblings, and a node with no children is called a leaf node. The level (or depth) of a node is its distance from the root node. Since the tree is a hierarchical structure, a child node at one level can act as the parent node for the nodes at the next level.
A tree in which each node has at most two child nodes is called a binary tree. Nodes of a binary tree can be represented using a structure or a class holding the node's value and one pointer each for the left and the right child node. The process of accessing the elements of a tree is called tree traversal. There are four types of tree traversal algorithms:
1. Inorder traversal:
For inorder traversal, the left subtree of a node is accessed first, followed by the value of the node, and then the right subtree of the node. For example, consider a complete binary tree whose 15 nodes are numbered 1 to 15 in level order, with node 1 as the root (Fig 1).
The Inorder traversal of the tree in Fig 1 will be 8,4,9,2,10,5,11,1,12,6,13,3,14,7,15.
2. Preorder traversal:
For Preorder traversal, the value of the node is accessed first followed by the left subtree of a node and then the right subtree of the node.
The Preorder traversal of the tree in Fig 1 will be 1,2,4,8,9,5,10,11,3,6,12,13,7,14,15.
3. Postorder traversal:
For Postorder traversal, the left subtree of the node is accessed first followed by the right subtree of a node, and then the value of the node.
The Postorder traversal of the tree in Fig 1 will be 8,9,4,10,11,5,2,12,13,6,14,15,7,3,1.
4. Level order traversal:
For Level order traversal, starting from the root node, the nodes at a single level are accessed first before moving to the next level. Level order traversal is also called breadth-first traversal (BFS).
The Level order traversal of the tree in Fig 1 will be 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15.
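The four traversals above can be sketched in Python. The tree-building helper below assumes the 15-node complete binary tree of Fig 1, with nodes numbered 1 to 15 in level order:

```python
from collections import deque

class Node:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def build_complete_tree(n):
    """Build a complete binary tree with nodes numbered 1..n in level order."""
    nodes = [None] + [Node(i) for i in range(1, n + 1)]
    for i in range(1, n + 1):
        if 2 * i <= n:
            nodes[i].left = nodes[2 * i]
        if 2 * i + 1 <= n:
            nodes[i].right = nodes[2 * i + 1]
    return nodes[1]

def inorder(node):
    if node is None:
        return []
    return inorder(node.left) + [node.value] + inorder(node.right)

def preorder(node):
    if node is None:
        return []
    return [node.value] + preorder(node.left) + preorder(node.right)

def postorder(node):
    if node is None:
        return []
    return postorder(node.left) + postorder(node.right) + [node.value]

def level_order(root):
    order, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        order.append(node.value)
        if node.left:
            queue.append(node.left)
        if node.right:
            queue.append(node.right)
    return order

root = build_complete_tree(15)
print(inorder(root))      # [8, 4, 9, 2, 10, 5, 11, 1, 12, 6, 13, 3, 14, 7, 15]
print(level_order(root))  # 1 through 15, in order
```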
A binary search tree can also be converted to a doubly linked list such that the nodes of the doubly linked list are placed according to the inorder traversal of the tree.
Special types of trees:
1. Binary Search Trees:
Binary search tree (BST) is a special type of tree in which the nodes are sorted according to the inorder traversal. The search time complexity in a balanced BST is O(log n). Insertion in a BST is achieved by moving to the left subtree if the value of the node to be inserted is lower than the current node, or by moving to the right subtree if it is greater than the current node. This process is repeated until a leaf position is found.
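The insertion and search logic just described can be sketched as follows (the sample values are purely illustrative):

```python
class BSTNode:
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

def insert(root, value):
    """Walk left or right until a leaf position is found, then attach."""
    if root is None:
        return BSTNode(value)
    if value < root.value:
        root.left = insert(root.left, value)
    else:
        root.right = insert(root.right, value)
    return root

def search(root, value):
    while root is not None:
        if value == root.value:
            return True
        root = root.left if value < root.value else root.right
    return False

root = None
for v in [5, 3, 8, 1, 4, 7, 9]:
    root = insert(root, v)
print(search(root, 4), search(root, 6))  # True False
```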
2. AVL Trees:
AVL trees are BSTs in which, for each node, the difference between the heights of the left and the right subtrees is at most 1. AVL trees are also called self-balancing BSTs.
3. Red Black Trees:
Red-black trees are BSTs in which the nodes are colored either black or red, with the root node always being black. No red node in a red-black tree can have a red child, and for each node, every path to a leaf node passes through the same number of black nodes. Like AVL trees, red-black trees are also self-balancing BSTs.
4. Trie:
A trie is a tree-shaped data structure generally used for string processing. In its classic form, each node has up to 26 child nodes, one for each letter of the English alphabet, plus a Boolean flag marking the end of a stored string. The structure of a trie can differ to incorporate various use cases.
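A small trie sketch; it uses a dictionary of children rather than a fixed 26-slot array, which is one of the structural variations just mentioned:

```python
class TrieNode:
    def __init__(self):
        self.children = {}   # maps a letter to a child TrieNode
        self.is_end = False  # Boolean flag marking the end of a stored word

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_end = True

    def search(self, word):
        node = self.root
        for ch in word:
            if ch not in node.children:
                return False
            node = node.children[ch]
        return node.is_end

t = Trie()
t.insert("tree")
t.insert("trie")
print(t.search("trie"), t.search("tri"))  # True False
```

The `is_end` flag is what distinguishes a stored word ("trie") from a mere prefix ("tri").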
5. Threaded Binary Trees:
Threaded binary trees are used to make an iterative algorithm for the inorder traversal of a tree. The idea is to point the null right child of any node to its inorder successor. There are two types of threaded binary trees.
a. Single-threaded:
Only the null right child of each node points to the node's inorder successor.
b. Double-threaded:
Both the null left and the null right child of each node point to the node's inorder predecessor and inorder successor, respectively.
6. Expression Trees:
An expression tree is a special type of tree used to represent and evaluate mathematical expressions. Each non-leaf node in an expression tree represents an operator and each leaf node is an operand.
Ending note:
In this blog, we discussed the tree data structure and tree traversal algorithms.
Special trees help to solve different problems in optimized time and space. Overall, trees are advantageous as they depict a structural correlation between the data. Moreover, trees also provide flexible insertion and sorting advantages.
Get one step closer to your dream job!
Prepare for your programming interview with Data Structures & Algorithms Interview Questions You’ll Most Likely Be Asked. This book has a whopping 200 technical interview questions on lists, queues, stacks, and algorithms and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.


by Vibrant Publishers on May 22, 2022

Algorithms are sequenced steps of instructions proposing a generalized solution for a problem. Algorithms determine the efficiency of a coding solution. They are divided into different categories depending on their nature of implementation. In this blog, we will discuss Sorting Algorithms focusing on their description, the way they work, and some common implementations.
Sorting Algorithm:
As the name describes, a sorting algorithm is used to arrange an array in a particular order. An array sorted in ascending order means that every element is greater than or equal to the previous one. A sorting algorithm takes an array as input, performs sorting operations on it, and outputs a permutation of the array that is now sorted.
Array {a, b, c, d} is alphabetically sorted.
Array {1, 2, 3, 4} is sorted in ascending order.
Generally, sorting algorithms are divided into two types: comparison sort and integer sort.
Comparison Sort:
Comparison sort algorithms compare elements at each step to determine each element's position in the array. Such algorithms are easy to implement but are slower: they are lower-bounded by Ω(n log n), which means that, on average, comparison sorts cannot be faster than O(n log n).
Integer Sort:
Integer sort algorithms are also known as counting algorithms. An integer sort checks, for each element x, how many elements are smaller than x, and places x at the corresponding location in the array: if 10 elements are less than x, then x goes in the 11th position. Such algorithms do not perform comparisons and are thus not bound by Ω(n log n).
The efficiency of the selected sorting algorithm is determined by its run time complexity and space complexity.
Stability of a Sorting Algorithm:
A sorting algorithm is said to be stable if it preserves the order of the same or equal keys in the output array as it is in the input array.
Keys are the values on which the algorithm sorts the array.
For example, sorting the pairs (2, a), (1, b), (2, c) by their first key gives (1, b), (2, a), (2, c) under a stable sort: the two equal keys keep their original relative order. An unstable sort may instead output (1, b), (2, c), (2, a), reversing the order of the equal keys.
Next, let’s discuss some commonly used sorting algorithms.
Insertion Sort:
This is a comparison-based algorithm. It takes one element at a time, finds its place in the sorted portion of the array, and places it there; in doing so it sorts the whole array. For an array of size n, insertion sort considers the first element on the left as a sorted array and the remaining n-1 elements on the right as an unsorted array. It then picks the first unsorted element (element number 2 of the array) and places it within the sorted part on the left, moving elements if necessary. Now there are two parts: a sorted array of size 2 and an unsorted one of size n-2. The process continues, starting from the left, until the whole array is sorted. The best case of insertion sort is O(n) and the worst case is O(n^2).
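A minimal in-place sketch of insertion sort:

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]          # first element of the unsorted part
        j = i - 1
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]  # shift larger sorted elements to the right
            j -= 1
        arr[j + 1] = key      # place key in its sorted position
    return arr

print(insertion_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```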
Selection Sort:
Selection sort is quite an easy algorithm in terms of implementation. It selects the smallest element present in the array and swaps it with the first element. It then scans for the smallest element among the remaining n-1 elements and swaps it with the second element, i.e., the first element of the unsorted part. The process of selecting the smallest element and swapping it continues until the whole array is sorted. The selection sort algorithm has a best and worst case of O(n^2).
Merge Sort:
Merge sort is a comparison-based algorithm built on merging two sorted arrays. The technique used by merge sort is divide and conquer: it divides the array into two subarrays, sorts them separately, either recursively or iteratively, and then merges the two sorted subarrays. The result is a sorted array. Merge sort runs in O(n log n) time.
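A short recursive sketch of merge sort:

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])    # sort each half recursively
    right = merge_sort(arr[mid:])
    # merge the two sorted halves
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]  # append whichever half has leftovers

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```

Using `<=` when merging keeps equal elements in their original order, so this version is stable.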
Heap Sort:
The comparison-based heap sort algorithm uses a binary heap data structure for sorting an array. A max-heap is formed from the unsorted array; since it is a max-heap, the root holds the largest value. This maximum value is swapped to the end of the array, the heap shrinks by one element, and the sorted portion of the array grows by one. The same process is applied to the remaining heap: restore the max-heap property, then move the root (maximum) element to the end. This is repeated until the heap shrinks to zero elements and the whole array is sorted. The run time of heap sort is O(n log n).
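An in-place sketch of this process; the `sift_down` helper restores the max-heap property after each swap:

```python
def heap_sort(arr):
    n = len(arr)

    def sift_down(root, end):
        # restore the max-heap property for the subtree rooted at `root`,
        # considering only indices below `end`
        while 2 * root + 1 < end:
            child = 2 * root + 1
            if child + 1 < end and arr[child + 1] > arr[child]:
                child += 1            # pick the larger of the two children
            if arr[root] >= arr[child]:
                return
            arr[root], arr[child] = arr[child], arr[root]
            root = child

    # build a max-heap from the unsorted array
    for i in range(n // 2 - 1, -1, -1):
        sift_down(i, n)
    # repeatedly move the root (maximum) to the end and shrink the heap
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(0, end)
    return arr

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```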
Quick Sort:
Quicksort works on the divide and conquer strategy. It selects a pivot element and partitions the array into two subarrays around this pivot. Suppose the pivot element is A[y]: the subarrays A[x..y-1] and A[y+1..z] are arranged such that all elements less than the pivot are in the first subarray and all elements greater than the pivot are in the second. The subarrays are then sorted recursively or iteratively, and the outcome is a sorted array. The average run time complexity of quicksort is O(n log n).
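A compact (not in-place) sketch of quicksort; the middle element is used as the pivot here, though any pivot choice works:

```python
def quick_sort(arr):
    if len(arr) <= 1:
        return arr
    pivot = arr[len(arr) // 2]                    # arbitrary pivot choice
    smaller = [x for x in arr if x < pivot]       # elements less than the pivot
    equal = [x for x in arr if x == pivot]
    larger = [x for x in arr if x > pivot]        # elements greater than the pivot
    return quick_sort(smaller) + equal + quick_sort(larger)

print(quick_sort([9, 4, 7, 3, 10, 1]))  # [1, 3, 4, 7, 9, 10]
```

Production implementations usually partition in place to avoid the extra lists; this version trades memory for clarity.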
Bubble Sort:
This comparison-based sorting algorithm compares elements of an array in pairs. The algorithm 'bubbles' through the entire array from left to right, considering two elements at a time and swapping them if the left element is greater. For an array A, element A[0] is compared with A[1]; if A[0] > A[1], they are swapped. Next, A[1] and A[2] are compared and swapped if required, and so on across the entire array, over repeated passes. The average run time complexity of bubble sort is O(n^2), and it is considered an inefficient algorithm.
Shell Sort:
The shell sort algorithm is, in a way, built on insertion sort, yet it is considered faster than insertion sort itself. It starts by sorting subsets of the entire array; gradually the size of the subarrays is increased until the result is a completely sorted array. In other words, shell sort partially sorts the array elements and then applies insertion sort on the entire array.
Shell sort is generally optimized using different methods (gap sequences) for choosing these subsets. The most commonly used is Knuth's method, which gives a worst-case run time of O(n^(3/2)).
Distribution Sort Algorithms:
Sorting algorithms in which the input is distributed into substructures, sorted, and then combined at the output are called distribution sort algorithms. Radix sort is a classic example of distribution sorting, and merge-based algorithms can be viewed this way as well.
Counting Sort:
Counting sort is an integer-based rather than a comparison-based sorting algorithm. The algorithm works on the assumption that every element in the input list has a key value ranging from 0 to k, where k is an integer. For each element, the algorithm counts how many elements are smaller than or equal to it, and in this way places that element at the appropriate location in the output list. For this, the algorithm maintains three lists: the input list, a temporary list of counts indexed by key value, and the output list.
Counting sort is considered an efficient sorting algorithm, with a run time of Θ(n + k); it is efficient as long as the largest key value k is not much larger than the size of the input list n.
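A sketch of counting sort for keys in the range 0..k; the backwards pass over the input is what keeps the sort stable:

```python
def counting_sort(arr, k):
    """Sort non-negative integers whose values lie in 0..k."""
    counts = [0] * (k + 1)
    for x in arr:                 # count occurrences of each key
        counts[x] += 1
    for i in range(1, k + 1):     # prefix sums: counts[i] = # of elements <= i
        counts[i] += counts[i - 1]
    output = [0] * len(arr)
    for x in reversed(arr):       # walk backwards so equal keys keep their order
        counts[x] -= 1
        output[counts[x]] = x
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1], k=8))  # [1, 2, 2, 3, 3, 4, 8]
```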
Radix Sort:
Radix sort processes the keys digit by digit and uses the counting sort algorithm at each step. It sorts from the least significant digit to the most significant digit: in base ten, radix sort will first sort on the 1's place, then the 10's place, and so on, each pass done with counting sort. Counting sort on its own is only efficient over a small range of key values (in base 10, the digits 0 to 9); for 2-digit numbers it would already need a range of 100. Radix sort, by working one digit at a time, can handle multi-digit numbers without dealing with a large range of keys.
The list [46, 23, 51, 59] will first be ordered as [51, 23, 46, 59], since for the 1's place 1 < 3 < 6 < 9. Sorting on the 10's place then gives [23, 46, 51, 59].
Another important property of radix sort is its stability: it is a stable sorting algorithm. The runtime of radix sort is O(d·(n + k)) for d-digit keys, which is linear in n when the number of digits is constant.
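An LSD (least-significant-digit) radix sort sketch for non-negative integers, running a stable counting sort on one decimal digit per pass:

```python
def radix_sort(arr):
    """LSD radix sort for non-negative integers, one decimal digit per pass."""
    exp = 1
    while max(arr) // exp > 0:
        # stable counting sort on the digit at place value `exp`
        counts = [0] * 10
        for x in arr:
            counts[(x // exp) % 10] += 1
        for i in range(1, 10):
            counts[i] += counts[i - 1]
        output = [0] * len(arr)
        for x in reversed(arr):       # backwards pass preserves stability
            d = (x // exp) % 10
            counts[d] -= 1
            output[counts[d]] = x
        arr = output
        exp *= 10
    return arr

print(radix_sort([46, 23, 51, 59]))  # [23, 46, 51, 59]
```

After the first pass on the 1's place, the intermediate order is [51, 23, 46, 59], matching the worked example above.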
Ending Note:
Sorting algorithms are quite important in computer science as they help in reducing the complexity of the problem. The sorting algorithms that we discussed above have useful applications in databases, search algorithms, data structure, and many other fields of computer science.
Get one step closer to your dream job!
Prepare for your programming interview with Data Structures & Algorithms Interview Questions You’ll Most Likely Be Asked. This book has a whopping 200 technical interview questions on lists, queues, stacks, and algorithms and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.


Future Job Market for Java Professionals

by Vibrant Publishers on May 22, 2022

When contemplating opportunities for Java professionals, many dismiss it as an outdated technology, wondering whether Java will remain in demand. Let us dispel this illusion for you with some illuminating facts.
According to CODEGYM, the Java job market had more opportunities in the year 2020 than ever before, despite the global pandemic crisis, with a large number of Java job vacancies. Even the job switch rate for Java professionals is less than 8%, compared to 27% for the software development profession as a whole and 35% for database administrators. Moreover, even when offered a higher-level administrative role, a significant number of Java developers simply prefer to retain their programming position. If this isn't the best proof that Java programming is the ideal career choice for the vast majority of professionals, we don't know what is. We can state that the future for Java professionals is as bright as the rising sun.
Now that it’s pretty clear the Java job market is not going anywhere, Java job vacancies will witness an increasing trend in the coming decade. So, let us begin with what this beloved programming language has in store for all Java Job seekers out there.
What can Java professionals expect from the Java job market in the near future?
According to reports, the IT industry will witness significant job shifts, resulting in numerous Java job vacancies. With the advancement of the tech industry, the Java job market will experience a favorable turn as many corporations shall rely on this programming language to create their core offerings.
As for Java job vacancies, the language reaches a rather high ranking season after season. The above facts back up the Java obsession: fresh graduates and experienced professionals alike remain inclined towards Java. Presently, there are over 7 million Java developers, with around 0.5 million new coders joining the Java community each year. This showcases the level of competition in the field for Java developers.
How can Java professionals achieve lucrative jobs in the cut-throat Java job market?
Java is unquestionably an excellent way to begin for people who are new to coding. Even the Java professionals who have sustained in the market till now need to stay updated with the market needs to scale their careers to new heights. However, getting past the screenings and interviews for a Java job is no cakewalk. Coders have to go through various technical rounds of tests and interviews. Adequate preparation of commonly asked questions in a Java job interview is crucial for selection. One can unearth smart tactics from specially compiled books that put the candidate in charge and push them to do their best. Acquiring additional skills in Java-related technologies can also provide you an edge in the Java job market.
What is the scope for other Java-related technologies/platforms?
Java professionals can multiply their prospects for a high-paying job by upgrading their skills and mastering related tools and frameworks. JavaScript remains one of the most talked-about programming languages apart from Java. This language has several frameworks (Vue.js, jQuery, Angular.js, and React.js) with a strong presence in the market and is continually expanding. As per DevSkiller's report, JavaScript stands as the most sought-after IT skill in developers worldwide. At least 80 percent of websites rely on a third-party JavaScript web framework or library to perform client-side scripting tasks.
Learn more about Java and Java based technologies for job interviews from handy guides by Vibrant Publishers.
To understand the potential of Java in the job market, it is also necessary to know how it fares in contrast to other popular languages. So, before we move further, let's take a peek at Java's performance against competing languages such as Python and C++.
Where does Java stand in comparison to other eminent languages like Python and C++?
The most common programming languages used besides Java are C++ and Python. These have been the foundation for almost all prominent development projects worldwide.
C++ has grown in popularity as a fast and compiled programming language, and it is among the first languages that a newbie programmer learns. However, in contrast to Java, the language has lower portability. C++ programmers are anticipated to have a satisfactory amount of opportunities at least until 2022. Candidates with substantial expertise will have bright prospects and multiple avenues in C++ programming.
On the other hand, Python is an interpreted language that is both modern and quick to write; Python code is often 3-4 times shorter than equivalent Java code. There has always been a tussle between Python and Java in terms of demand. Python programmers have a very strong job market for web development projects; as a result, there is always business for a Python coder.
Java remains popular because of its platform independence. Programmers have used this language to give life to many popular apps and software systems. Java is also being employed to create solutions for machine learning, genetic programming, and multi-robotic systems, all of which have a promising future—no wonder the Java job market continues to thrive.
Conquer greater heights in the Java job market with robust preparedness!
Some of the world’s biggest companies, such as Twitter, Google, LinkedIn, eBay, and Amazon, use Java to establish a coherent architecture between their web application and backend systems. This fact emphasizes the scope and appeal of Java Programmers. As a result, the salaries of Java Programmers in many countries are among the highest in the computer and internet networking industries. Artificial Intelligence, Big Data, and Blockchain are some of the new technologies where the scope of Java is likely to expand.
Therefore, being on your feet and educating yourself to meet industry standards improves your chances of landing a successful job in Java programming. A great deal of preparation with assistance from Vibrant Publishers will help you accomplish this goal.
Remember friends, the best way to predict your future is to create it! We wish you success!
The new edition of the book, Core Java Interview Questions You’ll Most Likely Be Asked, released in Sep 2021, has all the goods of the old edition plus additions like latest Java interview questions, scenario based questions and a tutorial on building an ATS compliant resume. The book is ideal for job seekers with zero to five years of experience.


by Vibrant Publishers on May 22, 2022

Searching algorithms belong to a category of widely used algorithms in Computer Science. Searching algorithms are mainly used to lookup an entry from data structures, such as linked lists, trees, graphs, etc. In this blog, we will discuss the working, types, and a few implementations of searching algorithms.
Searching Algorithm:
A searching algorithm is used to locate and retrieve an element – say x – from a data structure where it is stored – say a list. The algorithm returns the location of the searched element or indicates if it is not present in the data structure.
Searching algorithms are categorized into two types depending upon their search strategy.
Sequential Search: These algorithms linearly search the whole data structure by visiting every element. These algorithms are, therefore, comparatively slower. An example is a linear search algorithm.
Interval Search: These algorithms search within a sorted data structure. The idea is to probe the center of the structure and then continue the search in one of the two halves. These algorithms are more efficient. An example is the binary search algorithm.
Next, let’s discuss some commonly used searching algorithms.
Linear Search:
Linear search, the most basic searching algorithm, traverses the data structure sequentially. The algorithm visits every element one by one until a match is found; if the element is not present, -1 is returned. Linear search applies to both sorted and unsorted data structures. Although the implementation is quite easy, linear search has a time complexity of O(n), where n is the size of the data structure: the cost grows linearly with the size, which is quite slow in terms of efficiency.
Binary Search:
Binary search works on a sorted data structure. It finds the middle of the list. If the middle element is the required element, it is returned as a result. Otherwise, it is compared with the required element. If the middle element is smaller than the required element, the right half is considered for further search, else the left half is considered. Again, the middle of the selected half is selected and compared with the required value. The process is repeated until the required value is found.
Binary search, with a time complexity of O(log n), is an efficient search algorithm, especially for large, sorted data records.
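An iterative sketch of the halving process just described:

```python
def binary_search(arr, target):
    low, high = 0, len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            low = mid + 1    # middle element too small: search the right half
        else:
            high = mid - 1   # middle element too large: search the left half
    return -1                # not present

print(binary_search([2, 5, 8, 12, 16, 23], 16))  # 4
print(binary_search([2, 5, 8, 12, 16, 23], 7))   # -1
```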
Interpolation Search:
Interpolation search is an improvement over binary search for sorted, evenly distributed data. Also called extrapolation search, this algorithm does not always probe the middle of the data structure; instead, it estimates the position of the required element from its value, which can save many of the halving steps a binary search would perform.
For an array, arr[], x is the element to be searched. The ‘start’ is the starting index of the array and ‘end’ is the last index. The position of the element x is calculated by the formula:
position = start + [ (x - arr[start]) * (end - start) / (arr[end] - arr[start]) ]
The position is calculated, and if arr[position] matches x, it is returned. If arr[position] is less than the required element, the search continues in the right sub-array; else it continues in the left sub-array. The process is repeated until the element is found or the search range shrinks to zero.
Interpolation search works when the data structure is sorted. When the elements are evenly distributed, the average run time of interpolation search is O(log(log(n))).
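A sketch of interpolation search using the position formula above (integer division keeps the estimate a valid index):

```python
def interpolation_search(arr, x):
    start, end = 0, len(arr) - 1
    while start <= end and arr[start] <= x <= arr[end]:
        if arr[start] == arr[end]:              # avoid division by zero
            return start if arr[start] == x else -1
        # estimate the likely position from the value distribution
        pos = start + (x - arr[start]) * (end - start) // (arr[end] - arr[start])
        if arr[pos] == x:
            return pos
        if arr[pos] < x:
            start = pos + 1                     # continue in the right sub-array
        else:
            end = pos - 1                       # continue in the left sub-array
    return -1

print(interpolation_search([10, 20, 30, 40, 50], 40))  # 3
```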
Breadth-First Search Algorithm:
BFS is used to search for a value in a graph or tree data structure. It traverses the vertices level by level to reach the required value, maintaining lists of visited and non-visited vertices, and is implemented using a queue data structure. The idea is to visit every node at one level (breadth-wise) before moving to the next level of the graph.
The algorithm selects the first element from the queue and adds it to the visited list. All the adjacent vertices (neighbors) of this element are traversed. If a neighbor is not present in the visited list, it means it is not traversed yet. It is placed at the back of the queue and in the visited list. If a neighbor is present in the visited list already, it means it has been traversed and is ignored. The process is repeated until the required element is found or the queue is empty.
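The queue-driven steps above can be sketched on a small adjacency-list graph (the example graph here is illustrative):

```python
from collections import deque

def bfs(graph, start, target):
    """Return True if `target` is reachable from `start`, visiting level by level."""
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()        # take the first element from the queue
        if node == target:
            return True
        for neighbor in graph[node]:
            if neighbor not in visited:   # skip already-traversed vertices
                visited.add(neighbor)
                queue.append(neighbor)    # place at the back of the queue
    return False

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A', 'D'), bfs(graph, 'D', 'A'))  # True False
```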
Depth First Search Algorithm:
This is another search algorithm used for graphs and trees. In contrast to the BFS algorithm, DFS works on the strategy of backtracking: the algorithm keeps moving to deeper levels (depth-wise) until there are no more unvisited nodes, then backtracks to the level above and repeats for the next node. DFS can be written recursively or implemented iteratively using a stack data structure.
The algorithm pops an element from the top of the stack and places it in the visited list. All the nodes adjacent to this element are also selected. If any adjacent node value is not present in the visited list, it means it is not traversed yet, and it is placed at the stack top. Steps are repeated until the element is found or the stack is empty.
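An iterative DFS sketch using an explicit stack, on the same kind of illustrative adjacency-list graph:

```python
def dfs(graph, start, target):
    """Depth-first reachability check with an explicit stack."""
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()            # pop the element at the stack top
        if node == target:
            return True
        if node in visited:
            continue
        visited.add(node)
        for neighbor in graph[node]:
            if neighbor not in visited:   # not traversed yet: push onto the stack
                stack.append(neighbor)
    return False

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(dfs(graph, 'A', 'D'))  # True
```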
Next, let’s discuss some common algorithm solutions for searching problems.
Problem 1: Find the largest element in an array of unique elements.
Algorithm: We can use Linear Search to find the largest element in an array of size n. The time complexity would be O(n).
Initialize max with the first array element. In a loop, visit each remaining element and compare it with max; if the element is greater, update max. When the loop ends, max contains the largest element of the array.
Problem 2: Find the position of an element in an infinite sorted array
Algorithm: A sorted array indicates that a binary search algorithm can work here. Another point to note is that the array is infinite, meaning we don't know its maximum index or upper bound. To solve such a problem, we will first find the upper index and then apply binary search on the array.
To find the upper bound/index of the infinite array:
Initialize integers min, max, and value for the minimum index, maximum index, and first array element: set min = 0, max = 1, and value = arr[0].
Run a loop while value is less than the element we are looking for: set min = max, double max, and store arr[max] in value. When the loop ends, min and max are the lower and upper bounds of the search range, and we can call binary search on these values.
For binary search:
Find the middle index as mid = min + (max - min)/2.
If arr[mid] is the value we are looking for, return mid. If arr[mid] is greater than the search value, call binary search on the left half, taking mid - 1 as the maximum bound. Else, call binary search on the right half, taking mid + 1 as the minimum bound.
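The doubling-then-binary-search approach can be sketched as follows; since Python lists are finite, a sufficiently long sorted list stands in for the "infinite" array:

```python
def search_unbounded(arr, x):
    """Find x in a sorted array of unknown length by doubling the upper bound."""
    lo, hi = 0, 1
    while arr[hi] < x:        # grow the window until arr[hi] >= x
        lo = hi
        hi *= 2
    # ordinary binary search inside [lo, hi]
    while lo <= hi:
        mid = lo + (hi - lo) // 2
        if arr[mid] == x:
            return mid
        if arr[mid] < x:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

arr = [3, 5, 7, 9, 10, 90, 100, 130, 140, 160, 170]
print(search_unbounded(arr, 10))  # 4
```

In a true unbounded setting, reading `arr[hi]` would never run off the end; with a real list, a bounds check would be needed.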
Problem 3: Find the number of occurrences of an element in a sorted array
Algorithm: To find how many times the element appears in a sorted array, we can use both linear and binary search.
In linear search, maintain an integer counter, result, and the search key. Loop over the entire array; on every occurrence of the key, increment result by 1. At the end of the loop, result contains the number of occurrences of the key.
Problem 4: Find if the given array is a subset of another array:
Algorithm: For this problem, we don’t know if the array is sorted or not.
The algorithm for linear search includes running two loops over array 1 and array 2. The outer loop selects elements of array 2 one by one; the inner loop performs a linear search in array 1 for that element. The complexity of this algorithm is O(m×n), where m and n are the sizes of array 1 and array 2, respectively.
The problem can also be solved using binary search after sorting. First sort array 1; the time complexity of sorting is O(m log m). Then, for each element of array 2, call binary search on array 1, which costs O(n log m) in total. The overall time complexity is O(m log m + n log m), where m and n are the sizes of arrays 1 and 2, respectively.
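A sketch of the sort-then-binary-search version, using the standard `bisect` module for the binary search step:

```python
from bisect import bisect_left

def is_subset(arr1, arr2):
    """Return True if every element of arr2 appears in arr1."""
    sorted1 = sorted(arr1)                        # O(m log m)
    for x in arr2:                                # n binary searches: O(n log m)
        i = bisect_left(sorted1, x)
        if i == len(sorted1) or sorted1[i] != x:  # x not found in sorted1
            return False
    return True

print(is_subset([11, 1, 13, 21, 3, 7], [11, 3, 7, 1]))  # True
print(is_subset([1, 2, 3, 4], [1, 2, 5]))               # False
```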
Ending Note:
In this blog on searching algorithms, we discussed a very basic algorithm type. Search algorithms are the most commonly used algorithms, as the concept of searching data and records is quite common in software development. The records from databases, servers, files, and other storage locations are searched and retrieved frequently. Efficient searching algorithms with efficient time and storage complexity play a key role here.
Get one step closer to your dream job!
Prepare for your programming interview with Data Structures & Algorithms Interview Questions You’ll Most Likely Be Asked. This book has a whopping 200 technical interview questions on lists, queues, stacks, and algorithms and 77 behavioral interview questions crafted by industry experts and interviewers from various interview panels.


by Vibrant Publishers on May 22, 2022

The concept of data compression is very common in computer applications. Information is transmitted as a bit-stream of 0’s and 1’s over the network. The goal is to transmit information over the network with a minimum number of bits. Such transmission is faster in terms of speed and bandwidth. Huffman Coding is one such technique of data compression. This blog discusses Huffman coding, how it works, its implementation, and some applications of the method.
What is Huffman Coding?
This data compression method assigns codes to unique pieces of data for transmission. The character (or byte) occurring most often is assigned the shortest code, while less frequent characters are assigned progressively longer codes. In this way, the whole data is encoded into a stream of bits.
The two types of encoding methods used by Huffman coding are Static Huffman encoding and Dynamic Huffman encoding.
Static Huffman encoding builds the code table once from known character frequencies and uses it for the whole text. Dynamic (adaptive) Huffman coding, on the other hand, updates the codes as the character frequencies are read. Because code lengths depend on character frequency, this approach is often known as variable-length encoding.
Huffman Coding Implementation:
Huffman coding is done in two steps. The first step is to build a binary tree of data characters depending upon the character frequency. The second step is to encode these binary codes in bits of 0’s and 1’s.
Building a Huffman Tree:
The binary tree consists of two types of nodes: leaf nodes and parent (internal) nodes. Each node records the number of occurrences of the characters beneath it, and each parent node has two children. The left child is indicated by bit 0 and the right child by bit 1. The algorithm for building a Huffman binary tree using a priority queue includes the following steps:
1- Each character is added to the queue as a leaf node.
2- While the queue has more than one element, remove the two nodes with the lowest frequencies and create a parent node with these two as child nodes. The frequency of the parent node is the sum of these two nodes. Add this parent node back to the priority queue.
Consider the following example for better understanding. Suppose we have characters p, q, r, s, and t with frequencies 4, 6, 7, 7, and 16.
Now, build the binary tree:
Encoding the Huffman binary tree:
As discussed above, the left nodes are assigned 0 bit and right nodes 1 bit. In this way, codes are generated for all paths from the root node to any child node.
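The two steps above — building the tree with a priority queue and reading 0/1 edge labels off it — can be sketched in Python. This is a minimal illustration, not the book's code; `heapq` serves as the priority queue, and the tie-breaker integer keeps heap comparisons well-defined:

```python
import heapq

def huffman_codes(freqs):
    """Build a Huffman tree and return a character -> bit-string table.

    freqs: dict mapping character -> frequency.
    """
    # Heap entries are (frequency, tie-breaker, tree). A leaf is just a
    # character; an internal node is a (left, right) pair.
    heap = [(f, i, ch) for i, (ch, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two lowest-frequency nodes
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}

    def walk(node, prefix):
        if isinstance(node, tuple):          # internal node
            walk(node[0], prefix + "0")      # left edge -> bit 0
            walk(node[1], prefix + "1")      # right edge -> bit 1
        else:
            codes[node] = prefix or "0"      # single-character edge case
    walk(heap[0][2], "")
    return codes

codes = huffman_codes({"p": 4, "q": 6, "r": 7, "s": 7, "t": 16})
```

For the example frequencies, t ends up with a 1-bit code and the other four characters with 3-bit codes, for 88 encoded bits in total. Note that Huffman trees are not unique: depending on tie-breaking, t may receive the code 0 here rather than the 111 shown in the figure, but the total bit count is the same.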
Huffman Compression technique:
By building the Huffman tree and replacing each character with its code, we have applied the technique of Huffman compression, also called Huffman encoding. Encoding the data characters as bits of 0s and 1s reduces the overall bit size of the data; hence it is called a compression technique.
Let’s see how our above data characters p, q, r, s, and t are compressed.
Considering each character is represented by 8 bits or 1 byte in computer language, the total bit size of the data characters (4 p, 6 q, 7 r, 7 s, and 16 t) is 40 * 8 = 320. After the Huffman compression algorithm, the bit size of the data reduces to 40 + 40 + 88 = 168 bits.
Huffman Decompression Technique:
For the decompression technique, we need codes. Using these codes, we traverse the Huffman tree and decode the data.
We start from the root node and follow the left branch for each 0 bit and the right branch for each 1 bit. When a leaf node is reached, we output its character and restart from the root.
To decode character t, we start from the root node and traverse the path 111 until we reach a leaf node, that is, t.
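Because Huffman codes are prefix-free, decoding can also be sketched by matching incoming bits against the code table, without an explicit tree. The table below is hypothetical (any prefix-free assignment works), chosen so that t is 111 as in the example above:

```python
def huffman_decode(bits, codes):
    """Decode a bit-string using a prefix-free character -> code table."""
    inverse = {code: ch for ch, code in codes.items()}
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:        # prefix-free: the first match is the answer
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

# Hypothetical prefix-free table for the characters p, q, r, s, t.
table = {"p": "00", "q": "01", "r": "10", "s": "110", "t": "111"}
decoded = huffman_decode("11100", table)
```

Here the bit-string 11100 decodes to "tp": 111 matches t, then 00 matches p.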
Time Complexity of Huffman Coding Algorithm:
As Huffman coding is implemented using a binary tree built from a priority queue, it takes O(n log n) time, where n is the number of distinct characters being compressed.
Ending Note:
In this blog on Huffman coding, we have discussed an algorithm that forms the basis of many compression techniques used in software development. Various compression formats like GZIP and WinZip use Huffman coding. Image compression techniques like JPEG and PNG also work on the Huffman algorithm. Although it is sometimes deemed as a slow technique, especially for digital media compression, the algorithm is still widely used due to its storage efficiency and straightforward implementation.

Blogs On Programming

by Vibrant Publishers on May 22, 2022

In this blog on Graphs in Data Structures, we will learn about graph representation, operations, some graph-related algorithms, and their uses. The graph data structure is widely used in real-life applications. Its main implementation is seen in networking, whether it is on the internet or in transportation. A network of nodes is easily represented through a graph. Here, nodes can be people on a social media platform like Facebook friends or LinkedIn connections or some electric or electronic components in a circuit. Telecommunication and civil engineering networks also make use of the graph data structure.
Graphs in Data Structures:
Graphs in data structures consist of a finite number of nodes connected through a set of edges. Graphs visualize a relationship between objects or entities.
Nodes in graphs are entities. The relationship between them is defined through paths called edges. Nodes are also called Vertices of graphs. Edges represent a unidirectional or bi-directional relation between the vertices.
An undirected edge between two entities or nodes, A and B, represents a bi-directional relation between them.
In a rooted graph such as a tree, the root node is the ancestor of all other nodes and does not have a parent node itself; a tree has exactly one root node. Leaf nodes have a parent but no child nodes, so there are no outgoing edges from leaf nodes.
Types of Graphs:
Undirected Graph: A graph with all bi-directional edges is an undirected graph. In an undirected graph below, we can traverse from node A to node B as well as from node B to node A as the edges are bi-directional.
Directed Graph: This graph consists of all unidirectional edges. All edges are pointing in a single direction. In a directed graph below, we can traverse from node A to node B but not from node B to node A as the edges are unidirectional.
A tree is a special type of undirected graph. Any two nodes in a tree are connected by exactly one path, so there are no closed loops or cycles in a tree, whereas cycles can be present in graphs.
A tree with n nodes has exactly n-1 edges. In graphs, nodes can have more than one parent node, but in trees, each node has exactly one parent node except for the root node, which has no parent.
Graph representation:
Usually, graphs in data structures are represented in two ways: the adjacency matrix and the adjacency list. First, what is adjacency in a graph? We say that a vertex is adjacent to another vertex if there is an edge between them. Otherwise, the vertices are non-adjacent. In the graph below, vertex B is adjacent to C but non-adjacent to vertex D.
Adjacency Matrix:
In the adjacency matrix, a graph is represented as a 2-dimensional array. Both the rows and the columns represent the vertices, and each cell holds 1 or 0 to tell whether an edge is present between the corresponding pair of vertices. For example, if the value of M[A][D] is 1, there is an edge between the vertices A and D of the graph.
See the graph below and then its adjacency matrix representation for better understanding.
The adjacency matrix gives the benefit of efficient edge lookup, that is, checking in constant time whether two vertices are connected by an edge. But at the same time, it requires space for every possible vertex pair, whether or not an edge exists. For fast lookups, we compromise on space.
Adjacency Lists:
In the adjacency list, an array of lists is used to represent a graph. Each list contains all edges adjacent to a particular vertex. For example, list LA will contain all the edges adjacent to the vertex A of a graph.
As the adjacency list stores information only for the edges actually present in the graph, it is a space-efficient representation. For sparse graphs, where edges are few, the adjacency list is a good option, whereas an adjacency matrix would take up a lot of unnecessary space.
The adjacency list of the above graph would be as follows.
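Both representations can be sketched in a few lines of Python. The four-vertex graph used here is hypothetical, standing in for the figure:

```python
def adjacency_matrix(vertices, edges):
    """Undirected graph as a 2-D 0/1 matrix (rows and columns are vertices)."""
    index = {v: i for i, v in enumerate(vertices)}
    m = [[0] * len(vertices) for _ in vertices]
    for u, v in edges:
        m[index[u]][index[v]] = 1    # mark the edge in both directions,
        m[index[v]][index[u]] = 1    # since the graph is undirected
    return m

def adjacency_list(vertices, edges):
    """Undirected graph as a dict mapping vertex -> list of neighbours."""
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    return adj

V = ["A", "B", "C", "D"]                               # hypothetical graph
E = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
matrix = adjacency_matrix(V, E)
lists = adjacency_list(V, E)
```

With four vertices and four edges, the matrix stores 16 cells while the lists store only 8 entries; the gap widens quickly on sparse graphs.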
Graph Operations:
Some common operations that can be performed on a graph are:
Element search
Graph traversal
Insertion in graph
Finding a path between two nodes
Graph Traversal Algorithms:
Visiting the vertices of a graph in a specific order is called graph traversal. Many graph applications require graph traversal according to its topology. Two common algorithms for graph traversal are Breadth-First Search and Depth First Search.
Breadth-First Search Graph Traversal:
The breadth-first search approach visits the vertices breadthwise. It covers all the vertices at one level of the graph before moving on to the next level. The breadth-first search algorithm is implemented using a queue.
The BFS algorithm includes three steps:
Pick a starting vertex and insert it into the queue; this is level 0 of the traversal.
Dequeue a vertex, mark it visited, and enqueue all its unvisited adjacent vertices; these form the next level.
Repeat the previous step until the whole graph is covered or the queue is empty.
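The steps above can be sketched as follows, using `collections.deque` as the queue and a hypothetical adjacency list matching the A-E example:

```python
from collections import deque

def bfs(adj, start):
    """Visit vertices level by level; adj maps vertex -> list of neighbours."""
    visited = [start]            # doubles as the visit order
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in visited:     # skip vertices already seen
                visited.append(v)
                queue.append(v)
    return visited

# Hypothetical graph: A at level 0, B and C at level 1, D and E at level 2.
adj = {"A": ["B", "C"], "B": ["A", "D", "E"],
       "C": ["A"], "D": ["B"], "E": ["B"]}
order = bfs(adj, "A")
```

Starting from A, the traversal finishes level 1 (B, C) before touching level 2 (D, E).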
Consider the graph below where we have to search node E.
First, we will visit node A then move on to the next level and visit nodes B and C. When nodes at this level are finished we move on to the next level and visit nodes D and E.
Nodes once visited can be stored in an array, whereas adjacent nodes are stored in a queue.
The time complexity of BFS is O(V+E), where V is the number of vertices and E is the number of edges.
Depth First search Graph Traversal:
The depth-first search (DFS) algorithm works on the approach of backtracking. It goes as deep as possible along each branch and backtracks when it reaches a dead end. DFS can be implemented using a stack. It includes the following steps.
Pick a starting node and push it onto a stack.
Pop a vertex, mark it visited, and push all its unvisited adjacent vertices onto the stack.
Repeat the previous step until the required result is achieved or the stack is empty.
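The same steps can be sketched with an explicit stack in place of the queue; the adjacency list here is hypothetical, shaped to match the A-to-E walk described below:

```python
def dfs(adj, start):
    """Depth-first traversal using an explicit stack."""
    visited = []
    stack = [start]
    while stack:
        u = stack.pop()
        if u in visited:
            continue                 # may have been pushed twice
        visited.append(u)
        # Push neighbours in reverse so the leftmost one is visited first.
        for v in reversed(adj[u]):
            if v not in visited:
                stack.append(v)
    return visited

# Hypothetical graph: from A we dive to C, backtrack to B, then reach E via D.
adj = {"A": ["B"], "B": ["C", "D"], "C": [], "D": ["E"], "E": []}
order = dfs(adj, "A")
```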
Consider the graph below:
To reach node E we start from node A and move downwards, as deep as we can go. We will reach node C. Now, we backtrack to node B, to go downwards towards another adjacent vertex of B. In this way, we reach node E.
IsReachable is a common problem scenario for graphs in Data Structures. The problem asks whether there is a path between two vertices, that is, whether one vertex is reachable from the other. If a path exists, the function returns true; otherwise, false.
This problem can be solved with both of the traversal approaches we have discussed above, that is BFS and DFS.
Consider a function with three input parameters: the graph and the two vertices we are interested in. Let's call them u and v. Using a BFS algorithm, the solution consists of the following steps:
Initialize an array to store visited vertices of size equal to the size of the graph.
Create a queue and enqueue the first vertex, in this case, u = 1. Mark u as visited and store it in the array.
Add all the adjacent vertices of u in the queue.
Now, dequeue the front element from the queue. Enqueue all its adjacent vertices in the queue. If any vertex is the required vertex v, return true. Else continue visiting adjacent nodes and keep on adding them to the queue.
Repeat the above process in a loop. Since there is a path from vertex 1 to vertex 3 we will get a true value.
If there is no path, for example, from node 1 to node 8, our queue will run empty and false will be returned, indicating that vertex v is not reachable from vertex u.
Below is an implementation of the IsReachable function for the above graph in Python programming language.
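The original listing did not survive extraction, so here is a minimal BFS-based sketch of IsReachable. The adjacency list is hypothetical, chosen so that vertex 1 reaches vertex 3 but not the isolated vertex 8:

```python
from collections import deque

def is_reachable(adj, u, v):
    """Return True if v can be reached from u, using BFS."""
    visited = {u}
    queue = deque([u])
    while queue:
        node = queue.popleft()
        if node == v:                # found the target vertex
            return True
        for nxt in adj[node]:
            if nxt not in visited:
                visited.add(nxt)
                queue.append(nxt)
    return False                     # queue ran empty: no path exists

# Hypothetical graph: 1 -> 2 -> 3, with 8 isolated.
graph = {1: [2], 2: [3], 3: [], 8: []}
```

`is_reachable(graph, 1, 3)` returns True, while `is_reachable(graph, 1, 8)` returns False, matching the two cases described above.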
Weighted Graphs:
Weighted graphs have weights or costs assigned to their edges; they may be directed (di-graphs) or undirected. Such weighted graphs are used in problems like finding the shortest path between two nodes. These implementations help in several real-life scenarios. For example, a weighted graph approach is used to compute the shortest route on a map or to trace out the shortest path for a delivery service in a city. Using the cost on each edge, we can compute the fastest path.
Weighted graphs lead us to an important application of graphs, namely finding the shortest path on the graph. Let’s see how we can do it.
Finding the Shortest Path on the Graph:
The goal is to find the path between two vertices such that the sum of costs of their edges is minimum. If the cost or weight of each edge is one, then the shortest path can be calculated using a Breadth-first Search approach. But if costs are different, then different algorithms can be used to get the shortest path.
Three algorithms are commonly used for this problem.
Bellman Ford’s Algorithm
Dijkstra’s Algorithm
Floyd-Warshall’s Algorithm
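As a taste of the idea, here is a minimal sketch of Dijkstra's algorithm using a binary heap. It assumes non-negative edge weights, and the three-vertex graph is hypothetical:

```python
import heapq

def dijkstra(adj, source):
    """Shortest distances from source; adj maps vertex -> [(neighbour, weight)]."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale entry, already improved
        for v, w in adj[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w           # found a cheaper path to v
                heapq.heappush(heap, (d + w, v))
    return dist

# Hypothetical weighted graph: the direct A->C edge (cost 4) loses to A->B->C (cost 3).
g = {"A": [("B", 1), ("C", 4)], "B": [("C", 2)], "C": []}
dist = dijkstra(g, "A")
```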
Minimum Spanning Tree:
A spanning tree is a sub-graph of an undirected graph containing all the vertices of the graph with the minimum possible number of edges. If any vertex is missing, it is not a spanning tree. The total number of spanning trees that can be formed from the n vertices of a complete graph is n^(n-2) (Cayley's formula).
A type of spanning tree where the sum of the weights of the edges is minimum is called a Minimum Spanning Tree. Let’s elaborate on this concept through an example.
Consider the following graph.
The possible spanning trees for the above graph are:
The Minimum Spanning tree for the above graph is where the sum of edges’ weights = 7.
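One standard way to compute a minimum spanning tree is Kruskal's algorithm: sort the edges by weight and keep each edge that does not close a cycle (checked with a union-find structure). A sketch with hypothetical weights whose MST happens to sum to 7, echoing the example above:

```python
def kruskal(vertices, edges):
    """Minimum spanning tree via Kruskal; edges are (weight, u, v) tuples."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    tree, total = [], 0
    for w, u, v in sorted(edges):           # cheapest edges first
        ru, rv = find(u), find(v)
        if ru != rv:                        # edge joins two components: keep it
            parent[ru] = rv
            tree.append((u, v, w))
            total += w
    return tree, total

# Hypothetical weighted graph on four vertices.
edges = [(1, "A", "B"), (2, "B", "C"), (4, "A", "C"),
         (4, "C", "D"), (5, "B", "D")]
tree, total = kruskal(["A", "B", "C", "D"], edges)
```

The edges A-C and B-D are skipped because each would close a cycle, leaving three edges with total weight 7.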
Incidence Matrix:
The incidence matrix relates the vertices and edges of a graph: it stores vertices as rows and edges as columns, in contrast to the adjacency matrix, where both rows and columns are vertices.
An incidence matrix of an undirected graph is an n x m matrix A, where n is the number of vertices of the graph and m is the number of edges between them. If A[i][j] = 1, the vertex vi and the edge ej are incident.
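Building an incidence matrix can be sketched directly from that definition; the three-vertex, two-edge undirected graph below is hypothetical:

```python
def incidence_matrix(vertices, edges):
    """Rows are vertices, columns are edges; 1 marks an incident pair."""
    index = {v: i for i, v in enumerate(vertices)}
    m = [[0] * len(edges) for _ in vertices]
    for j, (u, v) in enumerate(edges):
        m[index[u]][j] = 1    # each undirected edge is incident
        m[index[v]][j] = 1    # to exactly two vertices
    return m

V = ["A", "B", "C"]
E = [("A", "B"), ("B", "C")]   # hypothetical undirected edges
m = incidence_matrix(V, E)
```

Each column has exactly two 1s, one for each endpoint of that edge; vertex B, which touches both edges, gets a 1 in both columns.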
Incidence List:
Just like the adjacency list, there is also an incidence list representation of a graph. The incidence list is implemented using an array of lists, where each list stores all the edges incident to a vertex.
In the graph below, the list LA shows all the edges incident to vertex A.
Ending Note:
In this blog on Graphs in Data Structures, we discussed the implementation of graphs, their operations, and real-life applications. Graphs are widely used, especially in networking, whether it is over the internet or for mapping out physical routes. Hopefully, now you have enough knowledge regarding a data structure with extensive applications.

Blogs On Programming

by Vibrant Publishers on May 22, 2022

Introduction
In this world of constantly changing specifications and requirements, it is not easy to keep your web services up to date. Meticulous planning and documentation are a must to win the race. There are a few tools on the market that can help you do the job. We will discuss one such tool, Swagger, an API documentation and development toolset.
Swagger
Swagger is not just a platform but a set of tools that help you document and develop APIs, all following the OpenAPI Specification standard. The official website (www.swagger.io) describes three different toolsets:
Swagger Editor
Swagger UI
Swagger Codegen
Let us have a detailed look into these toolsets:
Swagger Editor
The Swagger Editor is used for designing and creating APIs based on the OpenAPI Specification. The editor can be installed on your computer, or you can use it by logging into your Swagger account. It has an intelligent auto-completion feature and a visual editor that lets you interact with the API during its creation. Below is a screenshot from the Swagger Editor.
Swagger UI
The Swagger UI is another tool that lets you visually inspect and interact with APIs. These APIs can be created with the Swagger Editor, and there is provision for uploading other API definitions as well. Once an API is given to the tool, it automatically generates a visual schema of the API. One very useful feature is that it also creates documentation for your API, which is helpful when implementing the API on the back end or the client side. You can execute the APIs from there and test their functionality as well.
Swagger Codegen
The Codegen tool takes care of generating server stubs and client-side SDKs for the API you are building, so you don't need to worry about those aspects while developing an API. It supports over 20 programming languages, and creating these stubs and SDKs is quite an easy task in Codegen. This lets you focus on developing APIs rather than thinking about the client and server implementations.
Opening an account in Swagger
Opening an account in Swagger is quite easy. Just visit the website www.swagger.io and sign up by providing some basic details. You can also open an account using your GitHub credentials. Once you create an account, you will be taken to the SwaggerHub dashboard. Below is a screenshot of the same.
From the dashboard, you can create or import your APIs. These APIs can be public or private according to your requirement. If you need other teams from swagger to see and collaborate on the API development, then they have to be public.
Once you are working on your APIs, the view will be changed to something as shown below:
Here, you have access to all the tools, the editor, UI, API docs and Codegen. This will make working with the entire process a seamless experience. Once the API is created, there will be an option to download the corresponding server stub and client SDK.
The default account is free, which limits use to one user per team. If you need more, paid options are available.
Summary
Swagger is a set of tools targeted at API development and documentation. The visual editor and Codegen tools are some of the compelling features of this platform. As an API developer, it is worth trying Swagger for your use cases.

Blogs On Programming

by Vibrant Publishers on May 22, 2022

Introduction
APIs are the heart of every web service. When we deal with enterprise web services, there may be requirements for building thousands of APIs, and it is not a simple task for developers to create all of them manually. To solve this problem, there are many API tools available that handle the complete lifecycle of an API. In this article, we will discuss one such tool: Postman.
POSTMAN
According to the official website (www.postman.com), Postman is a collaboration platform for API development. It allows multiple people to come together and work on creating APIs for their applications. Beyond that, it is a platform that takes care of all the processes involved in API creation and management.
Use of POSTMAN in the API world
As mentioned above, the platform takes care of all requirements in API development. Let us take a look at some of the features Postman offers:
Accessibility: The platform is a hosted service and all you need to start using it is an account in postman. You can either use the browser to login to postman or use their desktop app and start working.
Organized: The APIs that you create with postman can be organized well with the feature called collections. They can be placed in folders and subfolders based on your projects.
Monitoring: Another exciting feature of Postman is the ability to monitor the APIs you have created. This makes your life much easier through instant notification of failures in any of your API services.
Testing: Nothing is solid unless you test it thoroughly. Postman offers both manual and automated testing of your APIs. You can define test cases and discuss with your team over the platform itself and put the tests into action. Also, these tests can be integrated into your CI/CD pipeline.
Versioning: You might want to provide APIs for different customers that utilize your services. Or there can be different releases of the same API whenever your application releases new versions. In these cases versioning your API is important. The postman platform offers better tagging and versioning provisions that you and your team can utilize effectively.
Getting started with POSTMAN
In order to use Postman, you need to sign up for an account first. Below is the login/signup screen of Postman.
Once you have an account you can download the postman application and start building your APIs, Test cases, etc.
In Postman you can choose between different plans based on your requirements and pricing. Initially you will be on the Free plan, which places certain limits on the number of APIs you can create and manage. When your requirements are high and there is a team working with you, it is good to choose a Team, Business, or Enterprise plan as per your convenience. Below is a screenshot of the different plans Postman offers.
Using POSTMAN
After successfully creating an account in postman, you can download the client application on your desktop and start working with it.
Once you login to your postman application, the dashboard will look like the one below.
Here you can create your APIs, put them into collections and you can perform monitoring, testing, etc. on your APIs. The UI is very intuitive and can be learned quite quickly.
You can create APIs based on JSON or YAML schema formats. There is also support for OpenAPI, GraphQL, and RAML schemas.
There is a very detailed tutorial on how to use postman on their official website: https://learning.postman.com. You must pay a visit here if you want to learn about the platform in detail.
Summary
Postman is a platform for creating and managing the lifecycle of APIs. It is versatile and has many features, including team collaboration and monitoring. Using Postman for your project will not only save time but also provide a clean and neat way of managing your APIs.