**1. Describe the method of analysis of an algorithm. What do you mean by best case, worst case and average case time complexity of an algorithm?**

ANS:

**Algorithm:** In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks.

One of the most important aspects of algorithm design is creating an algorithm that has an efficient run-time.

__Example__

*Problem:* Given a list of positive numbers, return the largest number in the list.

*Algorithm:*

- Set *max* to 0.
- For each number *x* in the list, compare it to *max*. If *x* is larger, set *max* to *x*.
- *max* is now set to the largest number in the list.
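The steps above can be sketched directly in Python (the function name `find_max` is chosen for illustration):

```python
def find_max(numbers):
    """Return the largest number in a list of positive numbers."""
    max_value = 0                  # Set max to 0
    for x in numbers:              # For each number x in the list...
        if x > max_value:          # ...if x is larger,
            max_value = x          # set max to x
    return max_value               # max is now the largest number

print(find_max([3, 41, 52, 26, 38, 57, 9, 49]))  # 57
```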

__Analysis of Algorithms__

The term "**analysis of algorithms**" was coined by **Donald Knuth**.

Analysis of algorithms is the determination of the amount of time and space resources required to execute an algorithm.

Usually, the efficiency or running time of an algorithm is stated as a function relating the input length to the number of steps, known as **time complexity**, or to the volume of memory, known as **space complexity**.

In computer science, the time complexity is the computational complexity that describes __the amount of time it takes to run an algorithm__. Time complexity is commonly estimated by counting the number of elementary operations performed by the algorithm, supposing that each elementary operation takes a fixed amount of time to perform.

A complete analysis of the running time of an algorithm involves the following steps:

- Implement the algorithm completely.
- Determine the time required for each basic operation.
- Identify unknown quantities that can be used to describe the frequency of execution of the basic operations.
- Develop a realistic model for the input to the program.
- Analyze the unknown quantities, assuming the modeled input.
- Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products.
Classical algorithm analysis on early computers could result in exact predictions of running times. Modern systems and algorithms are much more complex, but modern analyses are informed by the idea that exact analysis of this sort could be performed in principle.

Analysis of an algorithm is the process of evaluating the problem-solving capability of the algorithm in terms of the time and space required (the size of memory needed for storage during execution).

However, the main concern of analysis of algorithms is the required time or performance. Generally, we perform the following types of analysis −

**Worst-case** − The maximum number of steps taken on any instance of size n.

**Best-case** − The minimum number of steps taken on any instance of size n.

**Average case** − The average number of steps taken over all instances of size n.

Algorithm analysis is an important part of computational complexity theory, which provides theoretical estimation for the required resources of an algorithm to solve a specific computational problem. Most algorithms are designed to work with inputs of arbitrary length.

**2. Write the Complete Algorithm for Quick Sort. Calculate its Complexity in best case, worst case and average cases.**

ANS:

**Quick Sort** is a highly efficient sorting algorithm based on partitioning an array of data into smaller arrays. A large array is partitioned into two arrays: one holds values smaller than a specified value, called the pivot, on which the partition is made, and the other holds values greater than the pivot.

Quick sort is preferred for a few good reasons:

- It does not need any temporary storage array; it sorts in place, which matters when your data set is quite large.
- It is very fast because it uses divide and conquer. In this algorithm, we choose a pivot, divide the array into two sub-arrays around it, and then repeat the process on each sub-array.

**Algorithm:**

**Step 1** − Choose the highest index value as the pivot

**Step 2** − Take two variables to point left and right of the list, excluding the pivot

**Step 3** − left points to the low index

**Step 4** − right points to the high index

**Step 5** − While the value at left is less than the pivot, move left towards the right

**Step 6** − While the value at right is greater than the pivot, move right towards the left

**Step 7** − If neither step 5 nor step 6 applies, swap the values at left and right

**Step 8** − If left ≥ right, the point where they meet is the pivot's final position

*(Figure: Quick Sort pictorial representation)*
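The steps above describe a pointer-based partition with the highest-index value as pivot. A minimal runnable sketch, using the closely related Lomuto partition (also with the last element as pivot), might look like this:

```python
def partition(arr, low, high):
    """Place the pivot (last element) in its final sorted position."""
    pivot = arr[high]                       # Step 1: highest index value as pivot
    i = low - 1                             # boundary of the "smaller than pivot" region
    for j in range(low, high):
        if arr[j] <= pivot:                 # element belongs left of the pivot
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[high] = arr[high], arr[i + 1]   # put pivot between the halves
    return i + 1

def quick_sort(arr, low=0, high=None):
    """In-place quick sort of arr[low..high]."""
    if high is None:
        high = len(arr) - 1
    if low < high:
        p = partition(arr, low, high)       # pivot lands at index p
        quick_sort(arr, low, p - 1)         # recurse on the smaller-value sub-array
        quick_sort(arr, p + 1, high)        # recurse on the greater-value sub-array

data = [10, 80, 30, 90, 40, 50, 70]
quick_sort(data)
print(data)  # [10, 30, 40, 50, 70, 80, 90]
```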

__Complexity Analysis of Quick Sort__

For an array in which **partitioning** leads to unbalanced sub-arrays, to the extent that the left side has no elements and all the elements greater than the pivot end up on the right side, and if we keep on getting such unbalanced sub-arrays, the running time is the worst case, which is **O(n²)**.

Whereas if **partitioning** leads to almost equal sub-arrays, the running time is the best, with *time complexity* of __O(n log n)__.

- *Worst Case Time Complexity [Big-O]:* **O(n²)**
- *Best Case Time Complexity [Big-omega]:* **O(n log n)**
- *Average Time Complexity [Big-theta]:* **O(n log n)**
- *Space Complexity:* **O(log n)**

__Space required by quick sort is very small__: only O(log n) additional space is needed on average for the recursion stack. Quick sort is not a stable sorting technique, so it might change the relative order of two equal elements in the list while sorting.

**3. Explain Divide and Conquer Technique. Design a recursive algorithm for binary search.**

ANS:

In the *divide and conquer* approach, the problem at hand is divided into smaller sub-problems, and then each sub-problem is solved independently. When we keep on dividing the sub-problems into even smaller sub-problems, we may eventually reach a stage where no more division is possible. Those "atomic" smallest possible sub-problems are solved. The solutions of all sub-problems are finally merged in order to obtain the solution of the original problem.

Broadly, we can understand divide-and-conquer approach in a three-step process:

__Divide/Break__

This step involves breaking the problem into smaller sub-problems. Sub-problems should represent a part of the original problem. This step generally takes a recursive approach to divide the problem until no sub-problem is further divisible. At this stage, sub-problems become atomic in nature but still represent some part of the actual problem.

__Conquer/Solve__

This step receives a lot of smaller sub-problems to be solved. Generally, at this level, the problems are considered 'solved' on their own.

__Merge/Combine__

When the smaller sub-problems are solved, this stage recursively combines them until they formulate a solution of the original problem. This algorithmic approach works recursively, and the conquer and merge steps work so closely together that they appear as one.

Binary search is a fast search algorithm with a *run-time complexity* of **Ο(log n)**. This search algorithm works on the principle of divide and conquer. For this algorithm to work properly, *the data collection should be in sorted form*.

Search a sorted array by repeatedly dividing the search interval in half. Begin with an interval covering the whole array. If the value of the search key is less than the item in the middle of the interval, narrow the interval to the lower half. Otherwise narrow it to the upper half. Repeatedly check until the value is found or the interval is empty.

**Example:**

*(Figure: Binary Search, divide and conquer approach)*

__Algorithm__

- Compare x with the middle element.
- If x matches the middle element, return the mid index.
- Else, if x is greater than the mid element, then x can only lie in the right half subarray after the mid element, so recur for the right half.
- Else (x is smaller), recur for the left half.
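The recursive algorithm above can be sketched as follows (returning -1 when the interval becomes empty is an assumed convention):

```python
def binary_search(arr, x, low, high):
    """Recursive binary search; arr must be sorted. Returns index of x, or -1."""
    if low > high:
        return -1                                    # interval is empty: x not present
    mid = (low + high) // 2
    if arr[mid] == x:                                # x matches the middle element
        return mid
    if x > arr[mid]:                                 # x can only lie in the right half
        return binary_search(arr, x, mid + 1, high)
    return binary_search(arr, x, low, mid - 1)       # else recur for the left half

arr = [10, 14, 19, 26, 27, 31, 33, 35, 42, 44]
print(binary_search(arr, 31, 0, len(arr) - 1))  # 5
```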

**4. Define Hashing Technique with help of an example.**

ANS:


**Hash Table** uses an array as a storage medium and uses a hash technique to generate an index where an element is to be inserted or located.

**Hashing** is the process of mapping a large amount of data items to a smaller table with the help of a hashing function. It is also known as a *Hashing Algorithm* or *Message Digest Function*.

- It is a technique to convert a range of key values into a range of indexes of an array.
- It is used to facilitate the next level of searching when compared with linear or binary search.
- Hashing allows updating and retrieving any data entry in constant time **O(1)**.
- Constant time O(1) means the operation does not depend on the size of the data.
- Hashing is used with a database to enable items to be retrieved more quickly.
- It is used in the encryption and decryption of digital signatures.

*(Figure: Hash function)*

A fixed process that converts a key to a hash key is known as a **Hash Function**.

- This function takes a key and maps it to a value of a certain length, which is called a hash value or hash.
- The hash value represents the original string of characters, but it is normally smaller than the original.
- In digital signatures, the hash value and the signature are sent to the receiver. The receiver uses the same hash function to generate the hash value and then compares it to the one received with the message.
- If the hash values are the same, the message was transmitted without errors.

__Example__

We have the integer items *{26, 70, 18, 31, 54, 93}*. One common method of determining a hash key is the division method of hashing, and the formula is:

*Hash Key = Key Value % Number of Slots in the Table*

The division method (or remainder method) takes an item, divides it by the table size, and returns the remainder as its hash value.
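A quick sketch of the division method, assuming a table with 10 slots (the slot count matching the 40 % 10 example used for linear probing below):

```python
TABLE_SIZE = 10  # assumed number of slots in the table

def hash_key(key, size=TABLE_SIZE):
    """Division method: Hash Key = Key Value % Number of Slots in the Table."""
    return key % size

items = [26, 70, 18, 31, 54, 93]
print({item: hash_key(item) for item in items})
# {26: 6, 70: 0, 18: 8, 31: 1, 54: 4, 93: 3}
```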

__Linear Probing__

- Continuing the above example, if we insert the next item 40 into our collection, it would have a hash value of 0 (40 % 10 = 0). But 70 also had a hash value of 0, and that becomes a problem. This problem is called a Collision or Clash. Collisions create a problem for the hashing technique.
- Linear probing is used for resolving collisions in hash tables, data structures for maintaining a collection of key-value pairs.
- It is a component of the open addressing scheme for using a hash table to solve the dictionary problem.

The simplest method is called Linear Probing. The formula to compute the next probe position is:

*P = (P + 1) % Table_Size*

__Separate Chaining__

In this technique we attach a Linked List to each index, also known as a Bucket, to store the values.
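A minimal sketch of insertion with linear probing, assuming a 10-slot table and the item sequence from the example (40 collides with 70 at slot 0 and probes forward):

```python
TABLE_SIZE = 10  # assumed number of slots

def insert_linear_probing(table, key):
    """Insert key at key % TABLE_SIZE, probing P = (P + 1) % TABLE_SIZE on collision."""
    p = key % TABLE_SIZE
    while table[p] is not None:      # collision: slot already occupied
        p = (p + 1) % TABLE_SIZE     # probe the next slot, wrapping around
    table[p] = key

table = [None] * TABLE_SIZE
for key in [26, 70, 18, 31, 54, 93, 40]:
    insert_linear_probing(table, key)
print(table)  # [70, 31, 40, 93, 54, None, 26, None, 18, None]
```

Note that 40 hashes to 0 (occupied by 70), probes slot 1 (occupied by 31), and lands in slot 2.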

**5. Explain various Graph Traversal Techniques. Write DFS Algorithm and analyze running time algorithm.**
ANS:

Processing a graph requires the ability to traverse the graph. Traversing a graph is similar to traversing a binary tree, except that traversing a graph is a bit more complicated. Graphs can also be undirected or directed, cyclic or acyclic (mostly directed), or weighted.

__Traversing a graph__

Two algorithms are generally used for the traversal of a graph:

- Depth first search (DFS) and,
- Breadth first search (BFS).

**Depth-first Search (DFS)** is an algorithm for searching a graph or tree data structure. The algorithm starts at the root (top) node of a tree, goes as far as it can down a given branch (path), then backtracks until it finds an unexplored path, and then explores it. The algorithm does this until the entire graph has been explored.

__DFS follows the following Algorithm:__

- Select an unvisited node s, visit it, and treat it as the current node.
- Find an unvisited neighbor of the current node, visit it, and make it the new current node.
- If the current node has no unvisited neighbors, backtrack to its parent and make that the new current node. Repeat the above two steps until no more nodes can be visited.
- If there are still unvisited nodes, repeat from step 1.

*(Figure: DFS traversal)*
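The algorithm above can be sketched recursively, with the adjacency-list graph below chosen purely for illustration:

```python
def dfs(graph, node, visited=None):
    """Recursive DFS: go as deep as possible, then backtrack."""
    if visited is None:
        visited = []
    visited.append(node)                     # visit the current node
    for neighbor in graph[node]:             # find an unvisited neighbor
        if neighbor not in visited:
            dfs(graph, neighbor, visited)    # make it the new current node
    return visited                           # returning here is the backtrack step

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C', 'E']
```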

__DFS Time Complexity:__

- Every node is visited once. Also, every edge (x, y) is "crossed" twice: once when node y is checked from x to see if it is visited (if not visited, then y would be visited from x), and once more when we backtrack from y to x.
- Therefore, the time of DFS is **O(n+|E|)**.
- If the graph is connected, the time is O(|E|) because the graph has at least n−1 edges, so n+|E| ≤ 2|E|+1, implying that n+|E| is O(|E|).

**Breadth First Search (BFS)** traverses a graph in a breadth-ward motion and uses a queue to remember where to get the next vertex to start a search when a dead end occurs in any iteration.

__BFS follows the following Algorithm:__

- Select an unvisited node s, visit it, and make it the root of the BFS tree being formed. Its level is called the current level.
- From each node x in the current level, in the order in which the level's nodes were visited, visit all the unvisited neighbors of x. The newly visited nodes from this level form a new level that becomes the next current level.
- Repeat the previous step until no more nodes can be visited.
- If there are still unvisited nodes, repeat from Step 1.

*(Figure: BFS traversal)*
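A queue-based sketch of the BFS steps above, on the same illustrative graph used for DFS:

```python
from collections import deque

def bfs(graph, start):
    """BFS: visit nodes level by level, using a queue to remember the next vertex."""
    visited = [start]
    queue = deque([start])                   # nodes whose neighbors are still unexplored
    while queue:
        x = queue.popleft()
        for neighbor in graph[x]:            # visit all unvisited neighbors of x
            if neighbor not in visited:
                visited.append(neighbor)
                queue.append(neighbor)       # newly visited nodes form the next level
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['E'], 'D': [], 'E': []}
print(bfs(graph, 'A'))  # ['A', 'B', 'C', 'D', 'E']
```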

__BFS Time Complexity:__

- Every node is visited once. Also, every edge (x, y) is "crossed" once, when node y is checked from x to see if it is visited (if not visited, then y would be visited from x).
- Therefore, the time of BFS is **O(n+|E|)**.