**Matrix Chain Multiplication**

Given a series of matrices $A_1, A_2, \ldots, A_n$, we wish to compute the product $A_1 A_2 \cdots A_n$. In order for the problem to be well defined, we assume that the dimensions of the matrices match, i.e. $A_i$ is $p_{i-1} \times p_i$ for some positive integers $p_0, p_1, \ldots, p_n$.

Suppose we start multiplying naively from the left; then the total running time is $\sum_{i=2}^{n} p_0 p_{i-1} p_i$, since multiplying a $p \times q$ matrix by a $q \times r$ matrix with the standard algorithm costs $pqr$ scalar multiplications. However, observe that the order in which we choose to multiply might give us better or worse running times. Let us denote by $B[i][j]$ (where $i \le j$) the optimal running time for multiplying $A_i \cdots A_j$; in particular $B[i][i] = 0$. Thus the entry we are interested in is $B[1][n]$. Consider the last multiplication in the computation of $A_i \cdots A_j$ — it could be $(A_i \cdots A_k)(A_{k+1} \cdots A_j)$ for any $i \le k < j$. Thus we get the following recurrence relation:
$$B[i][j] = \min_{i \le k < j} \left( B[i][k] + B[k+1][j] + p_{i-1} p_k p_j \right).$$

**Order of filling the table**

Observe that the recurrence relation for $B[i][j]$ is written in terms of entries $B[i'][j']$ such that $j' - i' < j - i$. This gives us the order in which to fill the table, i.e. compute entries in order of increasing $j - i$.

**Running Time**

Note that we need to fill $O(n^2)$ entries. Filling the entry $B[i][j]$ requires taking the minimum over at most $n$ choices of $k$, and evaluating each candidate costs $O(1)$: we look up the already-computed entries $B[i][k]$ and $B[k+1][j]$ and multiply the appropriate matrix dimensions $p_{i-1} p_k p_j$. Thus the total running time is $O(n^2) \cdot O(n) = O(n^3)$.
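The table-filling algorithm above can be sketched as follows (the function name `matrix_chain_cost` and the convention that the dimensions are passed as a list `p` of length $n+1$ are illustrative choices, not from the notes):

```python
def matrix_chain_cost(p):
    """Minimum number of scalar multiplications to compute A_1 ... A_n,
    where A_i is p[i-1] x p[i] and p has length n + 1."""
    n = len(p) - 1
    # B[i][j] = optimal cost of multiplying A_i ... A_j (1-indexed, B[i][i] = 0).
    B = [[0] * (n + 1) for _ in range(n + 1)]
    # Fill entries in order of increasing chain length j - i.
    for length in range(1, n):
        for i in range(1, n - length + 1):
            j = i + length
            # Try every position k for the last multiplication.
            B[i][j] = min(B[i][k] + B[k + 1][j] + p[i - 1] * p[k] * p[j]
                          for k in range(i, j))
    return B[1][n]
```

For example, with `p = [10, 30, 5, 60]` (so $A_1$ is $10 \times 30$, $A_2$ is $30 \times 5$, $A_3$ is $5 \times 60$), the order $(A_1 A_2) A_3$ costs $10 \cdot 30 \cdot 5 + 10 \cdot 5 \cdot 60 = 4500$, which beats $A_1 (A_2 A_3)$ at $27000$.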

**Independent Set**

Given a vertex-weighted graph $G = (V, E)$ with weights $w \colon V \to \mathbb{R}_{\ge 0}$, we wish to find a maximum weight independent set in $G$. A set $S \subseteq V$ is called an independent set if for every $u, v \in S$, $(u, v) \notin E$. We will solve this problem for the special case when $G$ is a tree.

Note that greedy is a bad idea here. Consider a star graph on $n + 1$ vertices where the central vertex has weight $2$ and every other vertex has weight $1$. The optimal solution is the set of all $n$ leaves, of weight $n$, while greedy (repeatedly picking the heaviest available vertex) picks the center, a solution of weight $2$.

**Dynamic Programming Approach**

Let us pick a vertex $r$ as a root and think of $T$ as a rooted tree. Let $A[v]$ denote the maximum weight of an independent set in the sub-tree rooted at vertex $v$. Thus we are interested in the entry $A[r]$ where $r$ is the root of the tree. Consider the entry $A[v]$. We have two possibilities: either the vertex $v$ is in the optimal set or it is not a part of the optimal set. Thus
$$A[v] = \max\left( w(v) + \sum_{u \text{ grandchild of } v} A[u],\;\; \sum_{u \text{ child of } v} A[u] \right).$$
We fill up the entries for the leaves first, then their parents and so on. We need to fill $n$ entries, while to compute an entry we may need $O(n)$ operations, giving us an upper bound of $O(n^2)$. However this is a loose upper bound, and a more careful analysis shows that the running time is in fact $O(n)$, since each vertex of the tree is examined at most 3 times: when filling the table entry for itself, for its parent, or for its grandparent.
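A minimal sketch of this recurrence in Python, processing leaves before parents; the tree is represented as a dictionary mapping each vertex to the list of its children, and the names (`max_weight_is`, `tree`, `weight`) are illustrative assumptions:

```python
def max_weight_is(tree, weight, root):
    """tree: dict mapping each vertex to the list of its children.
    weight: dict mapping each vertex to its nonnegative weight.
    Returns the maximum weight of an independent set in the rooted tree."""
    # List vertices so that every child appears after its parent.
    order = [root]
    for v in order:
        order.extend(tree.get(v, []))
    # A[v] = max weight of an independent set in the subtree rooted at v.
    A = {}
    # Process in reverse order: leaves first, then their parents, and so on.
    for v in reversed(order):
        children = tree.get(v, [])
        grandchildren = [u for c in children for u in tree.get(c, [])]
        exclude_v = sum(A[c] for c in children)                    # v not in the set
        include_v = weight[v] + sum(A[u] for u in grandchildren)  # v in the set
        A[v] = max(include_v, exclude_v)
    return A[root]
```

On the star counterexample above (a center of weight 2 with $n$ leaves of weight 1 each), this correctly returns $n$, the total weight of the leaves, rather than the greedy answer of 2.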
