Dynamic Programming Introduction

    Optimal substructure

    In computer science, a problem is said to have optimal substructure if an optimal solution can be constructed efficiently from optimal solutions of its subproblems. This property is used to determine the usefulness of dynamic programming and greedy algorithms for a problem.

    Typically, a greedy algorithm is used to solve a problem with optimal substructure if it can be proved by induction that the greedy choice is optimal at each step. Otherwise, provided the problem also exhibits overlapping subproblems, dynamic programming is used. If there is no appropriate greedy algorithm and the problem fails to exhibit overlapping subproblems, a lengthy but straightforward search of the solution space is often the best alternative.
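
    A small sketch (an illustrative addition, not from the original text) makes the contrast concrete: with the hypothetical coin set {1, 3, 4}, the greedy choice of always taking the largest coin fails, while a dynamic programming table finds the true minimum.

        # Coin change: make `amount` with the fewest coins.
        # With coins {1, 3, 4}, greedy (largest coin first) is NOT optimal,
        # so the problem calls for dynamic programming.

        def greedy_coins(amount, coins=(4, 3, 1)):
            """Repeatedly take the largest coin that still fits."""
            count = 0
            for c in coins:
                count += amount // c
                amount %= c
            return count

        def dp_coins(amount, coins=(1, 3, 4)):
            """Fewest coins via a bottom-up dynamic programming table."""
            best = [0] + [float("inf")] * amount
            for total in range(1, amount + 1):
                for c in coins:
                    if c <= total:
                        best[total] = min(best[total], best[total - c] + 1)
            return best[amount]

        print(greedy_coins(6))  # 3 coins (4 + 1 + 1)
        print(dp_coins(6))      # 2 coins (3 + 3)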

    Overlapping subproblems

    In computer science, a problem is said to have overlapping subproblems if it can be broken down into subproblems that are reused several times, or if a recursive algorithm for the problem solves the same subproblem over and over rather than always generating new subproblems.

    For example, the problem of computing the Fibonacci sequence exhibits overlapping subproblems. The problem of computing the nth Fibonacci number F(n) can be broken down into the subproblems of computing F(n − 1) and F(n − 2) and then adding the two. The subproblem of computing F(n − 1) itself involves computing F(n − 2), so the computation of F(n − 2) is reused, and the Fibonacci sequence thus exhibits overlapping subproblems.
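
    A naive recursive implementation makes the reuse visible; the call counter below is an illustrative addition, showing how often the same subproblem is recomputed.

        from collections import Counter

        calls = Counter()

        def fib(n):
            """Naive recursion: the same F(k) is recomputed many times."""
            calls[n] += 1
            if n < 2:
                return n
            return fib(n - 1) + fib(n - 2)

        print(fib(10))   # 55
        print(calls[2])  # F(2) alone is computed 34 times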

    A naive recursive approach to such a problem generally fails because of its exponential time complexity. If the problem also exhibits optimal substructure, dynamic programming is a good way to work it out.
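
    A minimal sketch of that remedy, using the same F(n) definition as above: memoize the recursion so each subproblem is solved only once.

        memo = {0: 0, 1: 1}

        def fib(n):
            """Memoized recursion: each F(k) is computed at most once."""
            if n not in memo:
                memo[n] = fib(n - 1) + fib(n - 2)
            return memo[n]

        print(fib(100))  # 354224848179261915075, in O(n) recursive calls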

    Dynamic programming

    In mathematics, computer science, economics, and bioinformatics, dynamic programming is a method for solving complex problems by breaking them down into simpler subproblems. It is applicable to problems exhibiting the properties of overlapping subproblems and optimal substructure. When applicable, the method takes far less time than naive methods that don't take advantage of the subproblem overlap (like depth-first search).

    The idea behind dynamic programming is quite simple. In general, to solve a given problem we need to solve different parts of the problem (subproblems), then combine the solutions of the subproblems to reach an overall solution. A more naive method often generates and solves many of the subproblems several times over. The dynamic programming approach seeks to solve each subproblem only once, thus reducing the number of computations: once the solution to a given subproblem has been computed, it is stored ("memoized"), and the next time the same solution is needed it is simply looked up. This approach is especially useful when the number of repeated subproblems grows exponentially as a function of the size of the input.
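
    The same table of solved subproblems can also be filled bottom-up, avoiding recursion entirely; this iterative Fibonacci variant (a sketch, not from the original text) still solves each subproblem exactly once.

        def fib(n):
            """Bottom-up DP: compute F(0)..F(n) once each, in order."""
            if n < 2:
                return n
            prev, curr = 0, 1  # F(0), F(1)
            for _ in range(n - 1):
                prev, curr = curr, prev + curr
            return curr

        print(fib(100))  # same answer, O(n) time and O(1) space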

    Dynamic programming algorithms are used for optimization (for example, finding the shortest path between two points, or the fastest way to multiply many matrices). A dynamic programming algorithm examines the previously solved subproblems and combines their solutions to give the best solution for the given problem. There are many alternatives, such as a greedy algorithm, which picks the locally optimal choice at each branch in the road. The locally optimal choice may be a poor choice for the overall solution; while a greedy algorithm does not guarantee an optimal solution, it is often faster to compute. Fortunately, some greedy algorithms (such as those for minimum spanning trees) are proven to lead to the optimal solution.
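
    The matrix-multiplication example mentioned above can be sketched as the classic matrix-chain-order DP (the function below is illustrative, not the author's code):

        def matrix_chain_cost(dims):
            """Minimum scalar multiplications to multiply a matrix chain.

            Matrix i has dimensions dims[i] x dims[i + 1], so a chain of
            n matrices is described by n + 1 numbers.
            """
            n = len(dims) - 1
            # cost[i][j]: cheapest way to multiply matrices i..j inclusive
            cost = [[0] * n for _ in range(n)]
            for length in range(2, n + 1):        # chain length
                for i in range(n - length + 1):
                    j = i + length - 1
                    cost[i][j] = min(
                        cost[i][k] + cost[k + 1][j]
                        + dims[i] * dims[k + 1] * dims[j + 1]
                        for k in range(i, j)
                    )
            return cost[0][n - 1]

        # (10x30)(30x5)(5x60): multiplying (AB) first costs
        # 10*30*5 + 10*5*60 = 4500, versus 27000 for A(BC).
        print(matrix_chain_cost([10, 30, 5, 60]))  # 4500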

    For example, let's say that you have to get from point A to point B as fast as possible, in a given city, during rush hour. A dynamic programming algorithm will look at finding the shortest paths to points close to A, and use those solutions to eventually find the shortest path to B. On the other hand, a greedy algorithm will start you driving immediately and will pick the road that looks the fastest at every intersection. As you can imagine, this strategy might not lead to the fastest arrival time, since you might take some "easy" streets and then find yourself hopelessly stuck in a traffic jam.

    Sometimes, applying memoization to a naive recursive solution already results in an optimal dynamic programming solution; however, many problems require more sophisticated dynamic programming algorithms. Some of these may also be recursive, but parametrized differently from the naive solution. Others are more complicated still and cannot be implemented as a recursive function with memoization. Examples of these are the two solutions to the Egg Dropping puzzle below.
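
    For flavor, here is a minimal memoized sketch of the standard Egg Dropping recurrence; it is an illustrative addition standing in for the fuller solutions, not the original post's code.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def min_trials(eggs, floors):
            """Worst-case drops needed to find the critical floor."""
            if floors == 0:
                return 0
            if eggs == 1:
                return floors  # must test floor by floor
            return 1 + min(
                max(min_trials(eggs - 1, x - 1),   # egg breaks at x
                    min_trials(eggs, floors - x))  # egg survives x
                for x in range(1, floors + 1)
            )

        print(min_trials(2, 100))  # 14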
