• Artificial Intelligence


    Artificial intelligence draws on: philosophy, mathematics, economics, psychology, neuroscience, computer engineering, control theory and cybernetics, and linguistics.

    The main approaches to AI: cybernetics and brain simulation, symbolic vs. sub-symbolic, logic-based vs. anti-logic, symbolism vs. connectionism, statistical methods, and the intelligent-agent paradigm.

    An agent is anything that perceives its environment and acts upon it.

    Typical agents: the simple reflex agent, the model-based reflex agent, the goal-based agent, the utility-based agent, and the learning agent.

    A solution to a problem is a sequence of actions, and search is the process of looking for the actions that reach the goal.

    Uninformed search is also called blind search; its typical algorithms are breadth-first, uniform-cost, and depth-first search.
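
    As a quick illustration of uninformed search, here is a minimal breadth-first search sketch over an explicitly given graph; the graph, node names, and start/goal below are made up for the example.

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a list of nodes from start to goal, or None if the goal is unreachable."""
    frontier = deque([start])            # FIFO queue of nodes waiting to be expanded
    parents = {start: None}              # also serves as the explored set
    while frontier:
        node = frontier.popleft()
        if node == goal:                 # goal found: walk the parent links back to start
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for neighbor in graph.get(node, []):
            if neighbor not in parents:
                parents[neighbor] = node
                frontier.append(neighbor)
    return None

# Toy graph given as adjacency lists (hypothetical data).
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["E"]}
print(breadth_first_search(graph, "A", "E"))   # ['A', 'B', 'D', 'E']
```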

    Informed search is also known as heuristic search: best-first search expands nodes according to an evaluation function; its special cases are greedy search and A* search.
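
    A minimal A* sketch on a grid, assuming a 4-connected grid with unit step costs and a Manhattan-distance heuristic; the grid size and coordinates are illustrative. Dropping the g term from the priority would turn this into greedy best-first search.

```python
import heapq

def a_star(start, goal, passable):
    """A* on a 4-connected grid; passable(cell) says whether a cell may be entered."""
    def h(cell):                               # admissible Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    frontier = [(h(start), 0, start)]          # priority queue of (f = g + h, g, cell)
    best_g = {start: 0}
    while frontier:
        f, g, cell = heapq.heappop(frontier)
        if cell == goal:
            return g                           # cost of the cheapest path found
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if passable(nxt) and g + 1 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt))
    return None

# Toy 5x5 open grid: every cell inside the bounds is passable.
inside = lambda c: 0 <= c[0] < 5 and 0 <= c[1] < 5
print(a_star((0, 0), (4, 4), inside))          # 8
```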

    Time and space complexity are key criteria for evaluating a search algorithm.

    --------------------------------

    Local search: hill climbing operates on complete-state formulations; local beam search keeps track of k states; tabu search performs neighborhood search subject to constraints.
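
    A minimal hill-climbing sketch on a one-dimensional objective; the objective function, step size, and starting point are made up for illustration.

```python
def hill_climbing(objective, x0, step=0.1, max_iters=1000):
    """Move to the best neighbor until no neighbor improves the objective (a local maximum)."""
    x = x0
    for _ in range(max_iters):
        neighbors = [x - step, x + step]
        best = max(neighbors, key=objective)
        if objective(best) <= objective(x):   # no uphill neighbor: stop
            return x
        x = best
    return x

# Toy objective with a single maximum at x = 2 (hypothetical).
f = lambda x: -(x - 2.0) ** 2
print(round(hill_climbing(f, x0=0.0), 2))      # 2.0
```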

    Optimization and evolutionary algorithms: simulated annealing approximates the global optimum in a large search space; the genetic algorithm mimics the evolutionary process of natural selection.
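
    A minimal simulated-annealing sketch minimizing a one-dimensional function; the cost function, neighbor range, and geometric cooling schedule are illustrative choices, not part of the original text.

```python
import math
import random

def simulated_annealing(cost, x0, t0=1.0, cooling=0.995, steps=5000):
    """Accept worse moves with probability exp(-delta / T), so the search can escape local minima."""
    x, t = x0, t0
    for _ in range(steps):
        candidate = x + random.uniform(-0.5, 0.5)          # random neighbor
        delta = cost(candidate) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
        t *= cooling                                        # cool down
    return x

# Toy cost with several local minima; its global minimum lies near x = -0.5.
cost = lambda x: x * x + 3 * math.sin(3 * x)
print(simulated_annealing(cost, x0=4.0))                    # usually close to the global minimum
```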

    Swarm intelligence: ant colony optimization is suited to problems that can be reduced to finding good paths through graphs; particle swarm optimization works by iteratively trying to improve a candidate solution.
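
    A minimal particle swarm optimization sketch minimizing a 2-D sphere function; the swarm size and the inertia/attraction coefficients are common textbook-style values chosen for illustration.

```python
import random

def pso(cost, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Each particle is pulled toward its personal best and the swarm's global best position."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # personal best positions
    gbest = min(pbest, key=cost)                 # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if cost(pos[i]) < cost(pbest[i]):
                pbest[i] = pos[i][:]
                if cost(pbest[i]) < cost(gbest):
                    gbest = pbest[i][:]
    return gbest

sphere = lambda p: sum(x * x for x in p)         # global minimum at the origin
print(pso(sphere))                               # a point close to (0, 0)
```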

    ----------------------------------------

    The minimax algorithm selects optimal moves by a depth-first enumeration of the game tree.
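
    A minimal minimax sketch over an explicitly given game tree, where leaves are numeric payoffs for MAX and internal nodes are lists of children; the small three-ply tree below is just an example.

```python
def minimax(node, maximizing=True):
    """Depth-first evaluation of a game tree; leaves are numbers, internal nodes are lists."""
    if not isinstance(node, list):             # leaf: return its utility for MAX
        return node
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# Example tree: MAX chooses among three MIN nodes.
tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(minimax(tree))                           # 3
```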

    The alpha-beta algorithm achieves much greater efficiency by pruning irrelevant subtrees.
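
    The same kind of tree evaluated with alpha-beta pruning, a sketch showing where branches are cut off once they can no longer affect the final decision.

```python
def alphabeta(node, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Minimax with alpha-beta pruning; stop expanding a node once alpha >= beta."""
    if not isinstance(node, list):             # leaf
        return node
    if maximizing:
        value = float("-inf")
        for child in node:
            value = max(value, alphabeta(child, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:                  # remaining children cannot change the result
                break
        return value
    value = float("inf")
    for child in node:
        value = min(value, alphabeta(child, alpha, beta, True))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
print(alphabeta(tree))                         # 3, with fewer leaves examined than plain minimax
```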

    Heuristic evaluation functions are useful for making imperfect, real-time decisions in games.

    A stochastic game is a dynamic game with probabilistic transitions.

    Monte-Carlo tree search combines Monte-Carlo simulation with game-tree search.
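
    As a highly simplified sketch of the simulation half of MCTS, the following flat Monte-Carlo player picks the move whose random playouts win most often in a toy Nim game (take 1-3 stones per turn, the player who takes the last stone wins). A full MCTS would additionally grow a search tree and guide the playouts with statistics such as UCB; the game and playout counts here are assumptions for the example.

```python
import random

def playout(stones, my_turn):
    """Play uniformly random moves until the pile is empty; return True if 'we' took the last stone."""
    while True:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return my_turn
        my_turn = not my_turn

def monte_carlo_move(stones, n_playouts=2000):
    """Flat Monte-Carlo: choose the move whose random playouts win most often."""
    best_move, best_rate = None, -1.0
    for take in range(1, min(3, stones) + 1):
        remaining = stones - take
        wins = sum(
            1 if remaining == 0 else playout(remaining, my_turn=False)
            for _ in range(n_playouts)
        )
        if wins / n_playouts > best_rate:
            best_move, best_rate = take, wins / n_playouts
    return best_move

print(monte_carlo_move(5))   # almost always 1: leaving 4 stones is a losing position for the opponent
```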

    ----------------------------

    CSPs represent a state with a set of variable/value pairs and represent the conditions of a solution by a set of constraints on the variables.

    Node, arc, path, and k-consistency use constraints to infer which variable/value pairs are consistent.

    Backtracking search, and local search using the min-conflicts heuristic, are applied to CSPs.
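
    A minimal backtracking-search sketch for map coloring, a classic CSP, where the only constraint is that neighboring regions get different colors; the tiny four-region map is invented for the example.

```python
def backtracking(assignment, variables, domains, neighbors):
    """Assign variables one at a time; undo the assignment (backtrack) when no value is consistent."""
    if len(assignment) == len(variables):
        return assignment
    var = next(v for v in variables if v not in assignment)
    for value in domains[var]:
        if all(assignment.get(n) != value for n in neighbors[var]):   # consistency check
            assignment[var] = value
            result = backtracking(assignment, variables, domains, neighbors)
            if result is not None:
                return result
            del assignment[var]                                        # backtrack
    return None

# Toy map-coloring instance: four regions, three colors, edges mean "must differ".
variables = ["A", "B", "C", "D"]
domains = {v: ["red", "green", "blue"] for v in variables}
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"], "C": ["A", "B", "D"], "D": ["B", "C"]}
print(backtracking({}, variables, domains, neighbors))
# {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
```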

    Cutset conditioning and tree decomposition can be used to reduce a constraint graph to a tree structure.

    --------------------------

    Knowledge representation captures information. Its typical methods are semantic networks, first-order logic, production systems, ontologies, and Bayesian networks.

    Ontological engineering studies the methods and methodologies for building ontologies.

    Uncertain knowledge can be handled by probability theory, utility theory, and decision theory.

    Bayesian networks can represent essentially any full joint probability distribution, and in many cases can do so very concisely.
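
    A minimal sketch of how a Bayesian network factors the full joint distribution, using the well-known burglary/earthquake/alarm example (the conditional probability values below are the ones commonly quoted for that textbook example): the probability of a complete assignment is the product of each variable's conditional probability given its parents.

```python
# Conditional probability tables for the burglary network.
P_B = {True: 0.001, False: 0.999}                 # P(Burglary)
P_E = {True: 0.002, False: 0.998}                 # P(Earthquake)
P_A_true = {(True, True): 0.95, (True, False): 0.94,
            (False, True): 0.29, (False, False): 0.001}   # P(Alarm=T | Burglary, Earthquake)
P_J_true = {True: 0.90, False: 0.05}              # P(JohnCalls=T | Alarm)
P_M_true = {True: 0.70, False: 0.01}              # P(MaryCalls=T | Alarm)

def joint(b, e, a, j, m):
    """P(b, e, a, j, m) = P(b) * P(e) * P(a | b, e) * P(j | a) * P(m | a)."""
    p = P_B[b] * P_E[e]
    p *= P_A_true[(b, e)] if a else 1 - P_A_true[(b, e)]
    p *= P_J_true[a] if j else 1 - P_J_true[a]
    p *= P_M_true[a] if m else 1 - P_M_true[a]
    return p

# Probability that both neighbors call, the alarm sounds, but there is no burglary and no earthquake:
# five small tables replace a 2^5-entry joint table.
print(joint(b=False, e=False, a=True, j=True, m=True))    # about 0.00063
```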

    -----------------------------

    Classical planning is the simplest form of planning.

    Planning graphs, Boolean satisfiability, first-order logical deduction, constraint satisfaction, and plan refinement can all be used to solve classical planning problems.

    Planning and acting in the real world are more complex.

    The representation language and the way the agent interacts with its environment must be extended.

    For a decision-theoretic planning problem, a Markov decision process and dynamic programming can be used to formulate and solve it.
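
    A minimal value-iteration sketch, the dynamic-programming solution for an MDP with known transition probabilities and rewards; the two-state "poor/rich" MDP, its actions, and its rewards are invented purely for illustration.

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, eps=1e-6):
    """Repeatedly apply the Bellman optimality update until the value function converges."""
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (reward(s, a, s2) + gamma * V[s2])
                    for s2, p in transition(s, a).items())
                for a in actions
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

# Toy MDP: "work" may move you from "poor" to "rich"; landing in "rich" pays a reward.
states, actions = ["poor", "rich"], ["work", "rest"]

def transition(s, a):
    if s == "poor":
        return {"rich": 0.3, "poor": 0.7} if a == "work" else {"poor": 1.0}
    return {"rich": 0.8, "poor": 0.2} if a == "work" else {"rich": 0.5, "poor": 0.5}

def reward(s, a, s2):
    return 10.0 if s2 == "rich" else 0.0

print(value_iteration(states, actions, transition, reward))
```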

    -------------------------------

    Machine learning studies algorithms that can learn from, and make predictions on, data.

    The different perspectives aim to provide a taxonomy of machine-learning algorithms, so that machine learning is easier to understand.

    Three perspectives on machine learning are proposed in this chapter: learning tasks, learning paradigms, and learning models.

    --------------------------------

    Learning tasks are the general problems that can be solved with machine learning; each task can be achieved by various algorithms rather than by one specific algorithm.

    The typical tasks in machine learning include: classification, regression, clustering, ranking, density estimation, and dimensionality reduction.
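
    As a tiny illustration of the classification task, here is a 1-nearest-neighbor classifier written from scratch; the two-dimensional points and the "cat"/"dog" labels are made-up toy data.

```python
def nearest_neighbor(train, query):
    """Predict the label of the training point closest to the query (1-NN classification)."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    point, label = min(train, key=lambda item: dist2(item[0], query))
    return label

# Toy training set: points near (0, 0) are labeled "cat", points near (5, 5) are labeled "dog".
train = [((0.0, 0.5), "cat"), ((1.0, 0.0), "cat"), ((5.0, 4.5), "dog"), ((4.0, 5.0), "dog")]
print(nearest_neighbor(train, (0.8, 0.4)))   # cat
print(nearest_neighbor(train, (4.6, 4.8)))   # dog
```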

    -------------------------------------------

    The paradigms of machine learning distinguish the different prototypes of learning. A learning paradigm characterizes one prototype of learning, based on the learning experience or on the interaction with the environment.

    The point of studying the categories of paradigms is that, to get good results on a learning task, it prompts you to choose a suitable paradigm; conversely, knowing the categories of learning paradigms deepens your understanding of the learning tasks.

    This chapter focuses on three representative paradigms:

      - Supervised learning: a learning-by-examples paradigm

      - Unsupervised learning: a learning-by-itself paradigm

      - Reinforcement learning: an online-learning paradigm.

    In addition, several other paradigms are briefly introduced:

      - Ensemble learning: combines a large number of weak learners into one strong learner

      - Learning to learn: learns its own inductive bias based on previous experience

      - Transfer learning: focuses on knowledge already learned and applies it to a different but related problem

      - Adversarial learning: generates data that follows some distribution in an adversarial manner (i.e., a zero-sum game)

      - Collaborative learning: obtains the desired result in a non-adversarial, collaborative manner (i.e., a win-win)

    ---------------------------------------------------------------------

  • Original post: https://www.cnblogs.com/liuqifeng/p/9209946.html