Few Shot Learning
**Few Shot Learning: A New Approach** ---- in essence, a novel parameter updating method intended to replace the backpropagation algorithm.
Let's begin with the math: let $x$ be the input, $y \leftarrow f(x \mid \theta)$ be the inference model, and $y$ the output of the AI system with parameters $\theta$.
Let $R$ be the reward, given by $R \leftarrow \psi(y, \hat{y})$, meaning that $R$ is derived from $y$ and $\hat{y}$; normally we treat $\psi$ as a reward function, which is fixed.
Let $\langle R, E, A \rangle$ be the **Reward-Environment-Actor** triplet, similar to the one described in reinforcement learning.
$\Delta\theta \leftarrow g(\langle R, E, A \rangle \mid \eta)$ is the gradient generator.
$g(\cdot)$ becomes the core issue in training an ANN, since I believe it is the **GRADIENTS** that **SHAPE** our model.
A more flexible and delicate gradient generation algorithm deserves further study.
The parameter updating equation remains the same: $\theta_{t+1} \leftarrow \theta_t + \Delta\theta_t$.
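
To make the setup concrete, here is a minimal sketch of one update step in Python. The names `f`, `psi`, `g`, `theta`, and `eta` mirror the symbols above, but their concrete forms (a toy linear model, a squared-error reward, and a linear gradient generator over a flattened $\langle R, E, A \rangle$ feature vector) are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = rng.normal(size=4)        # model parameters theta (toy size, assumed)
eta = rng.normal(size=(4, 6))     # meta parameters eta of the gradient generator (assumed shape)

def f(x, theta):
    """Inference model y <- f(x | theta): a toy linear model."""
    return float(x @ theta)

def psi(y, y_hat):
    """Reward R <- psi(y, y_hat): a fixed reward function (negative squared error)."""
    return -(y - y_hat) ** 2

def g(triplet, eta):
    """Gradient generator Delta_theta <- g(<R, E, A> | eta).
    The triplet is flattened into a feature vector and mapped linearly by eta."""
    features = np.concatenate([np.atleast_1d(part).ravel() for part in triplet])
    return eta @ features

# One step: observe (x_t, y_hat_t), compute the reward, generate the update.
x_t, y_hat_t = rng.normal(size=4), 1.0
y_t = f(x_t, theta)                                   # actor output (A)
R_t = psi(y_t, y_hat_t)                               # reward (R)
E_t = x_t                                             # environment observation (E), here just the input
delta_theta = g((R_t, E_t, np.atleast_1d(y_t)), eta)  # Delta theta_t
theta = theta + delta_theta                           # theta_{t+1} <- theta_t + Delta theta_t
```

Note that no backward pass is taken anywhere; the update direction comes entirely from $g(\cdot \mid \eta)$.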
Thus we have derived a novel gradient generator with parameters $\eta$, the meta settings of the AI learning system, which makes this a **META LEARNING** problem.
For a meta-learning task, it is necessary to consider the generalizability of the model, i.e., how robustly its performance transfers between datasets or data splits.
Now we obtain the overall optimization objective for transferring between data splits $\mathbb{D}_1$ and $\mathbb{D}_2$ as follows,
$$ \arg\max_{\eta} \quad \psi\big(f(x_{t+1} \mid \theta_t + g(\langle R, E, A \rangle \mid \eta)),\ \hat{y}_{t+1}\big), \quad \forall x_t \in \mathbb{D}_1,\ x_{t+1} \in \mathbb{D}_2 $$
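
Below is a minimal sketch of this outer objective, reusing the toy `f`, `psi`, `g`, `theta`, and `eta` from the previous snippet: the inner loop applies updates generated by $g(\cdot \mid \eta)$ on $\mathbb{D}_1$, the objective scores the resulting $\theta$ on $\mathbb{D}_2$, and $\eta$ is tuned by a naive random search, which is only one possible stand-in for the $\arg\max$ over $\eta$; the text does not prescribe an outer optimizer, and the data splits are synthetic.

```python
def meta_objective(eta, theta_0, D1, D2):
    """Total reward on D2 after updating theta with g(.|eta) along D1."""
    theta = theta_0.copy()
    for x_t, y_hat_t in D1:                                  # inner updates on split D1
        y_t = f(x_t, theta)
        triplet = (psi(y_t, y_hat_t), x_t, np.atleast_1d(y_t))
        theta = theta + g(triplet, eta)
    # Evaluate psi(f(x_{t+1} | theta), y_hat_{t+1}) over split D2.
    return sum(psi(f(x, theta), y_hat) for x, y_hat in D2)

# Toy data splits generated by the same hidden linear target (hypothetical data).
w_true = rng.normal(size=4)
make_split = lambda n: [(x, float(x @ w_true)) for x in rng.normal(size=(n, 4))]
D1, D2 = make_split(32), make_split(32)

# Naive random-search outer loop standing in for argmax over eta.
best_eta, best_score = eta, meta_objective(eta, theta, D1, D2)
for _ in range(200):
    candidate = best_eta + 0.05 * rng.normal(size=best_eta.shape)
    score = meta_objective(candidate, theta, D1, D2)
    if score > best_score:
        best_eta, best_score = candidate, score
```

The key design point is that the outer search only ever evaluates transfer reward on $\mathbb{D}_2$, so $\eta$ is selected for how well the generated gradients generalize across splits rather than for fitting $\mathbb{D}_1$.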
-
Original article: https://www.cnblogs.com/thisisajoke/p/11287111.html