• Paper reading notes: StyleCLIP: Text-Driven Manipulation of StyleGAN Imagery


    combine CLIP with StyleGAN

    I. Introduction and related work

    1. The task CLIP is trained on: given an image, find the matching text snippet among 32,768 randomly sampled text snippets. To solve this, CLIP has to learn to recognize a wide range of visual concepts in images and associate them with text, which is why CLIP can be applied to almost any visual classification task. For example, if a dataset's task is to distinguish cats from dogs, CLIP predicts whether the image better matches the caption "a photo of a dog" or "a photo of a cat".
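
    The snippet below is a minimal sketch of this matching, assuming the openai "clip" package and a hypothetical local image file pet.jpg; it scores the two captions from the cat/dog example.

        import torch
        import clip
        from PIL import Image

        device = "cuda" if torch.cuda.is_available() else "cpu"
        model, preprocess = clip.load("ViT-B/32", device=device)

        image = preprocess(Image.open("pet.jpg")).unsqueeze(0).to(device)
        texts = clip.tokenize(["a photo of a dog", "a photo of a cat"]).to(device)

        with torch.no_grad():
            # CLIP embeds both modalities into a shared space; the forward pass
            # returns cosine similarities scaled by a learned temperature.
            logits_per_image, _ = model(image, texts)
            probs = logits_per_image.softmax(dim=-1)

        print(probs)  # e.g. [[0.97, 0.03]] -> the image matches "a photo of a dog"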

    2. Text prompt: the free-form text description that drives the manipulation.

    3. Related work on text-guided image manipulation

    Some methods [10, 31, 27] use a GAN-based encoder-decoder architecture to disentangle the semantics of both input images and text descriptions. ManiGAN [22] introduces a novel text-image combination module, which produces high-quality images.

    A concurrent work to ours, TediGAN [51], also uses StyleGAN for text-guided image generation and manipulation.

    [10] H. Dong, Simiao Yu, Chao Wu, and Y. Guo. Semantic image synthesis via adversarial learning. In Proc. ICCV, pages 5707–5715, 2017.

    [27] Yahui Liu, Marco De Nadai, Deng Cai, Huayang Li, Xavier Alameda-Pineda, N. Sebe, and Bruno Lepri. Describe what to change: A text-guided unsupervised image-to-image translation approach. In Proceedings of the 28th ACM International Conference on Multimedia, 2020.

    [31] Seonghyeon Nam, Yunji Kim, and S. Kim. Text-adaptive generative adversarial networks: Manipulating images with natural language. In NeurIPS, 2018.

    4. While most works perform image manipulations in the W or W+ spaces, Wu et al. [50] proposed to use the StyleSpace S, and showed that it is better disentangled than W and W+.

     Our latent optimizer and mapper work in the W+ space, while the input-agnostic directions that we detect are in S.

    II. Contributions

    In this work we explore three ways for text-driven image manipulation:

    1. We first introduce an optimization scheme that utilizes a CLIP-based loss to modify an input latent vector in response to a user-provided text prompt (a rough sketch follows this list).

    2. We describe a latent mapper that infers a text-guided latent manipulation step for a given input image, allowing faster and more stable text-based manipulation.

    3. Finally, we present a method for mapping a text prompt to input-agnostic directions in StyleGAN's style space, enabling interactive text-driven image manipulation.
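
    As a rough sketch of the optimization scheme (not the authors' released code): assuming a pre-trained StyleGAN generator G that maps a W+ code of shape (1, 18, 512) to an image, a source latent w_init obtained by inverting the input image, and the openai "clip" package, the loop could look as follows. The identity-preservation loss used in the paper is omitted for brevity, and the prompt and hyperparameters are placeholders.

        import torch
        import torch.nn.functional as F
        import clip

        device = "cuda"
        clip_model, _ = clip.load("ViT-B/32", device=device)
        text = clip.tokenize(["a person with blue hair"]).to(device)
        text_features = clip_model.encode_text(text).detach()

        w = w_init.clone().requires_grad_(True)   # start from the source latent
        optimizer = torch.optim.Adam([w], lr=0.1)
        lambda_l2 = 0.008                         # weight of the latent L2 term (placeholder)

        for step in range(200):
            img = G(w)                            # synthesize from the current latent
            img = F.interpolate(img, size=224)    # CLIP's visual encoder expects 224x224 inputs
            image_features = clip_model.encode_image(img)
            # CLIP loss: pull the image embedding towards the text embedding.
            clip_loss = 1 - F.cosine_similarity(image_features, text_features).mean()
            # L2 term keeps the edited latent close to the original one.
            l2_loss = ((w - w_init) ** 2).sum()
            loss = clip_loss + lambda_l2 * l2_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()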

    In short:

    Latent Optimization: uses CLIP as a loss network. This is the most versatile of the three approaches, but editing a single image takes several minutes.
    Latent Mapper: trained for a fixed text prompt. Starting from the image to be edited, the mapper infers the manipulation step that realizes the prompt and applies it to the image (see the sketch after this list).
    Global Direction: similar to method 2, but maps the text prompt to an input-agnostic direction in StyleGAN's style space, which is then used to edit the image.
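
    A rough sketch of how the latent mapper is applied at inference time; mapper (a small network trained with a CLIP loss for one fixed prompt), G, and w (the W+ code of the image being edited) are assumed to exist and are hypothetical names, not the paper's released interface.

        import torch

        alpha = 1.0                        # edit strength (assumed knob of this sketch)
        with torch.no_grad():
            delta = mapper(w)              # prompt-specific manipulation step inferred from w
            w_edit = w + alpha * delta     # apply the step in W+
            edited = G(w_edit)             # a single forward pass, much faster than per-image optimization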

     III. Method

     

     

     
