Kali Linux: Using the skipfish Web Security Scanner


    0x00. skipfish introduction

    skipfish is an open-source web application security assessment tool released by Google.

    Key features of skipfish: low CPU usage, fast scanning (it can comfortably handle 2000+ requests per second), and a low false-positive rate.
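
    On the Kali releases contemporary with this post, skipfish ships in the default repositories. A minimal sketch of checking for and installing it (the package name skipfish is an assumption; run as root, as in the prompt below):

        # Check whether skipfish is already on the PATH.
        which skipfish || echo "skipfish not found"

        # Install it from the Kali repositories if needed
        # (package name assumed to be skipfish).
        apt-get update && apt-get install -y skipfish

        # Print the banner line to confirm the installed version.
        skipfish --help 2>&1 | head -n 1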

    1x00. Using skipfish

     1x01 Help output

    root@kali:~# skipfish --help
        skipfish web application scanner - version 2.10b
        Usage: skipfish [ options ... ] -W wordlist -o output_dir start_url [ start_url2 ... ]
    
        Authentication and access options:
    
          -A user:pass      - use specified HTTP authentication credentials
          -F host=IP        - pretend that 'host' resolves to 'IP'
          -C name=val       - append a custom cookie to all requests
          -H name=val       - append a custom HTTP header to all requests
          -b (i|f|p)        - use headers consistent with MSIE / Firefox / iPhone
          -N                - do not accept any new cookies
          --auth-form url   - form authentication URL
          --auth-user user  - form authentication user
          --auth-pass pass  - form authentication password
          --auth-verify-url -  URL for in-session detection
    
        Crawl scope options:
    
          -d max_depth     - maximum crawl tree depth (16)
          -c max_child     - maximum children to index per node (512)
          -x max_desc      - maximum descendants to index per branch (8192)
          -r r_limit       - max total number of requests to send (100000000)
          -p crawl%        - node and link crawl probability (100%)
          -q hex           - repeat probabilistic scan with given seed
          -I string        - only follow URLs matching 'string'
          -X string        - exclude URLs matching 'string'
          -K string        - do not fuzz parameters named 'string'
          -D domain        - crawl cross-site links to another domain
          -B domain        - trust, but do not crawl, another domain
          -Z               - do not descend into 5xx locations
          -O               - do not submit any forms
          -P               - do not parse HTML, etc, to find new links
    
        Reporting options:
    
          -o dir          - write output to specified directory (required)
          -M              - log warnings about mixed content / non-SSL passwords
          -E              - log all HTTP/1.0 / HTTP/1.1 caching intent mismatches
          -U              - log all external URLs and e-mails seen
          -Q              - completely suppress duplicate nodes in reports
          -u              - be quiet, disable realtime progress stats
          -v              - enable runtime logging (to stderr)
    
        Dictionary management options:
    
          -W wordlist     - use a specified read-write wordlist (required)
          -S wordlist     - load a supplemental read-only wordlist
          -L              - do not auto-learn new keywords for the site
          -Y              - do not fuzz extensions in directory brute-force
          -R age          - purge words hit more than 'age' scans ago
          -T name=val     - add new form auto-fill rule
          -G max_guess    - maximum number of keyword guesses to keep (256)
    
          -z sigfile      - load signatures from this file
    
        Performance settings:
    
          -g max_conn     - max simultaneous TCP connections, global (40)
          -m host_conn    - max simultaneous connections, per target IP (10)
          -f max_fail     - max number of consecutive HTTP errors (100)
          -t req_tmout    - total request response timeout (20 s)
          -w rw_tmout     - individual network I/O timeout (10 s)
          -i idle_tmout   - timeout on idle HTTP connections (10 s)
          -s s_limit      - response size limit (400000 B)
          -e              - do not keep binary responses for reporting
    
        Other settings:
    
          -l max_req      - max requests per second (0.000000)
          -k duration     - stop scanning after the given duration h:m:s
          --config file   - load the specified configuration file
    
        Send comments and complaints to <heinenn@google.com>.
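
    The form-authentication options above can drive a scan behind a login page. A minimal sketch (the URLs, credentials, and file names are placeholder examples; the help output above marks -W and -o as required):

        # Authenticated scan via form login (all values are placeholders).
        touch learned.wl                      # -W expects a writable wordlist file
        skipfish -o auth-results -W learned.wl \
          --auth-form http://example.com/login \
          --auth-user admin --auth-pass secret \
          --auth-verify-url http://example.com/account \
          http://example.com/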

    1x02 Common usage examples

    • skipfish -o test [url]  # 'test' is the directory the scan report is written to
    • skipfish -o test @url.txt  # read the list of target URLs from the file url.txt
    • skipfish -o test -S complete.wl -W abc.wl [url]  # -S loads a supplemental read-only wordlist; -W sets the read-write wordlist (required); a fuller example follows below
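
    A fuller sketch of the dictionary options (the target URL and the learned.wl file name are placeholders; on Kali the stock dictionaries typically live under /usr/share/skipfish/dictionaries/):

        # Seed the scan from the stock read-only dictionary (-S) and record
        # newly learned keywords into a fresh read-write wordlist (-W).
        touch learned.wl
        skipfish -o test \
          -S /usr/share/skipfish/dictionaries/complete.wl \
          -W learned.wl \
          http://example.com/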

    • -I  follow only URLs containing 'string' (combined with -X, -l, and -m in the sketch below)
    • -X  skip URLs containing 'string'
    • -K  do not fuzz the parameter named 'string'
    • -D  crawl cross-site links into another domain
    • -l  maximum number of requests per second
    • -m  maximum concurrent connections per target IP
    • --config  load the specified configuration file
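
    A minimal sketch combining several of these scope and throttle options (the target URL, the /app/ and logout patterns, and the rate limits are placeholder examples):

        # Stay inside /app/, avoid logout links (which would end the session),
        # and throttle the scan to keep the load on the target low.
        touch learned.wl
        skipfish -o scoped -W learned.wl \
          -I /app/ -X logout \
          -l 50 -m 5 \
          http://example.com/app/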

  • Original article: https://www.cnblogs.com/iAmSoScArEd/p/10409288.html