• Python Ethical Hacking


    VULNERABILITY_SCANNER

    How do we discover vulnerabilities in a web application?

    1. Visit every possible page.

    2. Look for ways to send data to the web application (URLs + forms).

    3. Send payloads to discover vulnerabilities.

    4. Analyze the response to check if the website is vulnerable.

    -> The general steps are the same regardless of the vulnerability.

    Class Scanner.

    #!/usr/bin/env python
    
    import requests
    import re
    from urllib.parse import urljoin
    
    
    class Scanner:
        def __init__(self, url):
            self.target_url = url
            self.target_links = []
    
        def extract_links_from(self, url):
            response = requests.get(url)
            # Keep the closing quote outside the capture group so only
            # the href value is returned.
            return re.findall('(?:href=")(.*?)"', response.content.decode())
    
        def crawl(self, url):
            href_links = self.extract_links_from(url)
            for link in href_links:
                link = urljoin(url, link)
    
                if "#" in link:
                    link = link.split("#")[0]
    
                if self.target_url in link and link not in self.target_links:
                    self.target_links.append(link)
                    print(link)
                    self.crawl(link)
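    The link-extraction pattern can be checked on its own; note that the closing quote should sit outside the capture group so that only the href value is captured:

```python
import re

# Sample HTML exercising the href-extraction pattern; the closing
# quote sits outside the capture group so only the URL is captured.
html = '<a href="index.php">Home</a> <a href="login.php#top">Login</a>'
links = re.findall('(?:href=")(.*?)"', html)
print(links)  # ['index.php', 'login.php#top']
```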

    Vulnerability scanner.

    #!/usr/bin/env python
    
    import scanner
    
    target_url = "http://10.0.0.45/mutillidae/"
    vuln_scanner = scanner.Scanner(target_url)
    vuln_scanner.crawl(target_url)

    Running this program crawls the target and prints every discovered link.

    Polish the Python code using Default Parameters.

    Class Scanner.

    #!/usr/bin/env python
    
    import requests
    import re
    from urllib.parse import urljoin
    
    
    class Scanner:
        def __init__(self, url):
            self.target_url = url
            self.target_links = []
    
        def extract_links_from(self, url):
            response = requests.get(url)
            # Keep the closing quote outside the capture group so only
            # the href value is returned.
            return re.findall('(?:href=")(.*?)"', response.content.decode())
    
        def crawl(self, url=None):
            if url is None:
                url = self.target_url
            href_links = self.extract_links_from(url)
            for link in href_links:
                link = urljoin(url, link)
    
                if "#" in link:
                    link = link.split("#")[0]
    
                if self.target_url in link and link not in self.target_links:
                    self.target_links.append(link)
                    print(link)
                    self.crawl(link)
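    How crawl() normalizes a link before recursing can be seen in isolation (the base URL below is the example target from this tutorial):

```python
from urllib.parse import urljoin

# Resolve a relative href against the page it appeared on,
# then drop the "#fragment" part, as crawl() does.
base = "http://10.0.0.45/mutillidae/index.php"
link = urljoin(base, "login.php#form")
link = link.split("#")[0]
print(link)  # http://10.0.0.45/mutillidae/login.php
```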

    Vulnerability scanner.

    #!/usr/bin/env python
    
    import scanner
    
    target_url = "http://10.0.0.45/mutillidae/"
    vuln_scanner = scanner.Scanner(target_url)
    vuln_scanner.crawl()
  • Original source: https://www.cnblogs.com/keepmoving1113/p/11707593.html