• Python Ethical Hacking


    VULNERABILITY_SCANNER

    How do you discover a vulnerability in a web application?

    1. Visit every possible page.

    2. Look for ways to send data to the web application (URLs + forms).

    3. Send payloads to discover vulnerabilities.

    4. Analyze the response to check if the website is vulnerable.

    -> The general steps are the same regardless of the vulnerability; a minimal sketch of steps 3 and 4 is shown below.
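
    As that sketch, assume a single URL parameter is tested for a reflected payload (the helper name is_reflected and the test payload are hypothetical, not part of the scanner built below):

    #!/usr/bin/env python
    # Sketch: send a test payload through a URL parameter (step 3) and
    # check whether it is reflected in the response (step 4).
    import requests


    def is_reflected(url, payload="<sCript>test</sCript>"):
        # Hypothetical helper: append the payload to the URL and look for it
        # in the response body; a reflection suggests a possible XSS issue.
        response = requests.get(url + payload)
        return payload in response.content.decode(errors="ignore")


    # Example call against a URL that ends in an injectable parameter.
    # print(is_reflected("http://10.0.0.45/mutillidae/index.php?page="))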

    Class Scanner.

    #!/usr/bin/env python
    
    import requests
    import re
    from urllib.parse import urljoin
    
    
    class Scanner:
        def __init__(self, url):
            self.target_url = url
            self.target_links = []
    
        def extract_links_from(self, url):
            # Request the page and return every value found inside href="..."
            response = requests.get(url)
            return re.findall('(?:href=")(.*?)"', response.content.decode())

        def crawl(self, url):
            href_links = self.extract_links_from(url)
            for link in href_links:
                # Resolve relative links against the current page URL.
                link = urljoin(url, link)

                # Strip fragments so the same page is not crawled twice.
                if "#" in link:
                    link = link.split("#")[0]

                # Follow only unvisited links that belong to the target site.
                if self.target_url in link and link not in self.target_links:
                    self.target_links.append(link)
                    print(link)
                    self.crawl(link)
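
    For reference, re.findall with a single capturing group returns only the captured text, so extract_links_from yields just the href values. A standalone illustration (not part of the class):

    #!/usr/bin/env python
    # Standalone illustration of the href-extraction regular expression.
    import re

    html = '<a href="index.php?page=home.php">Home</a> <a href="#top">Top</a>'
    print(re.findall('(?:href=")(.*?)"', html))
    # ['index.php?page=home.php', '#top']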

    Vulnerability scanner.

    #!/usr/bin/env python
    
    # scanner.py is assumed to contain the Scanner class defined above
    import scanner
    
    target_url = "http://10.0.0.45/mutillidae/"
    vuln_scanner = scanner.Scanner(target_url)
    vuln_scanner.crawl(target_url)

    The program runs fine: it recursively crawls the target and prints every in-scope link it discovers.

    Polish the Python code using default parameters, so crawl() can be called without passing the URL explicitly.

    Class Scanner.

    #!/usr/bin/env python
    
    import requests
    import re
    from urllib.parse import urljoin
    
    
    class Scanner:
        def __init__(self, url):
            self.target_url = url
            self.target_links = []
    
        def extract_links_from(self, url):
            # Request the page and return every value found inside href="..."
            response = requests.get(url)
            return re.findall('(?:href=")(.*?)"', response.content.decode())

        def crawl(self, url=None):
            # On the first call, default to the target URL given to __init__.
            if url is None:
                url = self.target_url
            href_links = self.extract_links_from(url)
            for link in href_links:
                # Resolve relative links against the current page URL.
                link = urljoin(url, link)

                # Strip fragments so the same page is not crawled twice.
                if "#" in link:
                    link = link.split("#")[0]

                # Follow only unvisited links that belong to the target site.
                if self.target_url in link and link not in self.target_links:
                    self.target_links.append(link)
                    print(link)
                    self.crawl(link)
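
    The None default works as a sentinel: self.target_url cannot be used directly in the def line (self does not exist when the default is evaluated), so crawl() falls back to it at call time. A minimal standalone illustration of the idiom, with hypothetical names:

    #!/usr/bin/env python
    # Sentinel-default idiom: fall back to an instance attribute at call time.
    class Greeter:
        def __init__(self, default_name):
            self.default_name = default_name

        def greet(self, name=None):
            if name is None:
                name = self.default_name
            print("Hello,", name)


    Greeter("world").greet()   # prints: Hello, world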

    Vulnerability scanner.

    #!/usr/bin/env python
    
    import scanner
    
    target_url = "http://10.0.0.45/mutillidae/"
    vuln_scanner = scanner.Scanner(target_url)
    vuln_scanner.crawl()
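
    Once crawl() returns, every in-scope URL is stored on the scanner object; a hypothetical continuation (not in the original script) would hand these links to the payload checks from steps 3 and 4:

    # Hypothetical continuation of the script above.
    for link in vuln_scanner.target_links:
        print("Discovered:", link)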