Link Extractor In Python
Extracting all the links of a web page is a common task among web scrapers. It is useful for building advanced scrapers that crawl every page of a website to extract data, and it can also be used for SEO diagnostics or the information-gathering phase of a penetration test. In this tutorial, you will learn how to build a link extractor tool in Python from scratch, using only the requests and BeautifulSoup libraries.
Let's install the dependencies:
pip3 install requests bs4 colorama
We'll be using requests to make HTTP requests conveniently, BeautifulSoup for parsing HTML, and colorama for changing text color.
Open up a new Python file and follow along. Let's import the modules we need:
import requests
from urllib.parse import urlparse, urljoin
from bs4 import BeautifulSoup
import colorama
We are going to use colorama just to print in different colors, to distinguish between internal and external links:
# init the colorama module
colorama.init()
GREEN = colorama.Fore.GREEN
GRAY = colorama.Fore.LIGHTBLACK_EX
RESET = colorama.Fore.RESET
YELLOW = colorama.Fore.YELLOW
We will need two global variables, one for all the internal links of the website and one for all the external links:
# initialize the set of links (unique links)
internal_urls = set()
external_urls = set()
Not all links in anchor tags (<a> tags) are valid (I've experimented with this): some are links to parts of the website and some are javascript code. So let's write a function to validate URLs:
def is_valid(url):
    """
    Checks whether `url` is a valid URL.
    """
    parsed = urlparse(url)
    return bool(parsed.netloc) and bool(parsed.scheme)
This makes sure that a proper scheme (protocol, e.g. HTTP or HTTPS) and a domain name exist in the URL.
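For instance, here is how the function behaves on a few sample URLs (these example calls are mine, just for illustration):

print(is_valid("https://www.thepythoncode.com"))  # True
print(is_valid("/search"))                        # False, no scheme and no domain
print(is_valid("javascript:void(0)"))             # False, no domain (netloc)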
Now let's build a function to return all the valid URLs of a web page:
def get_all_website_links(url):
    """
    Returns all URLs that are found on `url` and belong to the same website
    """
    # all URLs of `url`
    urls = set()
    # domain name of the URL without the protocol
    domain_name = urlparse(url).netloc
    soup = BeautifulSoup(requests.get(url).content, "html.parser")
First, I initialized the urls set variable; I've used a Python set here because we don't want redundant links. Second, I've extracted the domain name from the URL; we will need it to check whether a grabbed link is external or internal. Third, I've downloaded the HTML content of the web page and wrapped it with a soup object to ease HTML parsing.
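To illustrate what urlparse() gives us, here is a small standalone snippet (the URL is just an example):

from urllib.parse import urlparse

parsed = urlparse("https://www.thepythoncode.com/articles/scraping")
print(parsed.scheme)   # 'https'
print(parsed.netloc)   # 'www.thepythoncode.com'
print(parsed.path)     # '/articles/scraping'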
Let's get all the HTML <a> tags (the anchor tags that contain the links of the web page):
    for a_tag in soup.findAll("a"):
        href = a_tag.attrs.get("href")
        if href == "" or href is None:
            # href empty tag
            continue
So we get the href attribute and check if there is something there. Otherwise, we just continue to the next link.
Since not all links are absolute, we will need to join relative URLs with their domain name (e.g., when href is "/search" and the URL is "google.com", the result will be "google.com/search"):
        # join the URL if it's relative (not absolute link)
        href = urljoin(url, href)
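As a quick illustration of how urljoin() behaves (a standalone snippet, not part of the scraper):

from urllib.parse import urljoin

# a relative href is resolved against the page URL
print(urljoin("https://www.google.com", "/search"))
# https://www.google.com/search

# an absolute href is left as it is
print(urljoin("https://www.google.com", "https://example.com/page"))
# https://example.com/page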
Now we need to remove HTTP GET parameters from the URLs, since they would cause redundancy in the set; the code below handles that:
        parsed_href = urlparse(href)
        # remove URL GET parameters, URL fragments, etc.
        href = parsed_href.scheme + "://" + parsed_href.netloc + parsed_href.path
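For example, rebuilding a URL from only its scheme, domain, and path drops the query string and the fragment (standalone snippet):

from urllib.parse import urlparse

parsed_href = urlparse("https://www.google.com/search?q=python#results")
print(parsed_href.scheme + "://" + parsed_href.netloc + parsed_href.path)
# https://www.google.com/search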
Let's finish up the function:
        if not is_valid(href):
            # not a valid URL
            continue
        if href in internal_urls:
            # already in the set
            continue
        if domain_name not in href:
            # external link
            if href not in external_urls:
                print(f"{GRAY}[!] External link: {href}{RESET}")
                external_urls.add(href)
            continue
        print(f"{GREEN}[*] Internal link: {href}{RESET}")
        urls.add(href)
        internal_urls.add(href)
    return urls
All we did here is check whether the URL is valid, whether we have already seen it, and whether it is an external link (a link whose domain differs from the page's domain). Finally, after all these checks, the URL is an internal link; we print it and add it to both the urls and internal_urls sets.
The above function only grabs the links of one specific page; what if we want to extract all the links of the entire website? Let's do this:
# number of urls visited so far will be stored here
total_urls_visited = 0

def crawl(url, max_urls=30):
    """
    Crawls a web page and extracts all links.
    You'll find all links in `external_urls` and `internal_urls` global set variables.
    params:
        max_urls (int): number of max urls to crawl, default is 30.
    """
    global total_urls_visited
    total_urls_visited += 1
    print(f"{YELLOW}[*] Crawling: {url}{RESET}")
    links = get_all_website_links(url)
    for link in links:
        if total_urls_visited > max_urls:
            break
        crawl(link, max_urls=max_urls)
This function crawls the website: it gets all the links of the first page and then calls itself recursively to follow every link it extracted. However, this can cause problems; the program would get stuck on large websites with many links, such as google.com. That is why I've added a max_urls parameter to exit when we reach a certain number of checked URLs.
Alright, let's test this. Make sure you only use it on a website you're authorized to crawl; otherwise I'm not responsible for any harm you cause.
if __name__ == "__main__":
    max_urls = 30
    crawl("https://www.thepythoncode.com", max_urls=max_urls)
    print("[+] Total Internal links:", len(internal_urls))
    print("[+] Total External links:", len(external_urls))
    print("[+] Total URLs:", len(external_urls) + len(internal_urls))
    print("[+] Total crawled URLs:", max_urls)
I'm testing on this website. However, I highly encourage you not to do that; it will cause a lot of requests, crowd the web server, and may get your IP address blocked.
After the crawling finishes, it'll print the total links extracted and crawled:
[+] Total Internal links: 90
[+] Total External links: 137
[+] Total URLs: 227
[+] Total crawled URLs: 30
Awesome, right? I hope this tutorial was useful and inspires you to build such tools using Python.
Some websites load most of their content using JavaScript; for those, we need to use the requests_html library instead, which enables us to execute JavaScript using Chromium.
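Here is a minimal sketch of how that could look with requests_html (assuming it is installed with pip3 install requests-html; the first call to render() downloads Chromium):

from requests_html import HTMLSession

session = HTMLSession()
response = session.get("https://www.thepythoncode.com")
# execute the page's JavaScript in Chromium before reading the links
response.html.render()
# absolute_links is a set of all links on the page as absolute URLs
print(response.html.absolute_links)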
Requesting the same website many times in a short period of time may cause it to block your IP address; in that case, you need to use a proxy server.
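With requests, that is a matter of passing a proxies dictionary to the request (the addresses below are placeholders, not real servers):

import requests

# placeholder proxy addresses; replace them with a proxy you are allowed to use
proxies = {
    "http": "http://10.10.1.10:3128",
    "https": "http://10.10.1.10:1080",
}
response = requests.get("https://www.thepythoncode.com", proxies=proxies)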
I edited the code a little bit, so you can save the output URLs in a file and also pass the URL as a command-line argument.
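A rough sketch of what those additions could look like, using argparse for the URL and writing the collected links to text files (the flag names and file names here are my own choices, not necessarily the ones used in the repository; it relies on the crawl function and the global sets defined above):

import argparse

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Link Extractor Tool with Python")
    parser.add_argument("url", help="The URL to extract links from.")
    parser.add_argument("-m", "--max-urls", type=int, default=30,
                        help="Number of max URLs to crawl, default is 30.")
    args = parser.parse_args()

    crawl(args.url, max_urls=args.max_urls)

    # save the internal and external links to files named after the domain
    domain_name = urlparse(args.url).netloc
    with open(f"{domain_name}_internal_links.txt", "w") as f:
        for link in internal_urls:
            print(link, file=f)
    with open(f"{domain_name}_external_links.txt", "w") as f:
        for link in external_urls:
            print(link, file=f)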
Here is the source code of the article: https://github.com/KoderKumar/Link-Extractor
Thank you for reading my article. If you liked it, give me a follow.
Author: @arth_kumar11