As I was starting my SEO and digital marketing journey, I often found myself having trouble setting up and using the many available tools. I remember being up in arms about it, confused by all the different terms, sophisticated charts, and dizzying statistics.
Trying to learn the software and the SEO concepts at the same time is hard, and you will often find yourself spread too thin.
That is why, as a more experienced professional now, I aim to give beginners a hand so they can spend more of their time learning the concepts of SEO that really matter.
To put that newfound SEO knowledge to use, you need a firm grasp of the software you are working with. That is where I would like to guide you through one such tool: Screaming Frog.
In this post, I will cover what Screaming Frog is, how to set it up, and how to run a crawl, so you get the data you need for your technical SEO audit.
What is Screaming Frog?
Google evaluates your site through a process commonly known in the digital niche as “crawling.” It may sound weird, but it fits once you think about it.
Google sends a bot to your site to ‘crawl’ your content. What that means is that the bot will examine a page, find links to other pages on your website, and then follow them.
I guess “follow” would be a more fitting term, but “crawling” has stuck. And admittedly, it sounds way cooler.
Anyway, back to the topic.
Realize that there are millions of websites, and Google cannot put all of them on the SERPs. Many site owners wonder why their sites and articles never show up in Google’s search results.
Well, some factors play essential roles in whether your site and content get indexed or not. Examining all of them is laborious, and webmasters have little insight into how Google goes about crawling their sites.
This is where Screaming Frog comes in. The Screaming Frog SEO Spider is a website crawler that lets you crawl websites much like a Google bot would.
The tool retrieves critical on-site data and organizes it into understandable stats and reports, making it easy to identify issues and evaluate changes as part of a technical SEO or website audit.
Screaming Frog allows you to:
- Find broken links for you to fix
- Find temporary and permanent redirects
- Analyze metadata
- Find duplicate content
- Review robots.txt and other directives
- Create XML sitemaps
- Evaluate site architecture, among other things.
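To make the idea of crawling more concrete, here is a rough sketch in Python of what any crawler does at its core: fetch a page, record its status code, extract the links, and follow the ones that stay on the same site. This is only a toy illustration, not how Screaming Frog or Googlebot actually works, and the example.com start URL is just a placeholder.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkParser(HTMLParser):
    """Collects the href value of every <a> tag on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(start_url, max_pages=10):
    """Breadth-first crawl of a single site, recording each URL's status code."""
    domain = urlparse(start_url).netloc
    queue, seen, results = [start_url], {start_url}, {}
    while queue and len(results) < max_pages:
        url = queue.pop(0)
        try:
            with urlopen(url, timeout=10) as response:
                results[url] = response.status
                html = response.read().decode("utf-8", errors="ignore")
        except Exception as error:  # broken link, timeout, blocked request, etc.
            results[url] = str(error)
            continue
        parser = LinkParser()
        parser.feed(html)
        for href in parser.links:
            absolute = urljoin(url, href)
            # Stay on the same site, just as a site-focused crawler would.
            if urlparse(absolute).netloc == domain and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return results


if __name__ == "__main__":
    for url, status in crawl("https://example.com").items():
        print(status, url)
```

Screaming Frog does all of this for you at scale, on top of collecting metadata, directives, and much more.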
Factors that affect website crawling
As I said earlier, crawling a website is affected by many different factors. Here are the most prominent of them.
Domain name

The crawl rate tends to be higher for domains with good traffic and authority. Google has also, at various points, placed weight on domain names: domains that contain the main keyword have historically been given some priority over those that do not.
Relevant: Also read the review of GoDaddy here.
Backlinks

If your site has little to no backlinks pointing to it, crawlers may assume that your content is of low quality or value, even if it is ranking decently.
The underlying assumption of search engines is that the more backlinks your site has, the more reputable your website is.
Internal linking

Internal linking is the practice of linking one page on a site to another page within the same site. Internal links are useful for navigation, information hierarchy, and passing link equity.
Internal linking is a good practice not only for SEO rankings but also for keeping users engaged. Who would want to stay on a hard-to-navigate site?
There are numerous discussions on internal linking and anchor text usage, but there is no universally correct way to do it. What matters is recognizing that internal linking is a good practice to adopt on your site.
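If you want a feel for how a crawler tells internal links apart from external ones, here is a small Python sketch that splits a handful of raw href values by domain. The URLs are made up for illustration; this is not Screaming Frog’s own logic, just the general idea behind its internal and external link reports.

```python
from urllib.parse import urljoin, urlparse


def split_links(page_url, hrefs):
    """Split raw href values into internal and external links relative to page_url."""
    site = urlparse(page_url).netloc
    internal, external = [], []
    for href in hrefs:
        absolute = urljoin(page_url, href)  # resolves relative paths like "/about"
        if urlparse(absolute).netloc == site:
            internal.append(absolute)
        else:
            external.append(absolute)
    return internal, external


internal, external = split_links(
    "https://example.com/blog/post",
    ["/about", "contact.html", "https://twitter.com/example"],
)
print("Internal:", internal)
print("External:", external)
```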
XML sitemap

A well-built XML sitemap is a sort of roadmap that leads Google to all of your relevant pages. Some pages will inevitably end up with no internal links pointing to them, making them quite challenging to discover.
An XML sitemap lists the pages of a website and signals to Google when the site has been updated, which encourages the bot to come back and crawl it.
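If your CMS does not generate a sitemap for you, the file itself is simple enough to build by hand. Here is a minimal Python sketch that writes one <loc> entry per page; the example.com URLs are placeholders, and a real sitemap can also carry optional fields such as <lastmod>.

```python
from xml.etree import ElementTree as ET


def build_sitemap(urls, path="sitemap.xml"):
    """Write a minimal XML sitemap containing one <url><loc> entry per page."""
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = url
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)


build_sitemap([
    "https://example.com/",
    "https://example.com/about",
    "https://example.com/blog/first-post",
])
```

Screaming Frog’s own Sitemaps menu, covered later in this post, can generate one of these for you from a finished crawl.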
URL Canonicalization

If one page is accessible through multiple URLs, or different pages carry more or less the same content, then Google may interpret these as duplicate pages. The bot will choose one URL to treat as the original (canonical) version and crawl it regularly, while the other pages are treated as duplicates and crawled less often.
Choosing a canonical URL specifies which page should be shown in search results and consolidates signals from its duplicates into that page. This is essential for proper SEO.
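To see what the bot is actually looking at, the canonical version of a page is declared with a <link rel="canonical"> tag in the page’s <head>. The Python sketch below pulls that tag out of some sample HTML; the markup and URL are made up for illustration.

```python
from html.parser import HTMLParser


class CanonicalFinder(HTMLParser):
    """Finds the href of a <link rel="canonical"> tag, if the page declares one."""

    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "link" and attrs.get("rel") == "canonical":
            self.canonical = attrs.get("href")


html = """
<html><head>
  <title>Red shoes</title>
  <link rel="canonical" href="https://example.com/shoes/red">
</head><body>...</body></html>
"""
finder = CanonicalFinder()
finder.feed(html)
print(finder.canonical)  # https://example.com/shoes/red
```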
Meta Tags

Meta tags are snippets of text that outline and describe the content of a page. The meta description is the snippet you can see under a page’s title in the search results; other meta tags are not visible on the results page and can only be seen in a site’s source code.
To help your pages rank, make sure each page has unique meta tags that do not compete with one another.
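As a small illustration of what “analyzing metadata” means in practice, the Python sketch below pulls the title and meta description out of some sample HTML and prints their lengths. The HTML is made up, and the idea that titles and descriptions have “safe” lengths is a rule of thumb rather than a published limit.

```python
from html.parser import HTMLParser


class MetaAudit(HTMLParser):
    """Collects the <title> text and the content of the meta description tag."""

    def __init__(self):
        super().__init__()
        self.in_title = False
        self.title = ""
        self.description = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self.in_title = True
        elif tag == "meta" and attrs.get("name") == "description":
            self.description = attrs.get("content")

    def handle_endtag(self, tag):
        if tag == "title":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.title += data


html = """
<html><head>
  <title>Screaming Frog basics for beginners</title>
  <meta name="description" content="How to set up Screaming Frog and run your first crawl.">
</head><body>...</body></html>
"""
audit = MetaAudit()
audit.feed(html)
# Character counts are rough rules of thumb; search engines do not publish exact limits.
print(f"Title ({len(audit.title)} chars): {audit.title}")
print(f"Description ({len(audit.description)} chars): {audit.description}")
```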
With Screaming Frog, you can swiftly crawl, analyze, and audit websites of all sizes. Key on-site SEO elements can be exported as CSV files and opened in separate spreadsheet software.
Where to begin?
Installation

To start, you will, of course, need to install the tool on your computer. It is available for Windows, macOS, and Linux, and is free for crawling up to 500 URLs at a time.
If you want to remove the URL limit and get access to more premium features, then you can purchase a license.
License activation
If you wish to use the fully unlocked version, then you will have to buy a license. When you do, you will be given a license key, which should be entered under ‘License > Enter License.’
When entered correctly, a dialog box will confirm that the license key is valid and show its expiry date.
Familiarize yourself with the user interface
Before touching anything, I suggest familiarizing yourself with the tabs, settings, and menus.
- File. Under the ‘File’ menu, you can save your crawls to files in case you cannot finish them in one session. If you forget to save one, you can still access the last six crawls you performed under this tab.
- Configuration. This can be considered the most important menu to familiarize yourself with. Click on ‘Spider’ to customize which parts of the site get crawled and which data you want to see.
- Bulk Export. This menu allows you to export addresses with particular response codes, inlinks, directives, images, and more.
- Reports. The reports menu lets you download an overview of your crawling session as well as reports on a specific set of data such as canonical errors and redirect links.
- Sitemaps. The sitemaps menu lets you construct a sitemap for your site.
Setting up your device
Setting up memory and storage
As you start crawling sites, you will notice that some websites are larger than others, and as such, require more memory and processing power. If you want to go straight to site crawling, or are using the limited free version, then feel free to ignore this step.
But if you want to crawl large sites, or many of them, then it would be wise to allocate more of your machine’s RAM to Screaming Frog.
If you have an SSD, then it is suggested to switch to database storage mode through ‘Configuration > System > Storage’ and choose the Database Storage Mode option.
If not, you can stick with RAM storage mode, which defaults to 1 GB of RAM on 32-bit machines and 2 GB on 64-bit machines. Whatever you allocate, keep it at least 2 GB below your machine’s total RAM to prevent freezing and crashes.
When you are done allocating more RAM, restart the software for the changes to apply.
Adjusting configurations
You do not need to make hefty changes to Screaming Frog, as the SEO Spider is already set up to crawl sites much like Google does. There is, however, a wealth of ways to configure your crawl and collect only the data you need, saving not only time but also processing power.
If you are crawling a site that relies on JavaScript, you can enable JavaScript rendering mode under ‘Configuration > Spider > Rendering.’ JavaScript will then be executed so that the spider can crawl the rendered HTML.
The different configuration options will need a separate detailed technical post. For now, though, know that the basic built-in SEO spider configuration is enough for many of your needs.
Using Screaming Frog
Now that you know how to set up the memory, storage, and configuration, and are familiar with the interface, it is time to use the tool to crawl a site.
Start a crawl
There are two modes for crawling a site: the default ‘Spider’ mode which crawls a single website, and the ‘List’ mode which will crawl a list of URLs you can enter through copy and paste.
Start a regular Spider crawl by entering the homepage URL into the field at the top and clicking ‘Start.’

The crawl updates in real time, and metrics such as speed and the number of URLs crawled are shown in the status bar at the bottom. If you want to crawl a list of URLs rather than a whole site, click ‘Mode > List’ to either upload or copy and paste a list of URLs.
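One common way to build such a list is to pull it straight from the site’s XML sitemap. The Python sketch below prints one URL per line, ready to paste into List mode; the sitemap address is a placeholder, and it assumes a single standard sitemap rather than a sitemap index.

```python
from urllib.request import urlopen
from xml.etree import ElementTree as ET

# example.com is a placeholder; point this at your own sitemap.
SITEMAP_URL = "https://example.com/sitemap.xml"
NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

with urlopen(SITEMAP_URL, timeout=10) as response:
    tree = ET.parse(response)

# One URL per line, ready to paste into List mode.
for loc in tree.iter(f"{NS}loc"):
    print(loc.text.strip())
```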
When crawling, take note that:
- You can save the crawl at any moment (only with a license).
- You can pause the crawl and resume it later whenever you want.
- Exiting Screaming Frog or turning off your device will lose any unsaved data.
- With a license, you can save the crawl, exit the program, and resume from the saved spot later.
Viewing crawl data
The top window has tabs that each focus on a different element. Each tab has filters that divide the data by type, and there are tabs dedicated to potential errors discovered during the crawl.
Click a URL in the top window and a set of tabs will appear in the bottom half of the screen; click these lower tabs to populate the lower panel with details about that URL.
Viewing potential issues
On the right-hand side of the screen, there is an ‘Overview’ tab, which summarizes the crawl data within each of the other tabs. You can use it to review errors and issues without needing to click into each tab.
Clicking an entry in the Overview takes you directly to the relevant tab and filter.
Exporting the data
From the crawl, you can export the data into spreadsheet software. Simply click the ‘Export’ button to export the data from the upper tabs and filters.
To export lower-window data, right-click the URLs you wish to export data for, then choose one of the four options: Inlinks, Outlinks, Image Details, and Resources.
The Bulk Export menu mentioned earlier lets you export source links in bulk, such as those with particular response codes, canonicals, or directives.
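Once you have an export, a few lines of Python can pull out exactly the rows you care about. The sketch below filters an exported internal crawl file for 404s; the file name and the ‘Address’ and ‘Status Code’ column headings are assumptions based on a typical export, so check the header row of your own file before running it.

```python
import csv

# File name and column names are assumptions; check the header row of your own export.
with open("internal_all.csv", newline="", encoding="utf-8") as f:
    rows = csv.DictReader(f)
    broken = [row["Address"] for row in rows if row.get("Status Code") == "404"]

print(f"{len(broken)} URLs returned a 404:")
for url in broken:
    print(" -", url)
```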
Saving and opening crawls
As stated earlier, you can only save and resume crawls with a license. In the default memory storage mode, you can save a crawl at any time while it is paused or once it has finished, and re-open it later.
In database storage mode (if you opted into it), crawls are saved automatically.
You can then browse the automatically stored crawls in a window that lists them all, and organize, duplicate, export, or delete them from there.
Wrapping up!
And there you have it! Once you have the data, it is time to give it context through a technical SEO audit, though that is beyond the scope of this post.
It is important to understand that Screaming Frog is NOT a technical SEO audit tool in itself; it is software that helps you gather the data and reports needed for a technical SEO audit.
I hope this article helps you set up Screaming Frog for yourself and run your first crawl.
Want to learn about more SEO tools? Check out these reviews I did recently.
- Getting to know Surfer SEO
- What is BuzzSumo and is it good for your business?
- Exploring Ahrefs as an SEO tool
How was your experience using Screaming Frog? Let me know in the comments below!