Technical SEO is the set of optimizations to a site's internal structure that affect organic search results. The goal is to make pages faster, easier to understand, crawlable and indexable. Even if you create amazing content with images, videos, links and so on, Google will not put your site at the top of the results if the essential part is missing: technical SEO, the base of the pyramid. Although the focus of technical SEO is to show search engines how the site works, it also aims to deliver the best user experience, and the company benefits through increased ROI (return on investment). In this way, the investment in technical SEO, as in other areas, returns as profit that can be applied to other business strategies.
In the article “Introduction to Technical SEO” we gave practical examples covering HTTPS, Schema markup, Core Web Vitals, mobile, browsers, hreflang, canonicalization, robots.txt and XML sitemaps. Now let's look in depth, step by step, at how these and other technical SEO elements work.
First, before putting technical SEO into practice, you need to audit the site for errors, problems, opportunities for improvement, and ideas for new content (derived from keyword research).
The steps of a proper audit are:
Once the audit is done, it’s time to put technical SEO into practice, with the following tools:
Before you can even create an XML sitemap, you must make your site visible on Google by registering your domain/business; this is known as local SEO. In short, submit your complete URL here: https://www.google.com.br/intl/pt-BR/add_url.html, and create a Google My Business profile: https://www.google.com/intl/pt-PT_pt/business/, plus a Google Maps listing for your company/website. Once these steps are done, you can submit the XML sitemap in Google Search Console. The search engines can then crawl the site/domain effectively and quickly.
This is the declaration that opens an XML sitemap file, which is usually placed at the root of a website:
<?xml version="1.0" encoding="UTF-8"?>
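For context, a minimal complete sitemap file might look like the sketch below; the domain and date are placeholders, and `lastmod` is optional:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2023-01-01</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/blog/</loc>
  </url>
</urlset>
```

Each page you want crawled gets its own `<url>` entry with a `<loc>` element containing the full, canonical URL.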
Another way to create a sitemap without having a CMS is the free tool Screaming Frog (also very useful for checking other technical SEO parameters). Go to Mode > Spider, paste your homepage URL into the box labeled “Enter URL to spider” and hit “Start”. When the crawl is done, a tab in the bottom right corner will show “Completed” plus a number; if that number is 499 or below, go to Screaming Frog's “XML Sitemap” option and press “Next” to save the file to your computer. You can then export it into SEO software like Ahrefs, or submit it to Google.
Once you have created the sitemap, it has to be submitted to Google. In Google Search Console there is a tab called “Sitemaps” where you add/submit the URL of your sitemap. Google will process it and report that it was a “success”. In Google Search Console you can submit several sitemaps for the same website; in fact, if a site is very large (e-commerce, for example), it should have several sitemaps split by categories, posts, pages, etc., rather than a single general sitemap.
However, pages listed in the XML sitemap may still fail to index: server errors, redirect errors such as a redirect loop, URLs blocked by the robots.txt file, URLs blocked by a “noindex” tag, or non-existent URLs (404 errors). To identify these problems, check the index coverage reports in Google Search Console. Each URL should be analyzed to correct the error preventing it from being indexed.
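Before checking each URL, you need the list of URLs the sitemap declares. A minimal sketch with Python's standard library, assuming a sitemap in the standard sitemaps.org format (the example document is hypothetical):

```python
# Extract the URLs listed in a sitemap so each one can then be
# checked individually (status code, robots.txt rules, noindex tag).
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_xml: str) -> list[str]:
    """Return every <loc> URL found in a sitemap document."""
    root = ET.fromstring(sitemap_xml)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

example = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://www.example.com/</loc></url>
  <url><loc>https://www.example.com/blog/</loc></url>
</urlset>"""

print(sitemap_urls(example))
# → ['https://www.example.com/', 'https://www.example.com/blog/']
```

Feeding each extracted URL to a crawler or to Search Console's URL Inspection tool then tells you why a given page is not indexing.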
Migration is one of the most challenging tasks for any SEO and can be done in several ways, depending on the situation and the skills of each SEO. It is a process that requires a lot of planning, knowledge of the techniques and thorough analysis to minimize possible losses in organic search results, visits and revenue. When it comes to moving a site, we should not do it in a hurry, so we should follow these steps:
First, we need to understand how Googlebot crawling works:
The crawl budget is the rate and number of pages that the search engine intends to crawl on a site. More crawling does not mean a site will rank better, but if a site's pages are not crawled and indexed, they will not rank at all. Crawl budget can be a concern for newer sites, especially those with many pages: if a site is not yet very popular, the search engine may not want to crawl it much. It can also be a concern for larger sites with many pages, or for sites that are not updated often.
To speed up the crawling of a site's pages, an SEO should first identify which pages have this problem: in Google Search Console, the crawl stats reports show the date and time when pages were last crawled, or you can use log analysis tools (log files) like Splunk for more complex checks. Another thing you can do is speed up the server and increase its resources, since Google crawls pages by downloading resources and then processing them. Off-page SEO is also part of the solution here: the more internal and external links (URLs) a site has, the better. You can also fix broken links, set up redirects, and use the Indexing API.
A log file is a file kept on a web server that records every request the server receives. Log files help technical SEO professionals better understand how sites are crawled. They are also one of the only ways to see the actual behavior of Googlebot on a website, and they provide useful data and valuable information for optimization and data-driven decisions. Log files are important because they contain information that is not available anywhere else. They are collected and maintained by the website's web server for a certain period of time.
A Log File typically looks like this:
27.300.14.1 - - [14/Sep/2017:17:10:07 -0400] "GET https://allthedogs.com/dog1/ HTTP/1.1" 200 "https://allthedogs.com" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
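A line like that can be split into named fields with a short script. A minimal sketch in Python: the regular expression matches this example's layout, but real log formats vary by server, so you would adjust the pattern to your own logs.

```python
import re

# Parse one access-log line (the format shown above) into named fields.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ '
    r'\[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<url>\S+) (?P<protocol>[^"]+)" '
    r'(?P<status>\d{3}) '
    r'"(?P<referrer>[^"]*)" '
    r'"(?P<user_agent>[^"]*)"'
)

line = ('27.300.14.1 - - [14/Sep/2017:17:10:07 -0400] '
        '"GET https://allthedogs.com/dog1/ HTTP/1.1" 200 '
        '"https://allthedogs.com" '
        '"Mozilla/5.0 (compatible; Googlebot/2.1; '
        '+http://www.google.com/bot.html)"')

hit = LOG_PATTERN.match(line).groupdict()
print(hit["method"], hit["url"], hit["status"])
# → GET https://allthedogs.com/dog1/ 200

# Flag whether this request claims to come from Googlebot.
is_googlebot = "Googlebot" in hit["user_agent"]
```

Running this over a whole log file and filtering on `is_googlebot` shows exactly which URLs Googlebot requested, when, and with what status codes (to be certain a hit is genuine, Google recommends verifying the requester's IP as well, since user-agent strings can be spoofed).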
The access method depends on the hosting solution. In most cases, to analyze the log files you will first need to request access to them from a developer, or retrieve them from the web server (Apache, NGINX, IIS) or the CDN. For example, you can use Logflare, a Cloudflare application where log files are stored in a BigQuery database, or services such as Sucuri, Kinsta CDN, Netlify CDN and Amazon CloudFront.
GET and POST are requests sent to the server. A GET request is used to request resources such as HTML files, listings of registered products, or forms; the submitted data travels in the URL and is limited in length (up to 255 characters in some implementations). These forms can be crawled by Google. POST, on the other hand, is a method of transferring data to a server in the request body, which allows you to send larger payloads, information to be processed, such as images or customer data. Unlike the GET method, Google cannot crawl POST forms in most cases.
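The difference is easy to see when building requests by hand. A minimal sketch with Python's standard library; the URLs are placeholders and nothing is actually sent over the network here:

```python
# GET vs. POST, built with Python's standard library.
from urllib.parse import urlencode
from urllib.request import Request

# GET: parameters travel in the URL itself, so they are visible,
# length-limited, and crawlable.
params = urlencode({"q": "technical seo", "page": 1})
get_req = Request("https://www.example.com/search?" + params)

# POST: data travels in the request body, so larger payloads are fine,
# but the resulting page is generally not crawlable by Google.
body = urlencode({"name": "Ana", "message": "Hello"}).encode("utf-8")
post_req = Request("https://www.example.com/contact", data=body)

print(get_req.get_method())   # → GET
print(post_req.get_method())  # → POST
```

`urllib.request.Request` infers the method from whether a body is attached, which mirrors the conceptual split: GET asks for a resource, POST submits data to be processed.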
The availability of a site is very important to check: if a user comes across an error page and can't access the site, or can't find it at all, they will not come back looking for it. When this error happens and Google can't access the site, its pages are not indexed, and if it happens regularly, Google concludes that the site no longer exists and removes it from the search results. Usually these problems are related to the site's hosting services, so make sure you talk to the IT team and let them know about these problems as soon as possible.
Another unavailability problem is the 404 error, page not found, where the user cannot see the page's content. Google tends to devalue these pages, so you should be aware of them and correct the page's URL; a 301 redirect is usually applied to the link. You can also choose to create a custom error page with a pleasant, dynamic layout and links to other pages, so that instead of leaving the site the user reaches other pages of it; this internal linking can make the process advantageous. You can check for these errors through Google Search Console or Dead Link Checker.
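Once a crawl or a Search Console export gives you a status code per URL, triaging them is straightforward. A minimal sketch; the (url, status) pairs are hypothetical examples standing in for real crawl data:

```python
# Bucket crawled URLs by HTTP status so broken links (4xx) and
# redirects can be reviewed and fixed.
def triage(results: list[tuple[str, int]]) -> dict[str, list[str]]:
    buckets: dict[str, list[str]] = {
        "ok": [], "redirect": [], "broken": [], "server_error": [],
    }
    for url, status in results:
        if 200 <= status < 300:
            buckets["ok"].append(url)            # fine as-is
        elif 300 <= status < 400:
            buckets["redirect"].append(url)      # check redirect target
        elif 400 <= status < 500:
            buckets["broken"].append(url)        # fix or 301-redirect
        else:
            buckets["server_error"].append(url)  # escalate to hosting/IT
    return buckets

crawl = [
    ("https://www.example.com/", 200),
    ("https://www.example.com/old-page", 301),
    ("https://www.example.com/missing", 404),
]
report = triage(crawl)
print(report["broken"])
# → ['https://www.example.com/missing']
```

The “broken” bucket is the list of candidates for a 301 redirect or a custom error page, and the “server_error” bucket is what to raise with the hosting provider.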
In this Part 1 article on practical technical SEO we covered indexing techniques, some crawling topics, and the beginning of a website review: the audit. In the Technical SEO Part 2 article we will cover the remaining crawling techniques, such as servers/CDNs, browsers, robots directives, redirects, HTTP status codes and canonicalization.
Today, many companies need immediate results, but the truth is that they can't afford to implement SEO in-house while keeping their focus on core business priorities. If you can't handle these steps yourself or don't have the time to put them in place, Bringlink SEO ensures you get the brand visibility and growth you deserve.
Talk to us, send email to firstname.lastname@example.org.