Directives are instructions that webmasters use to tell crawler bots how to crawl & index their content.
Directives can live on-page or off-page, in resources such as XML Sitemaps, Robots.txt files or snippets of code within a given webpage, such as a meta robots tag.
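As a minimal sketch, a Robots.txt file for a hypothetical site (example.com, with a made-up /admin/ directory and sitemap paths) might combine both kinds of directive:

```
User-agent: *
Disallow: /admin/

Sitemap: https://www.example.com/sitemap.xml
Sitemap: https://www.example.com/image-sitemap.xml
```

The Disallow line blocks compliant crawlers from the backend directory, while the Sitemap lines point them to the XML & Image Sitemaps.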
Why Directives Matter For SEO & Organic Search
Webmasters need to actively monitor & update their directives to ensure that they all say the same thing.
When a bot is crawling a page to index it, it listens to the directives: it accesses the Robots.txt file to learn which directories it is blocked from, as well as the addresses of all XML & Image Sitemaps.
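To illustrate how a compliant bot interprets those rules, here is a short sketch using Python's standard urllib.robotparser; the rules, site and paths are hypothetical examples, not taken from any real site:

```python
# Sketch: how a compliant crawler checks robots.txt before fetching a URL.
# The rules and URLs below are hypothetical, for illustration only.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /admin/
Sitemap: https://www.example.com/sitemap.xml
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A disallowed backend page should not be fetched by compliant bots...
print(parser.can_fetch("*", "https://www.example.com/admin/login"))  # False
# ...while ordinary content remains crawlable.
print(parser.can_fetch("*", "https://www.example.com/blog/post"))    # True
```

In practice a crawler downloads the live /robots.txt file rather than parsing a string, but the allow/deny logic it applies is the same.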
Any page that bots are blocked from in those files should also carry a noindex tag in its header, in order to ensure that nothing is mistakenly indexed.
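For an individual page, that noindex signal is a small piece of code in the page's head section; a minimal sketch:

```
<head>
  <!-- Tells compliant bots not to include this page in their index -->
  <meta name="robots" content="noindex">
</head>
```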
Without access to these files, bots will simply follow links around your site with no established rules about what they may access or index.
That can lead to pages you do not want indexed appearing in search, such as a website's backend login page or a page containing other sensitive information.
Conversely, if your directives are not all properly aligned, content that is meant to be indexed for search may not be, leading to a loss in traffic, conversions & business.