Noindex is an instruction that tells search engine robots to exclude a particular page from search results. The easiest way to add it is with a meta tag in the <head> section of the page. Another option is to send it in the HTTP response header (X-Robots-Tag).
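As a simple illustration, the meta-tag variant placed in the page's <head> typically looks like this:

```html
<!-- Tells all search engine robots not to index this page -->
<meta name="robots" content="noindex">
```

The header-based variant delivers the same instruction in the HTTP response, for example: `X-Robots-Tag: noindex`. This is useful for non-HTML resources such as PDFs, where no <head> section exists.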
Noindex can be directed either at all robots visiting the site or only at specific ones, e.g. Googlebot. Adding a noindex directive to the page code is useful when you do not want a particular subpage to appear in search results at all – internal search result pages are a common example. This also eliminates the problem of duplicate content within the site.
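To target only one robot rather than all of them, the meta tag's name attribute can be set to that robot's user agent token. A sketch for Googlebot:

```html
<!-- Only Googlebot is asked to skip this page; other robots may still index it -->
<meta name="googlebot" content="noindex">
```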
Be aware that if you decide to exclude a page that is already visible in a search engine, it will only disappear from the results after the robot's next visit. This process can be accelerated with a tool such as Google Search Console. Importantly, in this situation you must not also block the page in the robots.txt file. If you do, robots will not be able to reach the page at all, and therefore will never read the noindex instruction you added.
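To make the pitfall concrete, the following robots.txt entry (the `/internal-search/` path is just a hypothetical example) is exactly what to avoid on a page that carries a noindex tag:

```
# Do NOT combine this with a noindex tag on the same page:
# blocked robots never fetch the page, so they never see the noindex instruction.
User-agent: *
Disallow: /internal-search/
```

Leave the page crawlable until it has been dropped from the index; only then is blocking it in robots.txt safe.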