I don't like posting off-topic in another member's thread.
Can someone in the know detail the differences between trackers and indexers or split me off into another thread?
Crawlers aka indexers aka robots: Google (et al.) don't go out looking for your search term(s) across the Internet at query time; they have a pre-built lookup database that lives on Google's servers. Crawlers crawl the entire 'Net in the background and build/update that database.
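A toy sketch of that idea, if it helps (the URLs and pages are made up, and the real thing is vastly more complicated): the crawler visits pages ahead of time and builds an inverted index, and a "search" is just a lookup in that index, never a live trawl of the Web.

```python
# Stand-in for the Web: the crawler would fetch these over HTTP.
pages = {
    "https://example.com/a": "cats and dogs",
    "https://example.com/b": "dogs chase cats",
}

# Build the inverted index: word -> set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.split():
        index.setdefault(word, set()).add(url)

def search(term):
    # Query time: hit the pre-built index, not the live pages.
    return index.get(term, set())

print(search("cats"))  # both example pages
```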
Then there are "cookies" stored in the browser (i.e. on your computer) that keep some small amount of information. It could be, e.g., an authentication token: if you "log in with Facebook", it's the token issued by FB that the browser passes on to SB to let the latter know that the user's FB login is legit.
It could also store the URL of the last page you viewed before coming "here". These let the server (the one that can access them: the devil is in the details here too) track your path through the website -- or across websites. Hence "trackers".
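The round trip looks roughly like this (the cookie name and value are made up): the server hands the browser a Set-Cookie header once, and the browser volunteers it back on every later request, which is all the "recognition" amounts to.

```python
from http.cookies import SimpleCookie

# 1. Server response includes a Set-Cookie header, e.g.:
#    Set-Cookie: session_token=abc123; Secure; HttpOnly

# 2. On every later request to that server, the browser sends:
#    Cookie: session_token=abc123

# 3. The server parses that header to recognize the returning user.
jar = SimpleCookie()
jar.load("session_token=abc123")
print(jar["session_token"].value)  # abc123
```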
I actually allow all Google Analytics cookies (via the EFF's Privacy Badger) because Google Analytics is a useful tool for us "IT" people. It tells you how many visitors you have (capacity planning), which pages are most popular (where to put more ads if you depend on ad revenue), etc. The path through the site is useful for optimizing layout, providing shortcuts and so on.
Then there are others, Facebook being the Worst Guy du jour, who want to know where else you went and what you saw there, so they can use "AI" to feed you more of the same, reinforcing your biases and all that bad stuff.
Now that GDPR is in force and makes cookie-based tracking legally questionable, they're back to the "pixels" idea: you put a single-pixel image on every page of your site, where nobody will notice. The image is served from a tracking server, which gets all the tracking info it wants when your browser fetches the image. No cookies involved, no tracker-blockers interfering. (But if you're one of "the IT" people, you can set up a Pi-hole for that. Ads too.)
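For the curious, a pixel endpoint really is that dumb -- a sketch (everything here is illustrative, not any real tracker's code): serve a 1x1 GIF and log whatever the browser volunteers with the request.

```python
# Minimal 1x1 transparent GIF payload, byte for byte.
PIXEL_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00"
             b"\xff\xff\xff!\xf9\x04\x01\x00\x00\x00\x00,"
             b"\x00\x00\x00\x00\x01\x00\x01\x00\x00\x02\x02D\x01\x00;")

def handle_pixel(headers, client_ip):
    # The "tracking" is nothing but reading the request metadata:
    # the Referer header says which page embedded the pixel.
    log_entry = {
        "page": headers.get("Referer"),
        "agent": headers.get("User-Agent"),
        "ip": client_ip,
    }
    return PIXEL_GIF, log_entry

body, entry = handle_pixel({"Referer": "https://example.com/article"},
                           "203.0.113.7")
print(entry["page"])  # https://example.com/article
```

A Pi-hole defeats this by answering DNS for the tracking server's domain with a dead address, so the image fetch never happens.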