I often have the opposite experience when looking for technical documentation about programming libraries. For example, I'll be dealing with a particular bug and will google the library name plus some descriptive terms related to the bug, and I get back general information about the library. In those cases, it seems Google often ignores the supplemental terms and focuses only on the library name, as if I were looking for general information.
What's worse is that the top results are always blog-spam companies that just seem to be copying the documentation pages of whatever language or library I'm looking at.
One of my big beefs with ML/AI is that these tools can be used to wrap bad ideas in what I'll call "machine legitimacy." Which is another way of saying that there are many cases where these models are built on a bunch of unrealistic assumptions, or trained on data that doesn't actually generalize to the applied situation, but will still spit out a value. That value becomes the truth because it came from some automated process. People can't critically interrogate it because the bad assumptions are hidden behind automation.