When the first five results are the same sentences worded slightly differently, like a freshman essay, it's not a good sign that I'll find a real answer.
The most annoying thing is that almost all tech information has fallen victim to this shit.
We now have to go back to pre-2000s methods of searching: first identifying sites as reliable, and then relying on those sites' own search engines to not suck.
In some cases, this is workable.
In cases where the sites have integrated Google searches, this is even more useless than using Google itself.
Someone should invent a search engine that allows for curated sources. For most things, I’d love to search among the top few thousand sites, and exclude everything else.
Yahoo started out like this. They had humans curating the sites that they searched, and it was pretty good until the web got too big for that to be efficient.
I’ve got exactly that running on my home network for tech stuff.
I've thought of opening it up, and I've even been thinking of building a group of people trustworthy enough to do the curation of sites, but I generally CBA interacting with people that much. I used to be highly active on forums like MadOnion/Futuremark, [H], etc., but those days are long behind me; these days I post a bit on Reddit, talk to my wife, and that's about it.
If things keep going to shit the way they have been, I may open it up anyway, mostly because maintaining and re-curating sites is a drag on its own.
The number of sites that were once great tech spots, then got gulped up by the same ol' same ol' big tech sites and turned into generic shit... it's not that they've become uncountable, it's that it's almost every single one of them.
The best still seems to be simply posting questions on the few OG computer/tech forums that managed to survive.
For hardware and OS, places like ServeTheHome, [H], Anandtech, Techpowerup, etc.
For programming information, it’s so murky I can’t even suggest any specific sites anymore, not even Stack.
For phone/tablet info, even XDA is getting murky, mostly because a lot of users there only watch the forum for their specific device, so if yours isn't one used by a lot of people, info gets super limited.
It’s gotten bad out there.
I haven't used Kagi, but I believe you can do exactly that with it. You do have to pay for the service, but that's probably a good thing.
This is a link to the features page. It allows you to permanently ban or boost results from specific domains, though you may need to put in some manual effort to make that happen; I don't really know if there are community-curated lists or anything for that.
But you can also see if the result is popular, and they seem to work pretty hard to make their platform worth the spend. Everything I’ve heard from people who use it is good.
https://blog.kagi.com/kagi-features
No need to invent.
That's how search engines originally worked, including Google, Yahoo, and all the other big ones.
You didn’t get indexed by default.
You either got indexed by being submitted, or by being referenced often by one or more well-represented sites.
It's only later in the game that they started crawling everything.
While I was typing up and fleshing out an idea on curated source lists for search engines, your post beat me to the punch.
As others have said, a curated internet is very old-timey and kind of limited, but I think what I fleshed out could work well with the modern internet and be interesting. A major search engine might actually take up the task if the user demand is there.
The quality of Google's search results has been trending downward for years, and maybe this would boost the quality of results again (albeit with their ads still stuck in the results).
Well, maybe Google could add a curated feature (not curated by them, that would suck), whereby users can publish lists of trusted sites to search, and a user can optionally select a curated list from someone they trust, and Google will only search sites on that list.
Possibly allow multiplexing of lists.
So say I'm looking into computer security: I can use a curated list of sites “Steve Gibson” trusts, plus a list of trustworthy sources “Bleeping Computer” uses, and anything I search for will use both lists as the base for the search.
Maybe it isn't something people even publish to the search engine; maybe they publish a file on their own site that people can point the search engine to. In Steve Gibson's case, that might be the fictitious file grc.com/search.sources, or a new file format like .cse (curated search engine), e.g. grc.com/index.cse.
Maybe also allow individual lists to multiplex other lists. Something like this, multiplexing two lists plus some additional sites, subdomains, directories, and * for all subdomains:
multiplex: grc.com/search.cse
multiplex: bleepingcomputer.com/search.sources
arstechnica.com
*.ycombinator.com
stackoverflow.com
security.samesite.com
linux.samesite.com
differentsite.com/security
differentsite.com/linux
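To make that concrete, here's a rough sketch of how a tool (or a search engine) might expand a list like the one above into plain site: filters. To be clear, this is all hypothetical: the .cse format, the multiplex: directive, and these helper functions are just names for illustration, not anything Google or Kagi actually supports.

# Hypothetical sketch in Python: expand a .cse list (following multiplex:
# lines recursively) into a flat set of site patterns, then build a query.
import urllib.request

def fetch_lines(url):
    # Fetch a .cse / .sources file and return its non-empty lines.
    if not url.startswith(("http://", "https://")):
        url = "https://" + url
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode("utf-8", "replace")
    return [line.strip() for line in text.splitlines() if line.strip()]

def expand_cse(url, seen=None):
    # Recursively flatten multiplex: references; 'seen' guards against lists
    # that multiplex each other in a loop.
    seen = seen if seen is not None else set()
    if url in seen:
        return set()
    seen.add(url)
    sites = set()
    for line in fetch_lines(url):
        if line.lower().startswith("multiplex:"):
            sites |= expand_cse(line.split(":", 1)[1].strip(), seen)
        else:
            sites.add(line)  # plain domain, *.wildcard, or domain/directory
    return sites

def build_query(terms, sites):
    # Turn the expanded list into an ordinary OR'd "site:" query string.
    filters = " OR ".join("site:" + s.lstrip("*.") for s in sorted(sites))
    return terms + " (" + filters + ")"

# e.g. build_query("dns over https", expand_cse("grc.com/index.cse"))
# (grc.com/index.cse is the fictitious file from the post above)

The lstrip("*.") bit is just so a wildcard entry like *.ycombinator.com collapses into a plain site:ycombinator.com filter; a real implementation would obviously need something smarter for per-directory entries and for ranking.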
Honestly it sounds like a horrible idea, but in a world filled with AI-generated content, it may become a necessity.
Anyways, I officially put the above idea into the public domain. Anyone can use or modify it; feel free, Google/Bing.
EDIT: It was posting all fake addresses on the same line, so trying to force them onto separate lines.
Apparently in the time I spent thinking this through, typing it up, changing things, etc., someone else posted a curation idea, so maybe it's not such a bad idea after all. An AI-content internet is going to suck.
To expand on why it sounds like a horrible idea: mainly, if people rely on it too much, it creates a bubble and limits the ability to discover new things or ideas outside of that bubble. But if everything outside that bubble just sucks or is inaccurate, meh, what are you going to do? Especially if you're researching something you're working on, whether it's a paper, a project, or maybe something with dire financial or safety consequences if you get it wrong, and you need the information to be reliable.
Google search with a site filter (e.g. linux site:lemmy.ml) will almost always be better than the site's own search function.
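And you can stretch that a bit by chaining several site: filters with OR in a single query. A tiny sketch along those lines, with the domains and the query just as placeholders:

# Fan one query out over a few trusted sites using the standard "site:"
# operator combined with OR. The domains here are only examples.
from urllib.parse import quote_plus

trusted = ["servethehome.com", "techpowerup.com", "anandtech.com"]
query = "ecc ram compatibility (" + " OR ".join("site:" + d for d in trusted) + ")"
print("https://www.google.com/search?q=" + quote_plus(query))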