• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: July 8th, 2023


  • Personal experience - I used some late version of Plasma 5.2x on my desktop and now Plasma 6.x of course (always Wayland, generally the latest stable version available), and Gnome (always Wayland, always the latest stable version) on my work notebook. I've never experienced any "serious" bug on Gnome, but I have experienced multiple on Plasma over that time period. The most "serious" bug I've had on Gnome was that the cursor was flipped upside down for a while until they fixed it (some time ago). The most serious bugs in KDE, on the other hand, were multiple plasmashell crashes since Plasma 6.x (meaning all your open apps got closed - I'd say that's pretty serious for a bug). Another, smaller bug very recently was that virtual desktops in KDE Plasma were named wrong, and when I renamed them the names didn't get saved, so they reverted to the wrong ones (e.g. "Desktop 1", "Desktop 3", "Desktop 4", "Desktop 4"). But it seems they fixed that with the latest update as well.

    Which is also why I'd like to keep it that way: Gnome for work, and KDE where it's not super important if plasmashell crashes or does something weird every once in a while. I think KDE is more prone to bugs simply because it's more complex than Gnome. Gnome is quite minimalistic and doesn't offer lots of features, while KDE is a powerhouse desktop with tons of features, probably dwarfing every other desktop environment, at least in the number of options that have a GUI to set them. Also, Gnome doesn't (yet) support advanced features like HDR, while Plasma does. All that extra complexity means Plasma is bound to be more prone to bugs.

    So I still view KDE Plasma as "slightly more buggy" than Gnome, especially for dot-zero releases. But the KDE devs are improving it all the time, so it might become more stable soon. Still, for personal use, KDE Plasma is "stable enough" despite the bugs mentioned above, some of which have since been fixed - for example, I haven't had any more plasmashell crashes since they said they fixed the causes. Which is why I'm using KDE Plasma 6.x on my personal machines. I like it more than Gnome, but when I want "100%" reliability from a DE, I still use Gnome. The main thing I dislike about Gnome isn't actually its UI or design philosophy, or even the limited GUI-based options it offers, but rather its attitude towards standards, compliance and interoperability. The Gnome devs often do their own thing and don't play that nicely with others.


  • It depends. It's viable if you just need a phone with several open source (non-Android) applications and are fine with that. But if you need Android app compatibility it's probably going to be harder or more inconvenient, though I haven't checked the current status recently. And then there's this evil thing called Google Play Integrity (essentially DRM restricting which apps can run on which OS), which is a problem even for non-proprietary Android builds, so you probably won't have any chance if you depend on such an app (thankfully they're rare, but as we all know, stupid ideas tend to become annoyingly popular).

    The main problem, as usual, is that Android and iOS have become such big and popular "platforms" for mobile apps that establishing a "third" platform for app developers is basically impossible (remember what happened to Windows Phone: it was late to the market and failed spectacularly to catch up. Of course, in this case it's open source, so it can grow regardless of user numbers, but it's still hard to catch up when lots of great apps were already developed specifically for Android). So you can only hope that Android app compatibility matures enough to run close to 100% of apps, so that you can also run almost all Android apps on your mainline Linux mobile OS. Then you're not "limited" anymore (at least if you consider not being able to run Android apps "limited", which most people probably do).

    So I think it’s less about the hardware and OS/UI (I think they work fine these days) and more about the available apps.

    [My main daily-driver phone runs GrapheneOS (Android), and I have a PinePhone with Linux for playing around on Wi-Fi at home only]


  • kyub@discuss.tchncs.de to Linux@lemmy.ml · *Permanently Deleted* · edited 22 days ago

    Use Matrix or any good messenger like Signal or Threema for daily communication with friends.

    If you want to see a good table of messenger recommendations, see https://www.messenger-matrix.de/messenger-matrix-en.html

    E-mail is not a suitable replacement because it lacks end-to-end encryption (unless you and your friends use PGP or S/MIME for that, but since that's rare and slightly too complicated for the average user, I'll just assume you don't). While mails are usually encrypted in transit, they sit in plain text on the destination servers. Depending on which e-mail host you or your friends use, that means the whole content of your e-mail might be scanned and analyzed automatically - especially if you or your friends use privacy-disrespecting mail hosts like Gmail, Outlook or any other big commercial provider. In that case your communication via unencrypted mail to or from that person isn't private.
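
    If you do want to try PGP, the basic flow is simpler than its reputation suggests. A minimal sketch with GnuPG, assuming you already have your own key pair and have imported your friend's public key (the address and file name here are just placeholders):

    ```
    # Encrypt and sign a text file for one recipient (OpenPGP, ASCII-armored output)
    gpg --encrypt --sign --armor --recipient friend@example.com message.txt
    # This produces message.txt.asc; only the recipient's private key can decrypt it.

    # The recipient decrypts (and verifies the signature) with:
    gpg --decrypt message.txt.asc
    ```

    The hard part in practice isn't the commands, it's getting your friends to generate keys and exchange them safely, which is exactly why I assume most people don't do it.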


  • IMHO it's worth getting into games because they're a mainstream form of entertainment these days (just like movies), and there are incredibly well-made games in all sorts of genres, so everyone can find something. It's also a fun hobby, at least as long as you play with friends, singleplayer, or a multiplayer game with a non-toxic community. Stay away from popular e-sports titles; they're usually full of toxic teenagers.

    If you like puzzle games, there are some great ones. Portal 1+2 and The Talos Principle 1+2, for example, are probably the most polished ones out there; these are AAA games made by big studios, which don't usually make puzzle games since the genre is somewhat niche, but thankfully there are exceptions. Portal 2 is among the highest-rated games of all time on Steam (deservedly, I think).

    There are also tons of great indie puzzle games out there, of course.

    Somewhat related to puzzle games are "point and click" adventure games. That genre was very popular in the 80s and 90s; now it's also rather niche, but great ones are still being developed all the time. Adventure games are (also) about storytelling and solving many puzzles to advance in the game. You usually find lots of items, have to combine them in various ways, and interact with the game world and its characters to solve puzzles and advance the story. That's maybe the key difference from more focused puzzle games, where it's more about the puzzles and less about item combinations and character dialogue. But adventures can nonetheless contain quite challenging puzzles.

    Genres are hard to distinguish these days because so many games are a blend of different genres. Anyway, you probably want to stay away from games tagged with “action” or “e-sports” and primarily look for “adventure”, “puzzle” or “casual” tags.




  • Don’t use Onedrive, Dropbox or Google Drive (all privacy nightmares). Instead:

    • Self-host https://nextcloud.com/ (this is the gold standard for self-hosting secure, private cloud storage; you just need your own server with enough disk space. Open source. See the minimal sketch after this list for how little it takes to try it out)
    • P2P and/or self-host https://syncthing.net/ (this will automatically sync files in shared folders between several devices. Best if you have one device which is online all the time. Will use the space on your own devices. Open source)
    • Storage on a trustworthy 3rd party host: https://proton.me/drive (this is the most similar to Onedrive/etc. where you sync your stuff to their servers, so you don’t need to host anything, but contrary to anything from Google/MS/Dropbox, this is at least a reputable and secure/private host which doesn’t abuse or sell your data. Data is encrypted by default. Also open source)
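
    As mentioned above, a minimal sketch of what trying out a self-hosted Nextcloud can look like, assuming you already have a server with Docker installed (the port, path and container name here are just examples, not requirements):

    ```
    # Run the official Nextcloud image with its data stored on the host
    docker run -d \
      --name nextcloud \
      -p 8080:80 \
      -v /srv/nextcloud:/var/www/html \
      --restart unless-stopped \
      nextcloud

    # Then open http://<your-server>:8080 and finish the setup wizard.
    ```

    For anything beyond playing around you'd want a proper database, regular backups and a reverse proxy with TLS in front of it, but the point is that the entry barrier is low.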

    Furthermore, accessing Onedrive from Linux might be painfully inconvenient because MS doesn't provide an official client for it on Linux. There are 3rd-party clients, but I'm not sure how good they are, and MS could change their API or even block unofficial clients at any point, rendering your unofficial client useless, at least for a while.


  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Some basic questions about Linux · edited 1 month ago

    I’ll do a (simplified) Windows analogy, if you’re already familiar with Windows.

    Microsoft Windows is closed-source/proprietary, which means only Microsoft has the source code for it, and only Microsoft is legally allowed to create or distribute copies of Windows. “Windows 11” for example is a “distribution” of Windows containing the “Windows NT kernel” (core of the OS) alongside other important software to make the OS usable, like a boot loader, service layer, graphical interface, desktop environment, and lots of included “system” applications like a file explorer, a web browser, apps to adjust settings, apps to display menus and task bars, and so on.

    "Linux" by itself is just the kernel, the core of the OS, which on its own is not a "usable" operating system yet - just like holding a CPU in your hand doesn't let you use it yet. More components are needed for that. Since Linux is open source and freely licensed, anyone (even you) can go ahead and create an operating system built around the Linux kernel. If you do that, the result is called a distribution or "distro" of Linux. Since there's not just one company allowed to do that, many distributions exist; each builds its own operating system on top of the Linux kernel. Even though hundreds of distros exist, only a handful of them are actually popular, stable, secure and recommended for general use. They include similar, but sometimes different, software in the distribution. Like the Linux kernel, most of that software is open source, so it can also be modified or extended.

    Since “Linux distribution” is rather long to write, people often just write “Linux” but mean the whole distribution, not just the kernel. These are just common inaccuracies in communication, but what the person meant should be obvious from the context.

    Common and recommendable Linux distributions (= full, usable operating systems) include: Linux Mint, Ubuntu, Fedora, OpenSuSE, Arch, Debian. These are full operating systems and they all include the Linux kernel at their core. Of course, the similarities go further than that: most distros are similar enough that if you've learned one, you can use any other with little extra to learn. However, some distros are deliberately a bit different or tailored to more specific users or use cases. For example, Arch targets more experienced Linux users because it's a very minimalistic distro; it expects you to know which packages you want to install and pre-installs almost nothing. You can think of it like "Windows Server Core", where you just boot into a minimalistic terminal by default with no usable GUI yet, but you can of course install a desktop environment and everything else you need and turn it into a full-featured desktop. The distro just doesn't want to preinstall anything you might later not like, which is why it gives you the choice - but that makes it a minimalistic distro and harder for beginners to use. Other distros like Mint are much more similar to the client editions of MS Windows in that they preinstall everything a user needs for a desktop OS and more, so that you can boot into and use the desktop as quickly and easily as possible.

    And then there are even more special-purpose distributions like Kali Linux, which includes things like penetration testing tools (i.e. "hacker tools"). That makes it a distribution for IT security people, so they can boot into it and have access to most of the tools they need right away without installing much else (it's also good on a bootable USB stick). But usually, in general threads like this one, people don't talk about special-purpose distros, but about generalist distros which you can install and use as a regular desktop OS.

    Desktop environments also exist on Windows, but there's basically only one, made by Microsoft. In the Linux world there are several to choose from. The most common ones are: KDE Plasma, Gnome, Cinnamon, XFCE. These desktop environments contain window managers or compositors, task bars or panels, menus, various tools like file managers, process viewers and text editors, and various background programs. All of this is needed for the user to have what is commonly known as "a desktop environment", because without one you'd basically be staring at a screen containing at most a cursor and a wallpaper, with no way to interact with anything. Of course, these can look and feel different from each other (just like Windows looks and feels different from macOS), and they have different features, strengths and weaknesses, but their goal is always the same. And as usual in the open source world, there's not just one project but multiple, and out of those, a couple are popular, viable and stable enough that they are included in most Linux distributions. That's why most distros also give the user the choice of a specific variant of the distribution with a specific desktop preinstalled. For example, Ubuntu also has Kubuntu (= Ubuntu with preinstalled KDE Plasma) or Xubuntu (= Ubuntu with preinstalled XFCE). These can have various names, but in the end it's just the base distribution ("Ubuntu") with a different preinstalled "face", so to speak (and you can change those faces or desktops from within the same distro, of course; see the sketch below). Most other things are exactly the same between those distribution variants.
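
    For example, a minimal sketch of switching "faces" on an existing Ubuntu install, assuming you want to try KDE Plasma next to the default desktop (the package name is the one Ubuntu uses; other distros have their own equivalents):

    ```
    # Install the Kubuntu desktop variant on top of a regular Ubuntu installation,
    # then pick the Plasma session from the menu on the login screen.
    sudo apt update
    sudo apt install kubuntu-desktop
    ```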

    As a new user, you don't need to learn about everything. Just pick an easy-to-use generalist desktop distro like Linux Mint and use the desktop environment or variant they provide or recommend by default. You can start experimenting with more choices later on if you want, but you don't have to. If you have something you're comfortable using, you can just stick with that.


  • I get that it's a nice daydream to think of open source projects as existing in some kind of independent, ethereal vacuum just because the code is out there and accessible from any place on Earth. But every software project is (mostly?) dependent on the jurisdiction of one country - in this case the US - and so its laws about sanctions and so on apply. And yes, this means that unless conflicts/wars between nations happen to cease, we will eventually have completely separated blocs of politics/culture/military and also IT. Globalization is over. China will have their own stuff, Russia will have their own stuff, and the US+EU will have their own stuff. And none of those countries should keep using high-tech products made by the others, because they could be sabotaged in ways that are hard to detect, so it's best not to use them at all and just cook your own stuff. It's unfortunate, but bound to happen in the current state of the political world.


    • Pomodoro timers (hit a keybinding and a 25-minute timer starts; within that time, do something productive. After that you get a 5-minute "break", then probably start the next timer. You can adjust the timings of course. A small script sketch follows after this list)
    • Treat the thing you'd rather be doing as the reward you get after finishing the task first (kind of a gamification mechanism, maybe)
    • Develop a habit of doing something productive (from your backlog) every day, unless you're sick or similar
    • If the task seems so big or hard that you don't even start, split it into parts. You rarely have to do everything at once. Splitting it into parts also keeps you from over-exerting yourself, so you'll have more time afterwards for the things you'd rather be doing
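
    As mentioned above, a Pomodoro timer doesn't need any special software. A minimal sketch you could bind to a key, assuming a desktop with notify-send (libnotify) available; the durations are just the usual defaults:

    ```
    #!/bin/sh
    # pomodoro.sh [work_minutes] [break_minutes]
    WORK=${1:-25}
    BREAK=${2:-5}
    notify-send "Pomodoro" "Focus for ${WORK} minutes."
    sleep $((WORK * 60))
    notify-send "Pomodoro" "Done. Take a ${BREAK}-minute break."
    sleep $((BREAK * 60))
    notify-send "Pomodoro" "Break over. Start the next round when ready."
    ```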

  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Linux and your family · edited 2 months ago

    Experience with relatives who had no prior experience with Windows or Linux: installing Linux for them was great and painless, and it also makes troubleshooting easier for me. No problems here. Mostly using Linux Mint for those purposes; it's a great distro for non-techy people.

    Experience with relatives with prior Windows experience (but no Linux experience): a mixed bag. Some use Linux happily now (thankfully), some returned to Windows because they couldn't change their habits or had weird incompatibility issues with niche hardware that they didn't want to solve in a different way. I've kind of stopped giving support to those, since I don't want to do Windows support in my free time. I sometimes have to do it at work; that's more than enough Windows contact for me. I also refuse to give buying advice on any products by Microsoft, Apple, Meta, Amazon or Google, with only very few exceptions (e.g. Pixel phones, because they're very secure and, with GrapheneOS installed, the best general mobile phone option). It's a bit of an ethical dilemma, because I'd like to help people but also don't want to directly or indirectly support those companies. I always offer them help if they use Linux or the things I recommend.


  • Noroi: The Curse (2005, Japan): supernatural, first-person documentary-style POV, but with higher image quality than, say, The Blair Witch Project. No jump scares, just very creepy and unsettling. Slow burn, but good pacing IMHO. No weaknesses IMHO, hence it's at the top of my list. Just a very unsettling and disturbing, almost real-feeling horror movie.

    Also good:

    • A Tale of Two Sisters (2003, South Korea): less horror, more artistic, intelligent and original. Great story
    • Shutter (2004, Thailand): my favorite jump-scare horror with cool effects
    • Incantation (2022, Taiwan): great supernatural slow-burn horror with a cool twist
    • Hereditary (2018, USA): great supernatural slow-burn horror, original as well
    • Sinister (2012, USA/UK/CAN): great supernatural horror
    • Event Horizon (1997, USA/UK/CAN): great sci-fi horror, very unsettling
    • REC (2007, Spain): one of the best zombie style movies and also one of the most horror-like ones
    • It Follows (2014, USA): kind of a stupid plot but it works. It’s original, well executed and unsettling (supernatural)
    • Smile (2022, USA): an even more stupid plot, but also well executed. The ending is bad. But it still terrified me so it works at its core, and that’s all that horror films need to do (supernatural)
    • As Above, So Below (2014, USA/France): the weakest one on this list but it’s very original as well, I like it because of that

  • kyub@discuss.tchncs.de to Memes@lemmy.ml · Already feels like this sometimes · 2 months ago

    Winter is on its way out due to climate change. By around the year 2100, some estimates say there will only be 3 seasons left, with no winter at all. And summer will be much longer and much hotter. So the 3 seasons will basically be spring, then a summer lasting two seasons, then fall. That's it.

    But you can already see the disappearance of winter today because there’s much less snow and it’s much warmer than like 30 years ago. (Speaking for Germany)





  • kyub@discuss.tchncs.de to Linux@lemmy.ml · Is Linux As Good As We Think It Is? · 3 months ago

    Windows will continue to get more and more user-hostile as time goes on. Microsoft wants everyone to have a subscription to their cloud services so they can be in total control of what they deliver to the user and how the user uses their services/apps - and of course they'll also be able to raise prices regularly once users are dependent enough ("got all my work-related data there, can't just leave").

    The next big step after the whole M365 and Azure push will be that businesses can only deploy their Windows clients via MS Intune, which means MS deploys your organization's Windows clients, not your organization. So they keep shifting more and more control away from you and into MS' hands.

    Privacy has been an obvious issue at least since Nadella became CEO, but unfortunately the privacy-conscious people have kind of lost that war, because the common user (private AND business sector) doesn't care at all. So we'll have to wait and see how those things turn out. People will start caring once they're billed more because of their openly known behavior (driving, health, eating/drinking, psychology, …), or once they're threatened legally more often (e.g. your vehicle automatically reports by itself when you've driven too fast, or some AI concludes from your gathered data that you're likely to cause some kind of problem), or once they're rejected at or before job interviews because of leaked health data or some (maybe wrong) AI-created prognosis of their health. So I think there will be a point when the common user starts caring; we just haven't reached it yet, because while current data collection and profile building is problematic as the stepping stone to more dystopian follow-ups, it alone is still too abstract an issue for most people to care about. The media is also partly to blame here when they review new devices, just go "great camera and display, MUST BUY", and never mention the absurd amount of telemetry data the device sends home.

    MS is also partnering with Palantir and OpenAI, which will probably give them even more opportunities to automatically surveil every single one of their business and private-sector users. I think M365 already gives business owners decent analytics tools to monitor what their employees are doing, how much time they spend in each application, how "efficient" they are, things like that. Plus there's the whole person and object recognition thing using "smart" cameras and some Azure service that analyzes the video material constantly, where employees (mostly workers in that case) are surveilled all the time and an automatic alert is sent if anything abnormal happens. Probably a lot of businesses will love that, and no one cares enough about the common worker's rights. It can be sold as a security plus, so it will be sold.

    So I think MS is heading heavily in the direction of employee surveillance, since they're well integrated into the business world anyway (especially small and medium businesses). And with Windows in particular, I think they'll move everything slowly into the cloud; maybe in 10-15 years you won't have a "personal" computer anymore. You'll be using Microsoft's hardware and software directly from Microsoft's servers, and they'll gain full, unlimited, 100% surveillance and control over every little detail you do on your computer, because once you hand away that control, they can do literally anything behind your back and never tell you about it. Most of the surveillance going on all the time already is heavily shrouded in secrecy, and as long as that's the case, no justice system in the world will be able to save you from it, because they'd first need concrete evidence.

    Guess why western law enforcement and secret services hunted Snowden and Assange so hard? Because they shone some light into what is otherwise a massive, constant cover-up that is also probably highly illegal in most countries. So it needs to be kept secret. The MS (and Apple, …) route stands for total dependence and total loss of control. They just have to move slowly enough for the common user not to notice. Boil the frog slowly. Make sure businesses can adapt. Make sure commercial software vendors can adapt. Then slowly direct the train into cloud-only territory, where MS rules and can log everything you do on the computer.

    Linux, on the other hand, stands for independence. It means you can pick and choose the components you want, run them wherever and however you want, build your own cloud, and so on. You can build your own distro or find one that fits your use case best. You're in a lot of control as the user or administrator, and that will not change, given the nature of open source / free software. If a project turns to sh!t, you're not forced to stick with it. You can fork it, develop an alternative, wait until someone else does, or just write a patch that fixes the problematic behavior. This alone makes open source / free software inherently better than closed source, where the users have no control over the project and always have to either use it as it is or stop using it altogether. There's no middle ground, no fixes possible, no alternatives that can be made from the same code base, because the code base is the developer's secret. Also, open source software can be audited at will, all the time. That alone makes it much more trustworthy. On the basis of trustworthiness and security alone, you should only use open source software.

    Linux on its own is "just" the kernel, but it's a very good kernel powering a hugely diverse array of systems out there, from embedded devices to supercomputers. I think the Linux kernel can't be beaten and will become (or already is) objectively the best operating system kernel there is. Now, as a desktop user, you don't care that much about the kernel; you just expect it to work in the background, and it does. What you care about more is UI/UX, consistency and application/game compatibility. The Linux desktop ecosystem is still lacking in that regard, always behind the super polished, user-friendly, coherent UIs coming especially from Apple (maybe also a little bit from Microsoft, but coherent and beautiful UIs aren't Microsoft's strong point either; I think that crown goes to Apple).

    That said, Apple is very much like Microsoft in that they have a fully locked-down ecosystem, so it's similar to MS - maybe slightly less bad, but it will probably go in the same direction MS is going, just more slowly and with different details. Apple's products also appeal to a different kind of audience and businesses than MS' products do. Apple is kind of smart in their marketing and general behavior in that they always manage to fly under the radar and dodge most of the shitstorms. They also violate the privacy of their users, but they do it slightly less than MS or Google, so they're less of a target, and they even use that to claim they're the privacy guys (in comparison) - but they aren't. You still shouldn't use Apple products/services. "Less bad than utterly terrible" doesn't equal "good". There's a lot of room in between.

    Still, back to Linux. It's also obviously a matter of code/project quality and resources. Big projects like the Linux kernel itself, the major desktop environments, or super important components like systemd or Mesa are well funded, have quality developers behind them and produce high-quality output. Then you also have a lot of applications and components where single community developers, not funded at all, are hacking away in their free time, often delivering something usable but maybe less polished, less user-friendly, less good-looking or slightly more annoying to use - but overall usable. Those applications/projects could use some help, especially if they matter a lot on the desktop because there's little to no alternative available. On the server side, Linux is well established, and software for that scenario is plentiful and powerful. Compared to the desktop, it's no wonder Linux is so successful on servers.

    Yes, having corporations fund developers and in turn open source projects is important, and the more that do it, the more successful those projects become. It's no wonder that gaming, for example, took off so hugely after Valve poured resources and developers into every component related to it. Without that big push, it would have happened very slowly, if at all. So even the biggest corpo haters have to acknowledge that in capitalism, things can move very fast if enough money is thrown at the problem, and very slowly if it isn't. But the great thing about the Linux ecosystem is that almost everything is open source, so when you fund open source projects, you accelerate their growth and quality - but those projects still can't screw you over as a user, because once they do, they can be forked and fixed. Proprietary closed-source software can always screw over the user, no one can prevent that, and it has a tendency to do just that. In the open source world, there are very few black sheep with anti-user features, invasive telemetry, things like that. In the corporate software world, it's often the other way around.

    So by using Linux and (mostly) open source products, you as the user/admin remain in control, and it's rare that you get screwed over. If you use proprietary software from big tech (it doesn't even matter from which country), you lose control over your computing, you're highly likely to get screwed over in various ways (with much more to come in the future), and you're also trusting those companies by running their software even though they won't show the world what they put in it.



  • kyub@discuss.tchncs.de to Linux@lemmy.ml · What is the /opt directory? · edited 9 months ago

    Let’s say you want to compile and install a program for yourself from its source code form. There’s generally a lot of choice here:

    You could (theoretically) use / as its installation prefix, meaning its binaries would then probably go underneath /bin, its libraries underneath /lib, its asset files underneath /share, and so on. But that would be terrible, because it would go against all conventions. Conventions (the FHS etc.) state that the more "important" a program is, the closer it should sit to the root of the filesystem ("/"). Meaning, /bin is reserved for core system utilities, not graphical end-user applications.

    You could also use /usr as the installation prefix, in which case it would go into /usr/bin, /usr/lib, /usr/share, etc. But that's also a terrible idea, because your package manager (or rather the package maintainers of your distribution's packages) uses that as its installation prefix. Everything underneath /usr (except /usr/local) is under the "administration" of your distro's packages and package manager, so you should never put other stuff there.

    /usr/local is the exception - it's where it's safe to put other stuff. Then there's also /opt; both are similar. Underneath /usr/local, a program is traditionally split up based on file type - binaries go into /usr/local/bin, etc., everything's split up. As long as you made a package out of the installation, your package manager knows which files belong to the program, so that's not a big deal. It would be a big deal if you installed it without a package manager, though - then you'd probably be unable to find all of the installed files when you want to remove them. /opt is different in that regard: here, everything lives underneath /opt/<program>/, so all files belonging to a program can easily be found. As a downside, you always have to add that /opt/<program>/ directory to your $PATH if you want to run the program's executable directly from the command line (see the sketch below). So /opt behaves similarly to C:\Program Files\ on Windows, while the other locations are meant to be more Unix-style and split up each program's files. But everything in the filesystem is a convention, not a hard and fast rule - you could always change everything, it's just not recommended.
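
    A minimal sketch of what that $PATH addition looks like, assuming a program installed under a made-up /opt/someapp/ directory (the name is just a placeholder):

    ```
    # Make the program's executables reachable from the command line
    # (use /opt/someapp/bin instead if the program keeps its binaries in a bin/ subdirectory).
    # Put this in ~/.profile or ~/.bashrc to make it permanent.
    export PATH="$PATH:/opt/someapp"

    # Now you can run it by name instead of typing the full path:
    someapp --version
    ```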

    Another option altogether is to install it on a per-user basis into your $HOME somewhere, probably using ~/.local/ as the installation prefix. Then you'd have binaries in ~/.local/bin/ (which is also where I place any self-written scripts and small standalone executables), etc. Using a hidden directory like .local also means you don't visually clutter your home directory so much. Also, ~/.local/share, ~/.local/state and so on are already defined by the XDG FreeDesktop standards anyway, so ~/.local is a great choice for installing stuff for your user only (a small sketch follows below).
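
    As mentioned, a minimal sketch of a per-user install for a typical autotools-based program, assuming you're building it from source (the PATH line is only needed if ~/.local/bin isn't already on your PATH):

    ```
    # Configure, build and install into your home directory only - no root needed.
    ./configure --prefix="$HOME/.local"
    make
    make install

    # Make sure the per-user bin directory is on your PATH:
    export PATH="$PATH:$HOME/.local/bin"
    ```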

    Hope that helps clear up some confusion. It's still confusing overall, because the FHS is a historically grown standard and the Unix filesystem tree isn't really 100% rational or well thought out. Modern Linux distributions and packaging strategies do mitigate some of its problems and try to make things more consistent (e.g. by symlinking /bin to /usr/bin and so on), but several issues remain. And then you have 3rd-party applications installed via standalone scripts doing whatever they want anyway. It's a bit messy, but if you follow some basic conventions and sane advice, it's only slightly messy. Always try to find and prefer packages built for your distribution when installing new software, or distro-independent packages like Flatpaks. Only as a last resort should you run "installer scripts" which do random things without your package manager knowing about anything they install - such installer scripts are the usual reason why things become messy or even break. And if you build software yourself, always try to create a package out of it for your distribution and install that package with your package manager, so that it knows about the files and you can easily remove or update them later.
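
    If you want that last step to be easy, there are helpers for it. A hedged sketch for Debian/Ubuntu-based distros, assuming the program uses the usual configure/make flow: the checkinstall tool wraps "make install" into a simple .deb so the package manager tracks the files (other distro families have their own equivalents).

    ```
    # Build as usual, then let checkinstall create and install a .deb
    # instead of copying files around behind the package manager's back.
    ./configure --prefix=/usr/local
    make
    sudo checkinstall
    ```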