The way I, as another European, understand this, he’s flying an anti-oppression flag and a pro-oppression flag at the same time.
Several years ago now. On at least two of those tries, after maybe a month or so of daily driving, the fs suddenly went totally unresponsive, and because it held the entire system, all I could do was reboot. The FS was corrupted and wouldn’t recover. There is no fsck. There is no recovery. Total data loss.
Could you narrow it down to just how long ago? BTRFS took a very long time to stabilise, so that could possibly make a difference here. Also, do you remember if you were using any special features, especially RAID, and if RAID, which level?
Out of interest, since I’ve not used the “recommended partition setup” for any install in a while now, is ext4 still the default on most distros?
I recently installed Nobara Linux on an additional drive, because after 20 years, I wanted to give Linux gaming another shot (works a lot better than I had hoped, btw), and it defaulted to btrfs. I assume Fedora does too, because I cannot imagine Nobara changed that part of the Fedora base for gaming purposes.
This is on Ubuntu 22.04.3 LTS, so well within the 5-year window. I’m complaining because I kept getting frantic calls from people using it who didn’t know what was going on.
That would be all absolutely fine and dandy if I could easily just opt out in a way that makes the system stop bothering me about it. But I can’t.
Pro tier is for enterprise customers who need extra-long-term support and are willing to pay for it. Canonical is meeting a market demand so they can remain competitive in those environments, which is good for everyone. It’s benign.
Then please show me the button (and I mean button, not command-line exclusive settings or config file entries in /etc, and certainly not unofficial trickery like third party repositories that replace Ubuntu advantage packages with an empty decoy) that says “Thank you, I don’t need Ubuntu Pro, please stop nagging me about it”.
Depends a lot on what kind of user. I specified “non-technical” for a reason. I have, in the past, recommended Ubuntu to a small number of friends and family members. These are people who aren’t particularly comfortable using computers at the best of times. They very much don’t need the newest, best and shiniest versions of everything. They need to do billing, taxes, correspondence, email and various other tasks related to their small business; they need that to work reliably, and if at all possible, to work exactly the same way it did for the last five years. And if there is any pop-up they don’t immediately understand (for example because it’s in English instead of their native language, which still happens in Ubuntu quite a bit), they will call me on the phone.
I don’t know if you’ve ever had to support non-technical end users, but for some of them, even something as seemingly trivial as a menu bar that has moved from the top to the side can be an issue that needs explaining and training. For that kind of user, I really do want to postpone all updates beyond pure bug and security fixes for as long as reasonably possible. Five years sounds reasonable. Six months does not.
And constant non-optional pop-ups nagging you to upgrade to Ubuntu Pro during those five years. I’d actually be kinda okay with it if it only appeared afterwards, and just as a reminder that, hey, the LTS period is over, you need to switch to the next LTS release now.
With the LTS versions being the best and obvious choice for your average non-technical user who just wants to get some work done…
They do, including those that are in Debian, but they also have an additional source of faster security updates developed in-house, which they hold back from the free path in favor of the pro package.
Personally, I feel a bit torn about this. On the one hand, this should be, officially at least, purely an additional service on top of what’s available in the baseline distro, and isn’t taking anything away from that.
On the other hand, I strongly disagree with holding back security fixes from anyone, ever, for any reason. And the claim that it will never take away anything from the free base distro is at least a little bit suspect: I would not be surprised if the existence of the pro path gradually eroded the quality and timeliness of the base security upgrade path over time. On top of that, Ubuntu is now very annoying about nagging you to upgrade to pro, and the way to disable that is fairly involved and very much unofficial. The whole thing goes against what I expect from a F/OSS operating system. I don’t quite understand why this topic hasn’t become a much bigger issue in Linux circles yet. It certainly doesn’t sit right with me…
Have you tried it? There is wlr-randr, and at least judging by its command-line options, it could be supported.
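If it is, I’d expect it to look roughly like this (a sketch; “HDMI-A-1” is a placeholder, substitute the real output name from the listing):

```
# List the outputs the compositor knows about (wlroots-based compositors only)
wlr-randr

# Rotate one of them by 90 degrees; "HDMI-A-1" is a placeholder
wlr-randr --output HDMI-A-1 --transform 90
```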
Those aren’t files, though, they are just some sectors on your block device. Sure, if you mess with those, your ability to decrypt your disk goes out the window, but then, when was bypassing the filesystem and messing with bits on your disk directly ever safe?
It’s possible he was using an encrypted key file instead of just a password, for that extra-strong security. In that case, of course, if you lose that file, you can kiss your data goodbye.
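For the record, both failure modes have a standard safeguard in cryptsetup: you can back up the header sectors, and a key file is just an additional key slot alongside the passphrase. A sketch (the device path is a placeholder for the encrypted partition):

```
# Back up the LUKS header, i.e. the on-disk sectors holding the key slots;
# /dev/sdX2 is a placeholder for the actual encrypted partition
cryptsetup luksHeaderBackup /dev/sdX2 --header-backup-file luks-header.img

# Enroll a key file as an additional key slot next to the passphrase
cryptsetup luksAddKey /dev/sdX2 /path/to/keyfile
```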
That ‘amp;’ does not belong in there; it’s probably either a copy-paste error or Lemmy HTML-escaping the ‘&’ character.
What this does (or would do, if it were written correctly) is define a function called “:” (the colon symbol) which recursively calls itself twice, piping the output of one instance into the input of the other, then forks the resulting mess to the background. After defining that fork bomb of a function, it is immediately called once.
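For reference, here is the de-mangled one-liner, plus the exact same logic spelled out with a readable name (obviously, don’t run either on a machine you care about):

```
# The classic shell fork bomb, with the stray "amp;" removed:
:(){ :|:& };:

# The same thing with ":" renamed to something readable:
bomb() {
    bomb | bomb &
}
bomb
```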
It’s a very old trick that existed even on some of the ancient Unix systems that predated Linux. I think there’s some way of defending against it using cgroups, but I don’t know how off the top of my head.
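Roughly, I’d expect it to work by capping the number of processes, something like this on a systemd-based system (an untested sketch; the cap of 256 is an arbitrary example):

```
# Run a shell in a transient systemd scope with a hard task cap;
# once the cap is hit, fork() fails inside the scope and the bomb
# fizzles instead of taking down the whole machine
systemd-run --user --scope -p TasksMax=256 bash

# The same idea with the raw cgroup v2 pids controller (as root):
mkdir /sys/fs/cgroup/capped
echo 256 > /sys/fs/cgroup/capped/pids.max
echo $$ > /sys/fs/cgroup/capped/cgroup.procs
```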
Thanks for pointing that out, I found the setting on my laptop and tried it out. I do like the jiggle approach better, though, simply because that is something many people (myself included) instinctively do when losing track of the mouse cursor.
If it was, I don’t think it was a default. I had been using Windows 7 for quite a while back in the day, and I cannot remember ever seeing something like this. On the other hand, I can certainly remember losing track of where on my monitors my mouse cursor was on various occasions…
FWIW, this entire comment section:
https://lemmy.world/post/1940961?scrollToComments=true
Back to the topic: yes, Linux is not technically Unix by pedigree. In practice, it doesn’t matter that it isn’t, and it wouldn’t matter if it were, both for this issue in particular and for most others you are likely to encounter.
The actually relevant technology here is the graphics subsystem, and MacOS’s Cocoa has always been radically different from anything else in the Unix/Linux space. There is no relation whatsoever to either X11 or Wayland. The only thing worth “porting” here is the basic idea. Which is pretty neat, though. Let’s hope Apple hasn’t patented it.
In F/OSS, it is not unusual for software to stay below version 1.0 for a long time yet still get a lot of use. Just look at how long OpenSSL, for example, sat at 0.9.something while already being of crucial importance to a lot of internet infrastructure.
The reasons for this are varied, but the most important is probably simply that free software developers don’t feel the pressure to call a product 1.0 when they don’t believe it is ready to be called that.
I recognize Kirby, and that’s about it.