Look on Phoronix for benchmarks. Plasma consumes less RAM and CPU than even XFCE.
Also, new people are still motivated to change stuff. They are not yet worn down by bureaucracy.
That is - IMO - what critical thinking is meant to be … thinking about alternative explanations and evaluating their viability or probability.
Unfortunately a lot of people use the term “critical thinking” as just another way to rationalize why they are against something, without actually weighing the options.
Dark humor is like food… not everybody gets it.
Where comments are most useful is in explaining why the implementation is the way it is. Otherwise some smartass (your future self) will come along and rewrite it, just to realize there was indeed a reason for the former implementation.
So if I put a motion sensor that triggers a light in front of a Jewish household, they couldn’t leave on the Sabbath because their movement would trigger a fire?
One problem is that they need to put a price tag, and therefore a timeline, on such a project. Due to the complexity and the many unknown unknowns in these decades’ worth of accumulated technical debt, no one can properly estimate that. And so these projects never get off the ground and typically die during planning/evaluation, when both numbers (cost and time) climb higher and higher the longer people think about it.
IMO a solution would be to do it iteratively with a small team and just finish whenever. Upside: you have people at hand who know the system inside out, should something come up. Downside, of course, is that you effectively have no meaningful reporting on when the thing will be finished.
It only needs to work long enough for the current management to cash in on their savings. Then it’s their successors’ problem.
To execute more than one process, you need to explicitly bring along some supervisor or use a more complicated entrypoint script that orchestrates this. But most container images have a simple entrypoint pointing to a single binary (or at most running a script that does some filesystem/permission setup and then execs a single process).
Containers running multiple processes are possible, but hard to pull off and therefore rarely used.
What you likely think of are the files included in the images. Sure, some images bring more libs and executables along. But those are not started or running in the background (unless you explicitly start them as the entrypoint or via, for example, `docker exec`).
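A minimal sketch of that pattern (the script name, paths, and binary here are made up for illustration):

```sh
#!/bin/sh
# Hypothetical entrypoint.sh: do some filesystem/permission setup,
# then exec a single long-running binary so it becomes PID 1.
set -e
chown -R app:app /data
exec /usr/local/bin/myapp

# Any additional process only exists if you start it yourself, e.g.:
#   docker exec -it mycontainer sh
```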
The point about an external drive is fine (I did that on my RPi as well), but the point about performance overhead due to containers is incorrect. The processes in the container run directly on the host. You even see the processes in `ps`. They are simply confined using kernel namespaces and cgroups to be isolated to different degrees.
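You can verify that yourself; a quick sketch, assuming Docker and the stock nginx image:

```sh
# Start a container, then look for its processes in the host's process list:
docker run -d --name web nginx
ps aux | grep nginx

# Show which cgroup the container's main process is confined to:
cat /proc/"$(docker inspect -f '{{.State.Pid}}' web)"/cgroup
```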
If the application in question doesn’t need to write anything, it also doesn’t write outside of Docker, so it also won’t wear down the SD card.
If the app has to write something, a fully read-only container simply won’t work (the app will crash or otherwise fail).
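A sketch of the common middle ground, again assuming the stock nginx image (the exact writable paths depend on the app):

```sh
# Fully read-only root filesystem; this fails because nginx needs to write:
docker run --rm --read-only nginx

# Read-only root plus RAM-backed tmpfs mounts for the few writable paths,
# so nothing is written to the SD card:
docker run --rm --read-only \
  --tmpfs /run --tmpfs /var/cache/nginx --tmpfs /tmp \
  nginx
```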
Or Battlestar Galactica. Create a new species, make them humanoid, make them sentient, and then treat them like shit. Great.
I can still throw away my Fire TV Stick then. At the moment it still does the job I bought it for, and I won’t produce unnecessary garbage over something that might happen in the future.
You can btw “simply” opt out of this in the settings (look for “featured content” and disable it).
Yes, it should be opt-in, but it’s not that hard to keep the Fire TV (Stick) a good device for the price paid.
Shut up and take my ~~money~~ prayers
If only GeForceNow was available there. And Kodi.
Hopefully the regulation by the EU fixes this, then I am on board.
From what I understand, Nvidia may be right in this case and explicit sync seems to be the better approach.
There is a nice article on Collabora’s blog about it and it sounds plausible to me: https://www.collabora.com/news-and-blog/blog/2022/06/09/bridging-the-synchronization-gap-on-linux/
Just FYI, if you want to enable and start in one go, you can use `systemctl enable --now ...`.
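For example (`foo.service` is just a placeholder):

```sh
# One step: enable at boot and start immediately
systemctl enable --now foo.service

# Equivalent two-step form
systemctl enable foo.service
systemctl start foo.service
```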
That is essentially how BitTorrent works anyway. In Germany, people have lost in court over this. Also, portions of a copyrighted file are a problem: if they can “prove” that they got a relevant portion from you (more than the few seconds typically considered fair use), you are still on the hook.
All good, but I think it’s a really common misconception that a DE like KDE, which is big and brings tons of features, must be more resource-intensive than a (feature-wise) smaller DE. Which, as the benchmarks show, is surprisingly not the case.