What distro and version of that distro are you using? Did you install gpg from the repository or elsewhere? What version of gpg are you running?
The OOM killer is particularly bad with ZFS since the kernel by default (at least on Ubuntu 22.04 and Debian 12, where I use it) doesn’t see the ZFS ARC as cache, and so thinks it’s out of memory when really ZFS just needs to free up some of its cache; by the time that happens, the OOM killer has already killed my most important VM. So I’m left running swap to avoid the OOM killer going around causing chaos.
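One mitigation for the scenario above (a sketch, not a fix for the kernel’s memory accounting itself) is to cap the ARC so it can never grow large enough to trigger the OOM killer. The 4 GiB value here is an example to adjust for your host:

```
# /etc/modprobe.d/zfs.conf — cap the ZFS ARC at 4 GiB (example value, in bytes)
options zfs zfs_arc_max=4294967296

# Or apply at runtime without a reboot:
#   echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
```

The trade-off is a smaller read cache, but a bounded ARC leaves headroom that the OOM killer’s accounting actually sees.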
Mentioning Iceweasel in 2024?! Where did you find this meme?! Debian stable?!
I have really mixed feelings about this. My stance is that you shouldn’t need permission to train on somebody else’s work, since that would be far too restrictive on what people can do with the music (or anything else) they paid for. This assumes it was obtained fairly: buying the tracks off iTunes or similar, not torrenting them or dumping the library from a streaming service. Of course, this can change if a song is taken down from stores (you can’t buy it) or the price is so high that a normal person buying a small number of songs could not afford it (say 50 USD a track). Same goes for non-commercial remixing and distribution. This is why I think judging these models and services on their output is fairer: as long as you don’t reproduce the work you trained on, that should be fine. This needs some exceptions: producing a summary, a parody, or a heavily-changed version/sample (of these, I think the last is the only one not already protected, despite its widespread use in music).
So putting this all together: the AIs mentioned seem to have reproduced partial copies of some of their training data, but it required fairly tortured prompts to do so (I think some even provided lyrics in the prompt to get there), since there are protections in place to prevent 1:1 reproductions; in my experience Suno rejects requests that involve artist names, and one of the examples puts spaces between the letters of “Mariah”. But the AIs did do it. I’m not sure what to make of this. There have been lawsuits over samples and melodies, so this is at least even-handed between humans and AI. I’ve seen some pretty egregious copies of melodies outside remixes and bootlegs too, so these protections aren’t useless. Maybe more work can be done to essentially Content ID the AI’s output first, to reduce this in the future? That said, if you just wanted to avoid paying for a song, there are much easier ways to do it than getting a commercial AI service to make a poor-quality replica. The lawsuit has some merit in that the AI produced replicas it shouldn’t have, but much of this reeks of the kind of overreach that drives people to torrents in the first place.
Truly, the year of the Linux desktop!
The snowy mountains are incredible! There’s a series of Shiey videos for B&H where he surfs trains, camps and does a pushbike ride through the mountains: https://redirect.invidious.io/watch?v=V9huXurs678
Good! You wanna automate away a human task, sure! But if your automation screws up you don’t get to hide behind it. You still chose to use the automation in the first place.
Hell, I’ve heard ISPs here work around reps overpromising on the phone by literally having the rep transfer the customer to an automated system that reads out the agreement and has the customer agree to it, with an explicit note that everything said beforehand is irrelevant; once done, it transfers back to the rep.
F-11 is confused… It shot itself in its confusion.
Damn! Using .af for an LGBT+ site is insane! The country could have redirected the domain to their own servers and started harvesting the personal details of those on the site, who I imagine wouldn’t be terribly thrilled having an anti-LGBT+ government learn their personal information (namely information not displayed publicly). Specifically, they could put their own servers in front of the domain so they can decrypt the traffic, then forward it on to the legitimate servers, allowing them to capture login credentials and any other data the user sends or receives.
morethanfftnchars
I don’t have a problem with training on copyrighted content provided 1) a person could access that content and use it as the basis of their own art, and 2) the derived work would also not infringe on copyright. In other words, if the training data is available for a person to learn from, and if a person could make the same content an AI would and it would be allowed, then the AI should be allowed to do the same. An AI should not (as an example) be allowed to simply reproduce a bit-for-bit copy of its training data (unless it is something trivial that would not be protected under copyright anyway). The same is true for a person. Now, this leaves some protections in place. For example, if a person made content and released it to a private audience not permitted to redistribute it, then an AI would only be allowed to train on it if it obtained that content with permission in the first place, just like a person; obtaining it through a third party would not be allowed, as that third party did not have permission to redistribute it. This means that an AI should not be allowed to use a work unless it at minimum had licence to view the work. I don’t think you should be able to restrict your work from being used as training data beyond disallowing viewing entirely, though.
I’m open to arguments against this though. My general concern is copyright already allows for substantial restrictions on how you use a work that seem unfair, such as Microsoft disallowing the use of Windows Home and Pro on headless machines/as servers.
With all this said, I think we need to be ready to support those who lose their jobs to this. Losing your job should never be a game-over scenario (loss of housing, loss of medical care, defaulting on home loans, or potentially car loans, provided you didn’t buy something like a mansion or luxury car).
Other comments have hit this, but one reason is simply to be an extra layer. You won’t always know what software is listening for connections. There are obvious ones like web servers, but less obvious ones like Skype. By rejecting all incoming traffic by default and only allowing things explicitly, you avoid the scenario where you leave something listening by accident.
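The default-deny policy described above can be sketched as an nftables ruleset (a minimal example; the allowed ports are assumptions, swap in whatever you actually run):

```
# /etc/nftables.conf — minimal default-deny inbound sketch
table inet filter {
  chain input {
    type filter hook input priority 0; policy drop;  # drop everything inbound by default
    ct state established,related accept              # allow replies to traffic we initiated
    iif "lo" accept                                  # allow loopback
    tcp dport { 22, 443 } accept                     # explicitly allowed services (example: SSH, HTTPS)
  }
  chain forward { type filter hook forward priority 0; policy drop; }
  chain output { type filter hook output priority 0; policy accept; }
}
```

With this in place, anything that starts listening without you noticing (the Skype case above) is unreachable from outside until you explicitly open its port.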
Another option may be to use Windows Server 2022 Eval. You may run into problems with software refusing to run on a server edition, though. The initial eval lasts 180 days, but you can run a command to extend that 5 times (don’t quote me on the exact number), which will give you an updated system for years to come.
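If I remember right, the command in question is `slmgr`, run from an elevated prompt (the exact number of rearms remaining varies, so check first):

```
slmgr /dlv    :: shows the current license state, including the remaining rearm count
slmgr /rearm  :: resets the evaluation timer; reboot afterwards
```

Each rearm restarts the 180-day clock, which is where the multi-year figure above comes from.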
It is a checkbox in Rufus. Ventoy will also do this for you.
PatchLess