  • wordfreq is not just concerned with formal printed words. It collected more conversational language usage from two sources in particular: Twitter and Reddit.
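
    (If you haven’t used it: wordfreq is a Python library, and that conversational data shows up directly in its numbers. A minimal sketch of its documented query API – assuming a pip-installed wordfreq, with “lol” just my example word:)

        # Querying wordfreq's aggregated word frequencies.
        from wordfreq import word_frequency, zipf_frequency

        # Fraction of words in the combined corpora that are "lol".
        print(word_frequency("lol", "en"))

        # The same word on the Zipf scale: log10 of occurrences per
        # billion words. Conversational terms like this one score as
        # high as they do largely because of the Twitter and Reddit
        # data mentioned above.
        print(zipf_frequency("lol", "en"))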

    Now Twitter is gone anyway; its public APIs have shut down.

    Reddit also stopped providing public data archives, and now they sell their archives at a price that only OpenAI will pay.

    There’s still the Fediverse.

    I mean, that doesn’t solve the LLM pollution problem, but…


  • …locking down the Windows kernel in order to prevent similar issues from arising in the future. Now, according to a Microsoft blog post about the recent Windows Endpoint Security Ecosystem Summit, the company is committing to providing “more security capabilities to solution providers outside of kernel mode.”

    So first off, from a purely technical standpoint, I think that makes a lot of sense for Microsoft. Jamming all sorts of anti-cheat stuff into the Windows kernel is a great way to create security and stability problems for Windows users.

    However.

    I don’t know that my immediate take would be that it permits improving Linux compatibility.

    So, from a purely technical standpoint, sure. Having out-of-kernel anti-cheat systems could make Linux compatibility easier to support.

    But it also doesn’t have to do so.

    First, Microsoft may very well patent aspects of this system, and in fact, probably has some good reasons to do so. A patent-encumbered anti-cheat system solves their problem. But that doesn’t mean that it’s possible for other platforms to go out and implement it, not for another 20 years, at least.

    Second, it may very well rely on trusted hardware, which may create issues for Linux. The fundamental premise of a traditional open-source Linux system is that anyone can run whatever they want and modify the software. That does not work well with anti-cheat systems, which require not letting users modify their local software in ways that are problematic for other users. My Linux systems don’t have ties up and down the software stack to trusted hardware. Microsoft is probably fine with doing that, on both Xbox and newer trusted-hardware-enabled Windows systems.


  • Basically every screenshot of the “lost” TUIs looks like a normal emacs/vim session to anyone who has learned about splits and :term (guess which god I believe in?). And people still use those near-constantly. Hell, my workflow is generally a mix between vim and vscode depending upon what machine and operation I am working on. And that is a very normal workflow.

    I use emacs, and kind of had the same gut reaction, but they do address it, and they have a valid point: the IDEs they’re talking about are set up out of the box and require little learning to use in that mode.

    Like, you can use emacs, and I’m sure vim, as an IDE, but what you have is more a toolkit of parts for putting together your own IDE. That can be really nice and more flexible, but it’s also true that it isn’t an off-the-shelf, low-effort-to-pick-up solution.

    I recall emacs had some “premade IDE” project that I tried and wasn’t that enthusiastic about.

    I don’t know vim well enough to know what all the parts are. NERDTree for file browsing? I dunno.

    With emacs, I use magit as a git frontend; a compilation buffer to jump to errors; projectile to know the project build command, auto-identify the build system used for a given project, and search through project files; dired to browse files; and etags plus some language server – I think things have changed recently, but I haven’t been coding lately – to jump around the codebase. I have color syntax highlighting set up. I use .dir-locals.el to store per-project settings, like that build command used by projectile. There’s the gdb frontend to traverse code associated with lines in a stack trace on a running program, and TRAMP to edit files on remote machines.

    But that stuff isn’t generally set up or obvious out of box. It takes time to learn.

    EDIT: The “premade IDE” I was thinking of for emacs is eide:

    https://software.hjuvi.fr.eu.org/eide/


  • To clarify: I meant how do I do it via API calls…

    If you mean at the X11 call level, I think that it’s a window hint, assuming that you’re talking about a borderless fullscreen window, and not true fullscreen (like, DGA or DGA2 or something, in which case you don’t have a fullscreen X11 window, but rather direct access to video memory).

    https://specifications.freedesktop.org/wm-spec/latest/ar01s05.html

    See _NET_WM_STATE_FULLSCREEN, ATOM

    If you’re using a widget toolkit like gtk or something and writing the program, it’ll probably have some higher-level fullscreen toggle function that’ll flip that on X11. Ditto for SDL.

    If you mean in a script or something, I’d maybe try looking at xprop(1) to set that hint.
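
    If it helps, here’s a sketch of the client-message route – per the EWMH spec above, a client other than the window’s owner asks the window manager by sending a _NET_WM_STATE message to the root window, rather than setting the property itself. This uses the third-party python-xlib package, and taking the window ID from the command line (say, from xwininfo) is my assumption:

        # Ask the window manager to toggle fullscreen on an existing
        # window, per EWMH. Needs python-xlib (pip install python-xlib).
        import sys
        from Xlib import X, display, protocol

        _NET_WM_STATE_TOGGLE = 2  # EWMH: 0 = remove, 1 = add, 2 = toggle

        d = display.Display()
        # Window ID from argv, e.g. the hex ID printed by xwininfo.
        win = d.create_resource_object('window', int(sys.argv[1], 0))
        ev = protocol.event.ClientMessage(
            window=win,
            client_type=d.intern_atom('_NET_WM_STATE'),
            data=(32, [_NET_WM_STATE_TOGGLE,
                       d.intern_atom('_NET_WM_STATE_FULLSCREEN'),
                       0, 1, 0]))  # l[3] = 1: normal application source
        # The window manager listens for these messages on the root window.
        d.screen().root.send_event(
            ev, event_mask=X.SubstructureRedirectMask | X.SubstructureNotifyMask)
        d.flush()

    Running xprop -id <window-id> _NET_WM_STATE afterward should show the state flipping if the window manager honored it.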

    I’d also add, on the “user” front, that I don’t use F11, and I think that every window manager or desktop environment I’ve ever used provides some way to set a user-specified keystroke to toggle a window’s fullscreen state. I’ve set Windows-Enter to do that for decades, on every environment I’ve used.


  • Internet Archive creates digital copies of print books and posts those copies on its website where users may access them in full, for free, in a service it calls the “Free Digital Library.” Other than a period in 2020, Internet Archive has maintained a one-to-one owned-to-loaned ratio for its digital books: Initially, it allowed only as many concurrent “checkouts” of a digital book as it has physical copies in its possession. Subsequently, Internet Archive expanded its Free Digital Library to include other libraries, thereby counting the number of physical copies of a book possessed by those libraries toward the total number of digital copies it makes available at any given time.

    This appeal presents the following question: Is it “fair use” for a nonprofit organization to scan copyright-protected print books in their entirety, and distribute those digital copies online, in full, for free, subject to a one-to-one owned-to-loaned ratio between its print copies and the digital copies it makes available at any given time, all without authorization from the copyright-holding publishers or authors? Applying the relevant provisions of the Copyright Act as well as binding Supreme Court and Second Circuit precedent, we conclude the answer is no. We therefore AFFIRM.

    Basically, there isn’t an intrinsic right under US fair use doctrine to take a print book, scan it, and then lend digital copies of the print book.

    My impression, from what little I’ve read in the past on this, is that this was probably going to be the expected outcome.

    And while I haven’t closely monitored the case, and there are probably precedent issues that are interesting for various parties, my gut reaction is that I kind of wish that archive.org weren’t doing these fights. The problem I have is that they’re basically an indispensable, one-of-a-kind resource for recording the state of webpages at some point in time via their Wayback Machine service. They are pretty widely used as the way to cite a page on the Web.

    What I worry about is that they’re going to get into some huge fight over copyright on some not-directly-related issue, like print books or something, and then someone is going to sue them and get a ton of damages and it’s going to wipe out that other, critical aspect of their operations…like, some random publisher will get ownership of archive.org and all of their data and logs and services and whatnot.


  • Oh, and one other factor. I was just reading a paper on British housing policy. I’m not taken with the format – it’s imagining a world where planning restrictions on building new housing were reduced, and talking about the benefits of it – but it does also make a number of good points, including the point that part of the problem is that the UK hasn’t been building housing at the kind of rate that would probably be ideal for some time. Since newer buildings are better insulated, that also means that the present stock of buildings tends to be less well insulated than would be the case had more construction occurred:

    https://iea.org.uk/wp-content/uploads/2024/03/IEA-Discussion-Paper-123_Home-Win_web.pdf

    Although this was not initially the motivation, there have been environmental benefits as well. For a long time, Britain used to have poorer energy efficiency standards than most neighbouring countries. It is not that all British homes were energy inefficient. It is just that Britain used to have the oldest housing stock in Europe (European Commission n.d.), and the energy efficiency standard of a dwelling is strongly correlated with its age (ONS 2022). Rejuvenating the housing stock has therefore accidentally driven up its average energy performance.

    This is the “the paper is from a potential future looking back at the imaginary past” format talking here.


  • But with the UK it always comes back to having the worst insulation in the world.

    Most of the UK has relatively comfortable temperatures, so the impetus to add lots of insulation is fairly low.

    https://en.wikipedia.org/wiki/Climate_of_the_British_Isles

    Temperatures do not often switch between great extremes, with warm summers and mild winters.

    The British Isles undergo very small temperature variations. This is due to its proximity to the Atlantic, which acts as a temperature buffer, warming the Isles in winter and cooling them in summer.

    Over here, in the US, the places with the lowest temperature variations are also islands, like Hawaii. Extreme temperature swings happen in places like the Dakotas, far away from the ocean.

    You’ve been cursed with fairly comfortable temperatures. :-)