/r/unexpectedfactorial
I’m just a spectre out of the nothingness, surviving inside a biological system.
Actively criticizing how both the USSR and the US secretly gave citizenship to and hired nazi-fascists (and history is there to be checked), and therefore actively positioning myself against nazi-fascists and against everybody who “sits at a table with nazis and stays at the table”, is “carrying water for nazi-fascists”? WTH?!
Reeducation won’t revive nazi victims. USSR and US were no necromancers.
Indeed, but even hiring such people (nazi scientists, military personnel and engineers) and granting them citizenship, especially considering how they got their “knowledge”, isn’t an “ok” thing. The same goes for the US’s Operation Paperclip.
Both the US and the USSR secretly hired nazi personnel, such as scientists and engineers. Both operations were later disclosed as Operation Paperclip and Operation Osoaviakhim, respectively. The USSR didn’t destroy nazi-fascism; it secretly incorporated it (that is, if I correctly understood the reference in the meme; maybe I’m needlessly “ranting”).
What about Operation Osoaviakhim?
I’m a dev with 10+ years of cumulative experience. While I’ve never used GitHub Copilot specifically, I’ve been using LLMs (as well as AI image generators) on a daily basis, mostly for non-dev things, such as analyzing my human-written poetry to get insights for my own writing. I’ve also done the same for code I wrote, asking LLMs to “analyze and comment” it for the sake of insights. There were moments when I asked for code snippets, and almost every snippet generated either worked or needed only a few fixes.
They’re getting good at this, but not good enough to really replace my own coding and analysis. Instead, they’re getting really good at poetry (maybe because their training data is mostly books and poetry) and sentiment analysis. I use many LLMs simultaneously in order to compare them:
explode function? “Sorry, can’t comment on texts alluding to dangerous practices such as involving explosives”, I mean, WHAT?!?!) As you see, I tried almost all of them. In summary, while it’s good to have such tools, they should never replace human intelligence… Or, at least, they shouldn’t…
Problem is, dev companies generally focus on “efficiency” over “efficacy”, demanding the shortest deadlines while expecting near perfection. Very understandable demands, but humans are humans, not robots. We need time to deliver; we need to cautiously walk through all the steps needed to finally deploy something (especially big things), or it becomes XGH programming (Extreme Go Horse). And machines can’t do that so perfectly, yet. For now, LLM-driven development is XGH: really fast, but far from coherent about the big picture (be it a platform, a module, a website, etc.).
In Brazil, there are regional variations and word/phrasing variations as well.
Formally:
Informally/casually:
There are lots of other variations and I’m not really aware of all of them.
Also, the way I answer depends a lot on multiple factors, such as my emotional state (wrathful? Sad? Okay? Excitedly happy (rarely)?), my current pace (rushing? Chilling?), among others. Generally, “Não é aqui não” (the Minas Gerais variation without the ending “moço” and with a fully spelled “Não é” instead of “Né”, because I’m originally from the interior of São Paulo state but highly influenced culturally by the part of my family from Minas Gerais).
I didn’t test DisplayPort, but IIRC it’s digital, like HDMI. So my guess is that DisplayPort also radiates a lot of EM interference. The very sharp edges of square digital waves (in contrast to the sinusoidal waves of analog signals) decompose into high-frequency components (a square wave always contains high-frequency harmonics, as an FFT shows). That’s what causes UHF interference.
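To illustrate the FFT point above, here’s a toy sketch (my own parameters, nothing measured from real cables): the spectrum of an ideal square wave contains the fundamental plus odd harmonics decaying roughly as 1/n, which is why sharp digital edges carry so much high-frequency energy.

```python
import numpy as np

fs = 10_000                      # sample rate, Hz
f0 = 100                         # fundamental frequency, Hz
t = (np.arange(fs) + 0.5) / fs   # 1 second, half-sample offset avoids exact zeros
square = np.sign(np.sin(2 * np.pi * f0 * t))

spectrum = np.abs(np.fft.rfft(square))
freqs = np.fft.rfftfreq(square.size, 1 / fs)

def amp(f):
    """Spectral magnitude at the bin closest to frequency f."""
    return spectrum[np.argmin(np.abs(freqs - f))]

ratio3 = amp(3 * f0) / amp(f0)   # ~1/3: strong third harmonic
ratio5 = amp(5 * f0) / amp(f0)   # ~1/5: fifth harmonic
ratio2 = amp(2 * f0) / amp(f0)   # ~0: even harmonics are absent
```

The slow 1/n decay means that even at many multiples of the pixel clock there is still meaningful energy, which is the part that ends up in the UHF range.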
It’s not exactly new; it’s known as Van Eck phreaking. There are open-source projects such as TempestSDR that use software-defined radio dongles (such as RTL-SDR and AirSpy) to reconstruct a remote screen just by listening to its radio interference.
I once listened to the radio noise emanating from the HDMI connection between my laptop and my LCD screen, using a Baofeng UV-5R tuned to UHF frequencies. I could tune into it from the street, dozens of meters away from my computer desk. HDMI is very noisy, even noisier than VGA. Even the shortest HDMI cable serves as an antenna propagating the interference.
I dunno… It vaguely reminds me of Terraria soundtracks. Or RuneScape. I played them a long time ago, so I can’t exactly remember.
Get a proper ham radio license. Buy a 40-meter transceiver and a software-defined radio dongle. Convert your code into an esoteric programming language such as Whitespace or Brainf, then spell it out: “Plus, plus, next, plus, dot, open bracket, next, …”. Transmit your spelling over the 40-meter band while a receiver across the continent is tuned to the frequency. Ask the other station to repeat, and record the QSO. Set the SDR recorder to raw I/Q samples instead of demodulated AM. Publish it as an audiobook.
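The “spell it out” step above could be sketched like this (my own word mapping, matching the spelling in the comment):

```python
# Map each Brainfuck-style symbol to a word you could read over the air.
SPOKEN = {
    "+": "plus", "-": "minus",
    ">": "next", "<": "previous",
    ".": "dot", ",": "comma",
    "[": "open bracket", "]": "close bracket",
}

def spell(program: str) -> str:
    """Turn a program into a comma-separated spoken transcript."""
    return ", ".join(SPOKEN[c] for c in program if c in SPOKEN)

print(spell("++>+.["))  # plus, plus, next, plus, dot, open bracket
```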
“The system can listen to conversations”.
What a timely coincidence! Patent got published basically at the same time Meta’s, Google’s and Microsoft’s “Active Listening” got public as well. 🤔
You didn’t specify which problem or what exactly broke. However (based on my previous experience with this), a common failure when updating is a package PGP/GPG signature error, caused by `archlinux-keyring` not being updated before the signature check. A better approach is to always update `archlinux-keyring` (`sudo pacman -S --needed archlinux-keyring`) before anything else (`sudo pacman -Syu`). This way, you guarantee you’re up to date with the developer signatures pacman needs to validate every package being updated/installed. There’s also a `pacman-key` command, but I never had to use it.
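The sequence described above, as a sketch (just the two standard pacman invocations, in order):

```shell
# Refresh the packager keyring first, so signature checks
# on the rest of the update can succeed.
sudo pacman -S --needed archlinux-keyring

# Then run the full system upgrade.
sudo pacman -Syu
```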
A mix between the last one and the previous. The key `error` should be set to indicate the presence of an error, `status` and `code` represent, respectively, the numerical error code and the error type, and `message` is supposed to carry a human-friendly error text.
They can’t without permission granted through the browser. While they can indeed track the mouse, when they try to access mobile motion sensors (I’m considering a CAPTCHA inside a webpage accessed through a mobile browser such as Firefox Mobile or Chrome for Android), they need to use an HTML5 API that, in turn, asks the user for permission, something like “This site wants to use motion sensor data. Allow or block?”
Nowadays there are some really annoying CAPTCHAs out there, such as:
In summary, CAPTCHAs are seemingly becoming less of a “prove you’re not a robot” and more of a forced IQ test. I can see the day when CAPTCHAs will ask you to write down the Laplace transform of the solution f(t) to the differential equation governing the motion of a mass subject to air resistance and aerodynamics, or a detailed solution to the P versus NP problem.
Although I’m used to writing long, detailed replies and comments, what possibly saves me from being accused of being an LLM is the fact that English is not my first language, so I often make agreement and typo mistakes. LLMs are too “linguistically perfect”. In a world where the Dead Internet Theory (i.e., that few humans are left on a bot-filled internet) is more and more real, human mistakes seem to be the only way to really distinguish a bot from a human.
As a Brazilian, I can unfortunately relate to the exact same scenario…
Isn’t a file browser needed for browsing the saved documents and spreadsheets?
Not to mention that office suites (such as WPS, OpenOffice and LibreOffice) will inevitably pop up a file browser when the “Open” or “Save” buttons/menu items are clicked.