One foot planted in “Yeehaw!”, the other in “yuppie”.

  • 2 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I subscribed to YouTube Premium until just a few days ago. Even without the ads, there was something seriously wrong with the suggestion algorithm.

    I was getting cartel violence videos and dead animal videos. Never watched one before in my life, yet YouTube seems to think I should want to watch this crock of shit. It started about 6 months ago, and until now I’d been reporting each video as it came up. But that doesn’t seem to help at all.

    At this point I think YouTube is a danger to society - if it’s recommending cartel violence videos to me unsolicited, what are they suggesting to my nieces?

    I have completely nuked it from my life. Almost all of the YouTubers I like are on Nebula or Floatplane so it doesn’t feel like I’m missing much.




  • See: every big AAA game release lately. Even on Windows, having to nuke your graphics drivers and install a specific version from some random forum is generally accepted as fine, like it’s just how PC gaming is.

    Never had to do that since I was ROM hacking an old RX480 for Monero hashrates. In fact, on my Windows 11 partition (used for HDR gaming, which isn’t supported on Linux yet), I haven’t needed to reinstall the NVIDIA driver even when converting from a QEMU image to a full-fat install.

    When I see those threads, it often comes across as a bunch of gamers just guessing at a potential solution and ending up “right” for the “wrong” reasons, especially when the result is some convoluted combination of installs and uninstalls plus “wiping directories and registry keys”.

    But, point taken, the lengths gamers will go to for an extra 1-2 FPS, even when it’s unproven, dangerous, and dumb, are almost legendary.




  • I really doubt that. Again, advanced user here, with numerous comparison points to other Arch-based distros. I also maintain large distributed DB clusters for Fortune 100 companies.

    If something wasn’t on the latest version, it’s not due to my lack of effort or knowledge, but due to the terrible way Garuda is managed.

    What, am I supposed to compile kernel modules from scratch myself? Never needed to do that with Endeavour, Manjaro, or just Arch.

    If Garuda’s install (and subsequent upgrade) doesn’t fetch the latest from the Arch repos, that’s on them.

    EDIT: Also, these non-answers are tiresome, low effort, and provide zero guidance. I know every single kernel change since 5.0 that impacted my hardware. I have RSS feeds for each of my hardware components, and if Linux or a distro ships an enhancement for my hardware, I’m usually aware of it well before it’s released. Point to any piece of my hardware and I can tell you, for certain, which functionalities are supported, which have bugs, and the common workarounds.

    If you want this type of feedback to be valuable, then let me know if a new issue/regression has arisen given the list of hardware I’ve supplied.

    Valuable: “Perhaps it was the latest kernel X, which shipped some regressions for Nvidia drivers that cause compositor hitching in KWin”

    Utterly Useless: “It’s very likely some drivers are not up to date or compatible with your system.”





  • “Your application” - you mean the customers’. Our DB definitely does its own rate limiting, and it emits rate limit warnings and errors as well. I didn’t say we advertised infinite IOPS; that would be silly. We are totally aware of the scaling factors there, and to date IOPS-based scaling is rarely a Sev1 because of it. (Oh no, p99 breached 8 ms. Time to talk to Mr. Customer about scaling up soon.)

    The problem is that the resulting cluster is so performant that you could load in 100x the data and not notice until the disk fills up. And since these are NVMe drives on cloud infrastructure, they are $$$.

    So usually what happens is that the customer fills up the disk arrays so fast that we can’t scale the volumes/cluster quickly enough to avoid stop-writes, let alone get feedback from the customer in time. That’s now the primary reason we get paged these days.

    We generally catch gradual disk space increases from normal customer app usage. Those give us hours to respond, and our alerts are well tuned. It’s the “Mr. Customer launched a new app, didn’t tell us, and filled up the disks in 1 hour flat” cases that I’m complaining about.
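
    To put rough numbers on that gap, here’s a back-of-the-envelope sketch (made-up names and figures, not our actual tooling) of why a simple time-to-full projection gives days of warning for gradual growth but only minutes for a surprise launch:

      # Rough sketch only: every name and number here is hypothetical,
      # just to illustrate the lead-time gap.
      import datetime

      RESPONSE_SLA = datetime.timedelta(minutes=5)    # page-to-keyboard SLA
      SCALE_LEAD_TIME = datetime.timedelta(hours=2)   # rough time to grow volumes/cluster

      def time_to_full(free_bytes: float, write_rate_bps: float) -> datetime.timedelta:
          # Naive linear projection: how long until the disks hit stop-writes.
          if write_rate_bps <= 0:
              return datetime.timedelta.max
          return datetime.timedelta(seconds=free_bytes / write_rate_bps)

      def needs_page(free_bytes: float, write_rate_bps: float) -> bool:
          # Page a human only when the projection undercuts how fast we can react.
          return time_to_full(free_bytes, write_rate_bps) < SCALE_LEAD_TIME + RESPONSE_SLA

      # 4 TiB free growing at 5 MB/s  -> ~10 days to full, normal alerting is fine
      # 4 TiB free growing at 1 GB/s  -> ~73 minutes to full, faster than we can scale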


  • It is definitely an under-provisioning problem. But that under-provisioning problem is caused by the customers usually being very, very stingy about what they are willing to spend. Also, to be clear, it isn’t buckling; it is doing exactly the thing it was designed to do, which is to stop writes to the DB since there is no disk space left. And before that point, it’s constantly throwing warnings to the end user. Usually these customers ignore those warnings until they reach the stop-writes state.

    In fact, we just had to give an RCA to the c-suite detailing why we had not scaled a customer when we should have, but we have a paper trail of them refusing the pricing and refusing to engage.

    We get the same errors, and we usually reach out via email to each of these customers to help project where their data is going and scale appropriately. More frequently though, they are adding data at such a fast clip that not responding for 2 hours would lead them directly into stop-writes.

    This has led to us guessing where our customers are going to end up, oftentimes being completely wrong and having to scale multiple times.

    Workload spikes are the entire reason our database technology exists. That’s the main thing we market ourselves as being able to handle (provided you give the DB enough disk and the workload isn’t sustained long enough to fill the disks).

    There is definitely an automation problem. Unfortunately, this particular line of our managed services can’t be automated. We work with special customers with special requirements: usually Fortune 100 companies that have extensive change control processes, custom security implementations, and sometimes no access to their environment at all unless they flip a switch.

    To me it just seems to all go back to management/c-suite trying to sell a fantasy version of our product and setting us up for failure.


  • That is exactly what we do. The problem is that, as a managed service offering, it is on us to scale in response to these alerts.

    I think people are misunderstanding my original post. When I say a customer cluster will go into stop-writes, that does not mean it is not functional. It is an entirely intended function of the database, so that no important data is lost or overwritten.

    The problem is more organizational: we have a 5-minute SLA to respond to these types of events, and they can happen on any random customer impulse.

    I don’t have a problem with customers that can correctly project their load and let us know in advance. Those are my favorite customers. But they’re not most of our customers.

    As for automation: as I exhaustively detailed in another response, we do have another product that does this a lot better, and it’s the one we are mass marketing a lot more. The one where I’m feeling all the pain is actually our enterprise-level managed service offering, which goes to customers with “special requirements” that usually mean they will never get automation as robust as the other product line’s.


  • Our database is actually pretty graceful. It just goes into stop-writes status. You can still read any data, and resolving the situation is as easy as scaling the cluster or removing old records. By no means is the database down or inoperable.

    Essentially, our database is working as designed. If we rate limited it further, we’d have less of a product to sell. The main features we sell our database technology on are its IOPS and resiliency.

    Further, this is just for a specific customer; it has no impact on any other customers or any sort of central orchestration. Generally speaking, the stop-writes status only ever impacts a single customer and their associated applications.

    Also, customers can be very stingy with the clusters they are willing to buy. We’re actually on poor terms with a couple of our customers who just refuse to scale and expect us to magic their cluster into accepting more data than it’s sized for.
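
    As a generic illustration of that “not down, just read-only” distinction (hypothetical probe code, not our real client API):

      # Hypothetical probe with made-up method names: a cluster in stop-writes
      # still answers reads, it only rejects writes.
      def classify(cluster) -> str:
          try:
              cluster.read("health-probe-key")            # reads keep working
          except ConnectionError:
              return "down"                               # the actually-bad case
          try:
              cluster.write("health-probe-key", b"ping")  # rejected once disks fill up
          except Exception:
              return "stop-writes: scale the cluster or purge old records"
          return "ok"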


  • Probably not feasible in our case. We sell our DB tech based on the sheer IOPS it’s capable of. It already alerts the user if the write-cache is full or the replication cache is backing up too.

    The problem is, at full tilt, a 9-node cluster can take on over 1 GB/s in new data. That’s fine if the customer is writing over old records and doesn’t require any new space. It’s just more common that Mr. Customer added a new microservice and didn’t think through how much data it requires, causing a rapid increase in DB disk usage or IOPS that the cluster wasn’t sized for.

    We do have another product line in the works (we call it DBaaS) and that can autoscale because it’s based on clearly defined service levels and cluster specifications. I don’t think that product will have this problem.

    It’s just that these super mega special (read: big, important, Fortune 100) companies have requirements that mean they need something more hand-crafted. Otherwise we’d have automated the toil by now.