Yeah, not sure I would listen to this guy. Setting up a venv for each project is about the bare minimum for all the teams I’ve worked on.
That being said, Python envs can be GBs in size (especially when doing data science).
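If anyone wants the stdlib-only version of that habit, here’s a minimal sketch (the `.venv` name is just the common convention, nothing the parent comment prescribed):

```python
# Minimal per-project venv creation using only the standard library.
# ".venv" inside the project root is a convention, not a requirement.
import venv
from pathlib import Path

project_dir = Path(".")          # run this from the project root
env_dir = project_dir / ".venv"  # one environment per project

# with_pip=True so packages can be installed into the env right away
venv.create(env_dir, with_pip=True)
print(f"Created {env_dir}; activation scripts live under {env_dir}/bin (Scripts/ on Windows)")
```

(Most people just run `python -m venv .venv` from a shell; this is the same thing via the `venv` module.)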
It is, and it’s stupid. The only real thing they changed this time around is “supine” aiming, so you can look 360° while lying down. Overall it ranks lower among the recent CoD releases for me. I wouldn’t be bothering with it if it wasn’t free on Game Pass.
Treyarch just isn’t as good a developer as the others. The Black Ops games always seem to lack polish. I’d probably not bother if it weren’t free on Game Pass, but I’ve been debating dropping Game Pass and wouldn’t buy it outright.
I’m confused by this; I didn’t think MW3 was received poorly, and playing Black Ops 6 I definitely think MW3 was better.
Yeah, it basically sums up to “we’re too small an operation, or too lazy, to manage our data, so an improved search/summary tool works for us”. This kind of approach isn’t going to work in a loooot of environments, and there is a lot of value in consistent and reliable data.
Rayman Legends is an amazing platformer, and I would argue the music levels in that game far surpass anything in Mario Wonder. It’s legitimately a great series, and if you haven’t already, you should check some of the games out.
Definitely worth playing. While short (probably beatable in roughly 5 hours), it’s a lot of fun and makes sure not to overstay its welcome.
If you want to feel like a musketeer or classic swashbuckler, this is the game for you!
In Python, the GIL means only one thread executes Python code at a time, while processes can genuinely run in parallel on different cores.
Because of the GIL, the only time threads can improve performance is when there are non-CPU tasks in your code, usually I/O operations, since the lock is released while waiting on them. Otherwise the only thing multithreading can provide is the appearance of parallelism (as the interpreter jumps back and forth between threads, progressing each in small steps).
On the other hand, multiprocessing allows you to run code on different cores, meaning you can take full advantage of all your processing power. However, if your program has a lot of I/O tasks, you might end up bottlenecked by the I/O and never see any improvement.
For the example you mentioned, threading is likely the better choice: it has a little less overhead, it’s easier to program, and your task is mostly I/O bound. However, if the calculations are relatively quick, it’s possible you wouldn’t see any improvement, as the CPU would still end up waiting on the I/O. Rough sketch of both approaches below.
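Here’s a rough sketch of what I mean; the fetch/crunch functions and the URL are placeholders, not anything from your code. Threads cover the I/O-bound part, processes the CPU-bound part:

```python
# Rough sketch: ThreadPoolExecutor for I/O-bound work,
# ProcessPoolExecutor for CPU-bound work. URLs/workloads are placeholders.
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor
import urllib.request

URLS = ["https://example.com"] * 4

def fetch(url):
    # I/O-bound: CPython releases the GIL while waiting on the network,
    # so threads can overlap the waits.
    with urllib.request.urlopen(url) as resp:
        return len(resp.read())

def crunch(n):
    # CPU-bound: pure-Python arithmetic holds the GIL, so only separate
    # processes actually run it in parallel.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    # Threads for the I/O-heavy part...
    with ThreadPoolExecutor(max_workers=4) as pool:
        sizes = list(pool.map(fetch, URLS))

    # ...processes for the CPU-heavy part.
    with ProcessPoolExecutor(max_workers=4) as pool:
        totals = list(pool.map(crunch, [2_000_000] * 4))

    print(sizes, totals)
```

If the calculations really are quick, I’d time both on the real workload first; the process pool only pays for its startup/pickling overhead when the CPU work dominates.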
I’d take your word on it; OS-level security is not my forte. The main thing I was calling out is that the change seems to be aimed at actually fixing an issue, not limiting control, as the original commenter seemed to imply.
To be honest, it actually does sound like a reasonable, security-focused change. It basically takes a more zero-trust approach to admin elevation.
My understanding is that it’s a difficult feature to support and they can’t guarantee it works well. That’s the only explanation I’ve ever seen, cause to me it’s almost critical for working on a laptop.
I don’t get why hibernate isn’t a more popular feature; I use it extensively, as I hate having to set everything back up after each restart.
It’s also one of my biggest issues with using Linux, since it’s usually broken there.
Yeah that’s right, seems my link didn’t populate right.
I appreciate the response!
I’ve definitely used tools like LocalStack before, and when it works it’s great, but it sadly doesn’t usually provide a 1-to-1 replacement (rough sketch of how I use it below).
Seeing your different approaches is helpful, and I’ll have to see which elements I can pull into my current projects!
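For reference, the kind of setup I mean is just pointing boto3 at LocalStack’s local endpoint. Rough sketch, assuming LocalStack is running on its default port 4566; the bucket name and credentials are made up:

```python
# Rough sketch: boto3 talking to a locally running LocalStack instead of
# real AWS. Bucket name and "test" credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",  # LocalStack's default edge endpoint
    aws_access_key_id="test",              # dummy credentials, never real ones
    aws_secret_access_key="test",
    region_name="us-east-1",
)

s3.create_bucket(Bucket="my-test-bucket")
s3.put_object(Bucket="my-test-bucket", Key="sample.txt", Body=b"hello")
print(s3.list_objects_v2(Bucket="my-test-bucket")["KeyCount"])  # -> 1
```

The caveat above still applies though: coverage isn’t 1-to-1, so anything beyond the common services tends to need a real test environment anyway.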
Hey OP, it looks like you’re the author of the post? If so, I’m curious how you handle cloud services like AWS or Azure when taking this approach. One of the major issues I’ve run into when working with teams is how to test or evaluate against cloud services without standing up an entire infrastructure in the cloud just for testing.
Definitely sounds like it could be real. If I had to guess, they’re mounting a drive (or another partition) and it’s defaulting to read-only. Restarting resets the original permissions because they only updated the file permissions, not the mount configuration.
It also reads like some of my frustrations when first getting into Linux (and the issues I still occasionally run into).
I think you’re missing the point. No LLM can do math, most humans can. No LLM can learn new information, all humans can and do (maybe to varying degrees, but still).
And just to clarify what I mean by not being able to do math: there’s a lack of understanding of how numbers work, so combining numbers or values outside of the training data can easily trip them up. Since it’s prediction based, exponents/trig functions/etc. will quickly produce errors when using large values.
Here’s an easy way we’re different: we can learn new things. LLMs are static models; it’s why OpenAI mentions the knowledge cutoff dates for its models.
Another is that LLMs can’t do math. Deep Learning models are limited to their input domain. When asking an LLM to do math outside of its training data, it’s almost guaranteed to fail.
Yes, they are very impressive models, but they’re a long way from AGI.
I feel like one of those isn’t like the others