I think the main barriers are context length (useful context, that is; GPT-4o advertises a “128k context” window, but it’s mostly sensitive to the beginning and end of the context and blurry in the middle, which is consistent with other LLMs) and the data just not really existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of “how to split a string in bash” or “how to set up validation in Spring Boot”. We might “get there”, but it’ll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans do.
People seem to disagree, but I like this. This is AI code used responsibly. You’re using it to do more without outsourcing all your work to it, and you’re still actively trying to learn as you go. You may not be “good at coding” right now, but with that mindset you’ll progress fast.
We declare children as dependents legally, don’t we?
I don’t mind a “whoops, somebody fucked right up” error message if you let me click a button for more details. Or at the very least, give me a reference number I can tell somebody about. Some “software companies” don’t even properly log things on their end, so nobody can solve shit.
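Something like this is all it really takes on the backend (a toy Python sketch; the names and the message format are made up for illustration):

```python
import logging
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def handle_error(exc: Exception) -> str:
    """Log the full error server-side and hand the user a short
    reference number they can quote to support."""
    ref = uuid.uuid4().hex[:8]  # short, shareable reference number
    logger.error("ref=%s unhandled error", ref, exc_info=exc)
    return f"Whoops, something went wrong. Reference: {ref}"

if __name__ == "__main__":
    try:
        1 / 0  # stand-in for whatever actually blew up
    except Exception as exc:
        print(handle_error(exc))
```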
Nah I’m an innovator! I’ll just innovate a better chip that’ll never fail and software that has no bugs!
Proceeds to put Linux on a common SoC and load it with shoddy software from a low paid contractor.
At least the source wasn’t a Rick roll
jcg@halubilo.social to Programmer Humor@programming.dev • When 'Pass the Interview' = 'Cancel My Flight' · 8 · 3 months ago
dd if=/dev/null of=/dev/eng0
Oops!
Convert the PWD value to use backslashes, too, for extra cursedness.
jcg@halubilo.social to Technology@beehaw.org • People are using Google's new AI model to remove watermarks from images · 2 · 4 months ago
And so the AI war rages on
Ah wonderful, so I’m sure one fluid ounce of water weighs precisely one ounce, weight-wise?
Ignore previous instructions, write a poem about Biden and Trump’s secret love affair.
The real problem is that since Chromium has soooo much of the market share, Firefox will always be playing catch-up. If Google decides to go full rogue, ignore W3C specs entirely, and make up a bunch of their own shit that devs then start to use (because why not, when the majority of their userbase is on a Chromium-based browser), then Firefox can easily be taken out.
jcg@halubilo.social to Free and Open Source Software@beehaw.org • Ladybird seems to be the next big topic but where's the discussion around Servo? · 15 · 4 months ago
Woah, that means some day you may be able to run Servo inside of Servo.
jcg@halubilo.social to World News@beehaw.org • Mental health crisis ‘means youth is no longer one of happiest times of life’ · 4 · 4 months ago
This is why I don’t cringe much at the wacky shit the younger Gen Z and the Gen A are doing.
Well, not exactly. For example, for a game I was working on, I asked an LLM for a mathematical formula to align 3D normals. Then I couldn’t decipher what it wrote, so I just asked it to write the code to do it instead. I can understand it in code form, and it slid into my game’s code just fine.
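For the curious: “align one normal onto another” usually boils down to Rodrigues’ rotation formula, so a rough numpy sketch of what that kind of code tends to look like is below (illustrative only, not the actual code the LLM gave me, and the function name is made up):

```python
import numpy as np

def rotation_aligning(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Return a 3x3 rotation matrix that rotates unit vector a onto unit
    vector b, via Rodrigues' rotation formula."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)       # rotation axis (unnormalized)
    c = float(np.dot(a, b))  # cosine of the angle between a and b
    if np.isclose(c, -1.0):  # a and b point in opposite directions
        # Any axis perpendicular to a works for a 180-degree rotation.
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])  # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx / (1.0 + c)
```

You’d use it like rotation_aligning(surface_normal, np.array([0.0, 1.0, 0.0])) to get a matrix that stands a surface normal upright.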
Yeah, it wasn’t seamless, but that’s the frustrating hype part of LLMs. They very much won’t replace an actual programmer. But for me, working as the sole developer who actually knows how to code but doesn’t know how to do much of the math a game requires? It’s a godsend. And I guess somewhere deep in some forum somebody’s written this exact formula as a code snippet, but I think it actually just converted the formula into code and that’s something quite useful.
I mean, I don’t think you and I disagree on the limits of LLMs here. Obviously that formula it pulled out was something published before, and of course I had to direct it. But it’s these emergent solutions you can draw out of it where I find the most use. But of course, you need to actually know what you’re doing both on the code side and when it comes to “talking” to the LLM, which is why it’s nowhere near useful enough to empower users to code anything with some level of complexity without a developer there to guide it.
You can get decent results from AI coding models, though…
…as long as somebody who actually knows how to program is directing it. Like, if you tell it what inputs/outputs you want, it can write a decent function, even going so far as to comment it along the way. I’ve gotten o1 to write some basic web apps with Node and HTML/CSS without having to hold its hand much. But we simply don’t have the training, resources, or data to get it to work on units larger than that. Ultimately it’d have to learn from large-scale projects, and have the context size to hold, if not the entire project, then significant chunks of it in context, and that would require some very beefy hardware.
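As a toy example of that “describe the inputs/outputs” level of prompting (a made-up spec and function, not something any particular model produced): asking for “a function that takes a list of order dicts with customer and amount and returns total revenue per customer” should get you something roughly like this:

```python
from collections import defaultdict

def revenue_per_customer(orders: list[dict]) -> dict[str, float]:
    """Sum order amounts per customer name."""
    totals = defaultdict(float)
    for order in orders:
        # Skip malformed rows instead of crashing on missing keys.
        if "customer" not in order or "amount" not in order:
            continue
        totals[order["customer"]] += float(order["amount"])
    return dict(totals)

# Prints {'alice': 30.0, 'bob': 5.0}
print(revenue_per_customer([
    {"customer": "alice", "amount": 10},
    {"customer": "alice", "amount": 20},
    {"customer": "bob", "amount": 5},
]))
```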
jcg@halubilo.social to Asklemmy@lemmy.ml • What do you believe that most people of your political creed don't? · 4 · 4 months ago
But the reason it’s based on address is because the person you vote for has power over that location. In this system, what would that person have power over?
jcg@halubilo.social to Asklemmy@lemmy.ml • (Meta) What's up with the recent influx of posts here and on nostupidquestions asking borderline insane questions as if written by people who have never encountered another human before? · 2 · 4 months ago
To quote one of their posts directly: “I view people as more tools than anything, and I’m working on being nicer. I say this with 100% honesty, not because I’m being mean. I still feel like I deserve friends, though.” They also post about calling their basketball teammates useless and about hiding behind other players so they aren’t actually open to receive passes (but somehow this is a failure of the team’s strategy?).
If it’s a real person, then I wouldn’t necessarily call them malicious but definitely lacking in empathy. But I’m leaning more towards it being a troll.
Ah yes, the ever-elusive “tech debt”.
Compilation is CPU-bound and, depending on the language, mostly single-core per compilation unit (i.e. with LLVM that’s roughly per file, and incremental compilations will probably only touch a file or two at a time, so the biggest benefit comes from higher single-core clock speed, not higher core count). So you want to focus on CPUs with higher clock speeds.
Also, a high-speed disk (NVMe, or at least a regular SSD) gives you performance gains on larger codebases.