Do ChatGPT or other language models help you code faster and more efficiently? Is it worth spending your money on them?

    • Exocrinous@lemm.ee · 9 months ago

      The reason an LLM is like a child is that it has no critical thinking. It doesn’t think before it opens its mouth. An LLM is the part of your brain that comes up with the first, most obvious answer. It’s Wernicke’s area going wild.

  • MajorHavoc@programming.dev · 9 months ago

    AI chatbots are sometimes quicker than using official library documentation. I daresay usually quicker, for anything but documentation that I know really well already.

    I haven’t spent my own money on a development tool in a long time, but I find it worth a few of my employer’s dollars.

    It’s hardly life-changing, but it’s convenient.

    I can’t comment on its mistakes or hallucinations, because I am a godlike veteran programmer - I can exit Vim - and so far I have immediately recognized when the AI is off track, and have been able to trivially guide it back toward the solution I’m looking for.

  • kromem@lemmy.world · 9 months ago

    The chatbot version? Meh, sometimes, but I don’t use it often.

    The IDE integrated autocompletion?

    I’ll stab the MFer that tries to take that away.

    So much time saved for things that used to just be the boring busywork parts of coding.

    And while it doesn’t happen often, the times it preempts my own thinking about what to do next feel like magic.

    I often use the productivity hack of leaving a comment describing what I’m doing next before I stop for the day, and it’s very cool to sit down the next morning and see a completion that’s 80% there. Much faster to get back into the flow.
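
    To make that concrete, the note I leave is just a plain comment, and the next morning the completion fills in most of the body. This is an invented example, not real code from my codebase - the names are hypothetical:

    ```ts
    // Left at the end of the day as a prompt for tomorrow:
    // TODO next: validate the incoming webhook payload before queueing it -
    // check the signature header, reject stale timestamps, then enqueue.

    // The sort of ~80% completion that greets me the next morning
    // (verifySignature/enqueue are placeholder helpers defined below):
    function handleWebhook(payload: { body: string; signature: string; timestamp: number }): boolean {
      const MAX_AGE_MS = 5 * 60 * 1000;
      if (!verifySignature(payload.body, payload.signature)) return false;
      if (Date.now() - payload.timestamp > MAX_AGE_MS) return false;
      enqueue(payload.body);
      return true;
    }

    // Placeholder implementations so the sketch stands alone.
    function verifySignature(body: string, signature: string): boolean {
      return signature.length > 0;
    }
    function enqueue(body: string): void {
      console.log("queued:", body);
    }
    ```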

    I will note that I use it in a mature codebase, so it matches my own style and conventions. I haven’t really used it in fresh projects.

    Also AMAZING when working with popular APIs or libraries I’m adding in for the first time.

    Edit: I should also note that I have over a decade of experience, so when it gets things wrong it’s fairly obvious and easily fixed. I can’t speak to how useful or harmful it would be as a junior dev. I will say that sometimes when it is wrong, it’s because it’s following the more standard form of a naming convention in my code rather than an exception to it, and I have even ended up with some productive refactors prompted by its mistakes.

  • Saigonauticon@voltage.vn · 9 months ago

    Not really. Writing code is the easy part; it’s not the rate-limiting step. The hard part is getting requirements out of customers, who rarely know what they want. I don’t need to push out more code and features faster - that would just turn things into unmaintainable spaghetti.

    I might send it a feature list and ask it “what features did they forget?” or “can you suggest more features?”, or even better – “which features are the least important for X and can be eliminated?”. In other words, let it do the job of middle management and I’ll just do the coding myself.
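
    If I did wire that up, it’d only take a few lines against a chat API. A minimal sketch, assuming the official openai Node client and an OPENAI_API_KEY in the environment (the model name is just an example):

    ```ts
    import OpenAI from "openai";

    // Let the model play middle management: triage a feature list
    // instead of generating code.
    const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

    const featureList = [
      "CSV export",
      "Dark mode",
      "Realtime sync",
      "Audit log",
    ].join("\n");

    async function triage(): Promise<void> {
      const res = await client.chat.completions.create({
        model: "gpt-4o-mini", // example model name
        messages: [
          {
            role: "user",
            content: `Here is a feature list:\n${featureList}\n\nWhich features are the least important for a small internal tool and can be eliminated?`,
          },
        ],
      });
      console.log(res.choices[0].message.content);
    }

    triage();
    ```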

    Anyway, ChatGPT blocks my country (I’ve confirmed it’s on their end).

  • hubobes@sh.itjust.works · 9 months ago

    Mostly as a search engine. I have it set up to only respond with answers it has web sources for. Code completion like Copilot can be useful, but 90% of the completions don’t really save me any time. The other 10% are awesome, though.

    So I could easily drop Copilot, but ChatGPT or HuggingChat used like a search engine is awesome.

  • intensely_human@lemm.ee · 9 months ago

    Absolutely. I just built a little proof-of-concept thing where I loaded some GIS data into a Google Map to display the major rivers of the world.

    ChatGPT, the v4 that I pay $20/mo for, was like someone with deep knowledge of all the technologies and APIs involved.

    I’m gonna post a link to screenshots of the convo so you can see exactly how it went.
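
    The core of what it walked me through boiled down to something like this - a from-memory sketch, not the literal code from the convo, with a placeholder GeoJSON URL and assuming the Maps JavaScript API (plus its TypeScript types) is already loaded on the page:

    ```ts
    // Load GIS data (GeoJSON) into a Google Map via the data layer
    // and style the river features.
    // Typically passed as the `callback` when loading the Maps script.
    function initRiversMap(): void {
      const map = new google.maps.Map(
        document.getElementById("map") as HTMLElement,
        { center: { lat: 20, lng: 0 }, zoom: 2 },
      );

      // Placeholder URL - the real one pointed at a major-rivers GeoJSON dataset.
      map.data.loadGeoJson("https://example.com/major-rivers.geojson");

      // Draw the rivers as blue lines.
      map.data.setStyle({ strokeColor: "#1565c0", strokeWeight: 2 });
    }
    ```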

    • intensely_human@lemm.ee · 9 months ago

      Not the whole thing because it’s longer than I remember.

      https://imgur.com/a/Jh5BkMZ

      But just consider how long it would have taken me to answer each of those questions by googling and reading old forums and Stack Overflow posts.

      Much like sitting next to someone with experience, a question that could take me hours to answer on the internet took me only seconds to answer by asking directly. GPT’s responses are still long, so it’s not pure conversational style, but the longer responses aren’t wasted fluff. It’s all relevant to what I asked.

      Natural language as a way to query a knowledge base is enormously useful. Especially in a field that requires updating your existing knowledge as often as tech work does.

      • MajorHavoc@programming.dev · 9 months ago

        Natural language as a way to query a knowledge base is enormously useful.

        Great post. I want to highlight your sentence above as a key point, for folks trying to come to grips with where and how to use the current generation of AI.

    • Socsa@sh.itjust.works · 9 months ago

      Yes, by far the most useful thing is stuff like API and keyword documentation for poorly documented code. It’s literally the promise of self-generating docs for tedious shit.

  • Gamma@beehaw.org · 9 months ago

    Yep! It’s the best autocorrect I’ve ever used, and it does a decent job explaining config files when needed. Just don’t let any unvetted code in, because it can have some quirky bugs.

  • detectivemittens@beehaw.org · 9 months ago

    Yes and no. I compare it to a graphing calculator: I already know how to graph a parabola by hand, but I don’t want to have to do it over and over again. That’s just busywork for me.

    LLMs are similar that way. There’s often a lot of boilerplate to get out of the way that’s just busy work to write over and over again. LLMs are great at generating some of that scaffolding.

    LLMs have also become a lot more helpful as Google search has gotten worse over time.

  • YourAvgMortal@lemmy.world · 9 months ago

    As someone who is just getting started in a new language (Rust), I find it can be very helpful when trying to figure out why something doesn’t work, or for picking up tips I don’t know (even if it gets confused sometimes).

    However, for my regular languages and work, I imagine it would be a lot slower.

  • d0ntpan1c@lemmy.blahaj.zone · 9 months ago

    I tried to use Copilot but it just kept getting in the way. The advanced autofill was nice sometimes, but it’s not like I’m making a list of countries or some mock data that often…

    As far as generated code… especially with html/css/js frontend code, it consistently output extremely inaccessible code. Which is baffling considering how straightforward the MDN, web.dev, and WCAG docs are. (Then again, LLMs can’t really understand when an inaccessible pattern is only being used to demonstrate an onclick instead of a semantic a, or to explain aria-* attributes…)
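
    The most common failure, for anyone who hasn’t hit it - a simplified illustration, not Copilot’s literal output:

    ```ts
    // The pattern it kept producing: a click handler on a plain div,
    // which keyboard and screen-reader users can't reach or activate.
    const bad = document.createElement("div");
    bad.textContent = "Save";
    bad.addEventListener("click", () => save());

    // What the docs actually recommend: a real button is focusable and
    // announced correctly, with no extra ARIA needed.
    const good = document.createElement("button");
    good.type = "button";
    good.textContent = "Save";
    good.addEventListener("click", () => save());

    document.body.append(bad, good);

    // Placeholder so the snippet stands alone.
    function save(): void {
      console.log("saved");
    }
    ```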

    It was so bad so often that I don’t use it much for languages I’m unfamiliar with either. If it puts out garbage where I’m an expert, I don’t want to be responsible for its output where I have no knowledge.

    I might consider trying an LLM that’s much more tuned to a single language or purpose. I don’t really see these generalized ones staying popular in the long run, especially once the rose-tinted glasses come off.

  • Omega_Haxors@lemmy.ml · 9 months ago

    I’m pretty sure that even if it were helpful, they wouldn’t use it on principle. Shit’s basically plagiarism laundering.

    EDIT: Oh you’re talking about devs who use Lemmy, not the Lemmy devs.

  • thepiguy@lemmy.ml · 9 months ago

    I mostly use shell-gpt and ask it trivial questions. Saves me the time of switching to a browser, and I have it always running in a tmux pane. As for code, I found it helpful for getting started when writing a piece of functionality, but the actual engineering part should be done manually imo. As for spending money on it: it depends on how you benefit from it. I spend about 50c on my OpenAI API key, but I know a friend who used ollama (I think with some mistral derivative) locally on a gaming laptop with decent enough results.
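
    For reference, the local route is also pretty minimal - roughly this kind of call against a running Ollama server, assuming the default port and a pulled mistral model:

    ```ts
    // Minimal sketch: ask a locally running Ollama instance a trivial question,
    // the same way I'd fire off a quick shell-gpt query.
    async function askLocal(prompt: string): Promise<string> {
      const res = await fetch("http://localhost:11434/api/generate", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ model: "mistral", prompt, stream: false }),
      });
      const data = (await res.json()) as { response: string };
      return data.response;
    }

    askLocal("How do I list files by size in bash?").then(console.log);
    ```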

  • SecretPancake@feddit.de · 9 months ago

    It’s sometimes helpful when working with libraries that are not well documented. Or to write some very barebones and not super useful tests if I’m that lazy. But I’m not going to let it code for me. The results suck and I don’t want to become a „prompt engineer“.

  • m-p{3}@lemmy.ca · 9 months ago

    More of a hobbyist here, but it helps me find that typo I made earlier that went unnoticed. And for command-line utilities, it’s nice being able to ask for what you want to do and get the parameters you’re looking for right away.

  • XEAL@lemm.ee · 9 months ago

    I’m no real dev, but yes.

    Even the free version is helpful.