By “good” I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically from an engineering perspective?

If not, learning how to write code seems a tad trivial now.

  • edgemaster72@lemmy.world

    understanding what the machine spits out

    This is exactly why people will still need to learn to code. It might write good code, but until it can write perfect code every time, people should still know enough to check and correct the mistakes.

      • 667@lemmy.radio

        I used an LLM to write some code I knew I could write, but was a little too lazy to do. Coding is not my trade, but I did learn Python during the pandemic. Had I not known how to code, I would not have been able to direct the LLM to make the required corrections.

        In the end, I got decent code that worked for the purpose I needed.

        I still didn’t write any docstrings or comments.

        • Em Adespoton@lemmy.ca

          I would not trust the current batch of LLMs to write proper docstrings and comments, as the code they are trained on does not have proper docstrings and comments.

          And this means that they aren’t writing professional code.

          They’re great for quickly generating useful and testable code snippets, though.

          • GBU_28@lemm.ee

            It can absolutely write a docstring for a provided function. That and unit tests are some of the easiest things for it, because it has the source code to work from.
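
            For example (a hypothetical sketch, not output from any particular model): given a small function like the one below, generating the docstring and a unit test is mostly a matter of restating what the source already says.

            import unittest

            # Hypothetical example function an LLM might be handed.
            def moving_average(values: list[float], window: int) -> list[float]:
                """Return the simple moving average of `values` over `window` samples.

                Raises ValueError if `window` is not between 1 and len(values).
                """
                if window <= 0 or window > len(values):
                    raise ValueError("window must be between 1 and len(values)")
                return [
                    sum(values[i : i + window]) / window
                    for i in range(len(values) - window + 1)
                ]

            # A matching unit test, equally easy to derive from the source above.
            class TestMovingAverage(unittest.TestCase):
                def test_basic_window(self):
                    self.assertEqual(moving_average([1.0, 2.0, 3.0, 4.0], 2), [1.5, 2.5, 3.5])

                def test_bad_window_raises(self):
                    with self.assertRaises(ValueError):
                        moving_average([1.0, 2.0], 5)

            if __name__ == "__main__":
                unittest.main()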

    • visor841@lemmy.world

      For a very long time people will also still need to understand what they are asking the machine to do. If you tell it to write code for an impossible concept, it can’t make it. If you ask it to write code to do something incredibly inefficiently, it’s going to give you code that is incredibly inefficient.
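
      As a hypothetical illustration (Python, names made up): both functions below deduplicate a list, but if you describe the quadratic approach, that is exactly what you will get back.

      # Two ways to deduplicate a list while keeping order; an LLM will hand
      # you either one, depending on what you asked for.

      def dedupe_quadratic(items):
          # O(n^2): rescans the result list for every element.
          result = []
          for item in items:
              if item not in result:
                  result.append(item)
          return result

      def dedupe_linear(items):
          # O(n): tracks already-seen elements in a set instead.
          seen = set()
          result = []
          for item in items:
              if item not in seen:
                  seen.add(item)
                  result.append(item)
          return result

      print(dedupe_quadratic([3, 1, 3, 2, 1]))  # [3, 1, 2]
      print(dedupe_linear([3, 1, 3, 2, 1]))     # [3, 1, 2]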

    • scarabic@lemmy.world

      I’ve even seen human engineers’ code thrown out because no one else could understand it. Back in the day, one webdev took it upon himself to whip up a mobile version of our company’s very complex website. He did it as a side project. It worked. It was complete. It looked good. It was very fast. The code was completely unreadable by anyone else. We didn’t use it.

  • Emily (she/her)@lemmy.blahaj.zone

    After a certain point, learning to code (in the context of application development) becomes less about the lines of code themselves and more about structure and design. In my experience, LLMs can spit out well-formatted and reasonably functional short code snippets, with the caveat that they sometimes misunderstand you or, if you’re writing UI code, make very strange decisions (since they have no spatial/visual reasoning).

    Anyone with a year or two of practice can write mostly clean code like an LLM. But most codebases are longer than 100 lines, and your job is to structure that program and introduce patterns to make it maintainable. LLMs can’t do that; only you can (and you can’t skip learning to code to jump straight to architecture and patterns).

    • jacksilver@lemmy.world

      I think this is the best response in this thread.

      Software engineering is a lot more than just writing some lines of code and requires more thought and planning than can be realistically put into a prompt.

      • Em Adespoton@lemmy.ca

        The other thing is, an LLM generally knows about all the existing libraries and what they contain. I don’t. So while I could code a pretty good program in a few days from first principles, an LLM is often able to stitch together some elegant glue code using a collection of existing library functions in seconds.
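
        A hypothetical example of that kind of glue code, in Python: counting the most common words across a folder of text files is a few standard-library calls, provided you already know that pathlib, collections and re exist.

        # Glue code built entirely from standard-library pieces.
        import re
        from collections import Counter
        from pathlib import Path

        def top_words(folder: str, n: int = 10) -> list[tuple[str, int]]:
            """Return the n most common words across all .txt files in a folder."""
            counts = Counter()
            for path in Path(folder).glob("*.txt"):
                words = re.findall(r"[a-z']+", path.read_text(encoding="utf-8").lower())
                counts.update(words)
            return counts.most_common(n)

        # Example usage (assumes a ./notes directory full of .txt files):
        # print(top_words("notes"))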

    • netvor@lemmy.world

      Also, in my experience, LLMs often propose solutions which work but are way too complex.

      Story time: just yesterday, in VueJS I was trying to iterate over a list of items and render the .text of each item as HTML, but I needed to process it first. Note that in VueJS this is done by adding e.g. <span v-html="item.text"></span>, where the content of the attribute is the JavaScript expression needed to get the text.

      First I asked ChatGPT to write the function for processing the text. That worked pretty well and even used part of the JavaScript API which I was not aware of.

      Next, I had a “dumb moment” when I did not realize that, since I’m iterating through items, I can just say <span v-html="processHtml(item.text)"></span>; that’s all I really needed. Somehow I thought (or should I say “hallucinated”, ba dum tsss) for a moment that v-html is special or something (it is used differently from the most common type of syntax). So I went ahead and asked ChatGPT how to render the processed texts while iterating.

      It came up with a rather contrived solution which involved creating another computed property containing a list of processed texts. I started to integrate it into the existing loop: I would have to add an index and use that index to pull the processed text from the computed property, which already felt a little bit weird.

      That’s when it struck me: no, no, no, I can just f*ing use the function.

      TL;DR: The point is, while ChatGPT was helpful I still needed to babysit it. And if I hadn’t snapped out of my lazy moment, or if I simply didn’t know better, I would have ended up with code which is more complex and more surprising, which means harder to reason about for both humans and LLMs. (For humans because it forces you to speculate about the coder’s intent, and for LLMs because it’s less likely to be reminiscent of the surrounding code in its training data.)

  • MajorHavoc@programming.dev

    Great question.

    is there any legit reason anyone should learn advanced coding techniques?

    Don’t buy the hype. LLMs can produce all kinds of useful things but they don’t know anything at all.

    No LLM has ever engineered anything. And there’s only sparse current evidence (a concession to a good point made in response) that any AI ever will.

    Current learning models are like trained animals in a circus. They can learn to do any impressive thing you can imagine, by sheer rote repetition.

    That means they can engineer a solution to any problem that has already been solved millions of times already. As long as the work has very little new/novel value and requires no innovation whatsoever, learning models do great work.

    Horses and LLMs that solve advanced algebra don’t understand algebra at all. It’s a clever trick.

    Understanding the problem and understanding how to politely ask the computer to do the right thing has always been the core job of a computer programmer.

    The bit about “politely asking the computer to do the right thing” makes massive strides in convenience every decade or so. Learning models are another such massive stride. This is great. Hooray!

    The bit about “understanding the problem” isn’t within the capabilities of any current learning model or AI, and there’s no current evidence that it ever will be.

    Someday they will call the job “prompt engineering” and on that day it will still be the same exact job it is today, just with different bullshit to wade through to get it done.

    • chknbwl@lemmy.worldOP

      I appreciate your candor, I had a feeling it was cock and bull but you’ve answered my question fully.

    • ConstipatedWatson@lemmy.world

      Wait, if you can (or anyone else chipping in), please elaborate on something you’ve written.

      When you say

      That means they can engineer a solution to any problem that has already been solved millions of times already.

      Hasn’t Google already made advances through its AlphaGeometry AI? Admittedly, that’s a geometry setting which may be easier to code than other parts of Math and there isn’t yet a clear indication AI will ever be able to reach a certain level of creativity that the human mind has, but at the same time it might get there by sheer volume of attempts.

      Isn’t this still engineering a solution? Sometimes even researchers reach new results by having a machine verify many cases (see the proof of the Four Color Theorem). It’s true that in the Four Color Theorem researchers narrowed down the cases to try, but maybe a similar narrowing could be done by an AI (sooner or later)?

      I don’t know what I’m talking about, so I should shut up, but I’m hoping someone more knowledgeable will correct me, since I’m curious about this

      • MajorHavoc@programming.dev

        Isn’t this still engineering a solution?

        If we drop the word “engineering”, we can focus on the point - geometry is another case where rote learning and repetition can do a pretty good job. Clever engineers can teach computers to do all kinds of things that look like novel engineering, but aren’t.

        LLMs can make computers look like they’re good at something they’re bad at.

        And they offer hope that computers might someday not suck at what they suck at.

        But history teaches us probably not. And current evidence in favor of a breakthrough in general artificial intelligence isn’t actually compelling, at all.

        Sometimes even researchers reach new results by having a machine verify many cases

        Yes. Computers are good at that.

        So far, they’re no good at understanding the four color theorem, or at proposing novel approaches to solving it.

        They might never be any good at that.

        Stated more formally, P may equal NP, but probably not.

        Edit: To be clear, I actually share a good bit of the same optimism. But I believe it’ll be hard won work done by human engineers that gets us anywhere near there.

        Ostensibly God created the universe in Lisp. But actually he knocked most of it together with hard-coded Perl hacks.

        There’s lots of exciting breakthroughs coming in computer science. But no one knows how long and what their impact will be. History teaches us it’ll be less exciting than Popular Science promised us.

        Edit 2: Sorry for the rambling response. Hopefully you find some of it useful.

        I don’t at all disagree that there’s exciting stuff afoot. I also think it is being massively oversold.

      • metiulekm@sh.itjust.works

        Hasn’t Google already made advances through its AlphaGeometry AI? Admittedly, that’s a geometry setting which may be easier to code than other parts of Math and there isn’t yet a clear indication AI will ever be able to reach a certain level of creativity that the human mind has, but at the same time it might get there by sheer volume of attempts.

        Wanted to focus a bit on this. The thing with AlphaGeometry and AlphaProof is that they really treat doing math as a game, not unlike chess. For example, AlphaGeometry has a basic set of rules; it can apply them, and it knows when it is done. And when it is done, you can be 100% sure that the solution is correct, because the rules of the game are known; the 28/42 score reported in the article is really four perfect scores and two zeros. Those systems do use LLMs, but they really are only there to suggest to the system what to try doing next. There is a very enlightening picture in the AlphaGeometry paper here: https://www.nature.com/articles/s41586-023-06747-5#Fig1

        You can automatically verify correctness of code the same way. For example Lean, the language AlphaProof uses internally, can be used for general programming. In general, we call similar programming techniques formal methods. But most people don’t do this, since this is more time-consuming than normal programming, and in many cases we don’t even know how to define the goal of our code (how to define correct rendering in a game?). So this is only really done when the correctness of the program is critical, like famously they verified the code of the automatic metro in Paris this way. And so most people don’t try to make programming AI work this way.
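
        For a taste of what that looks like, here is a minimal Lean 4 sketch (proved with Nat.add_comm from the standard library): if it compiles, the statement is proven under the rules of the game, no test cases required.

        -- A machine-checked statement in Lean 4. The compiler itself is the verifier.
        theorem add_comm_example (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b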

    • SolOrion@sh.itjust.works

      That’s some 40k shit.

      “What does it mean?” “I do not know, but it appeases the machine spirit. Quickly, recite the canticles.”

      • RebekahWSD@lemmy.world

        This is directly how we’re getting to a 40k future and I hate it. The bad future!

        If we must I might join the Mechanicus though. I’m good at chanting and doing things by rote.

    • finestnothing@lemmy.world

      My CTO thoroughly believes that within 4-6 years we will no longer need to know how to read or write code, just how to ask an AI to do it. Coincidentally, he also doesn’t code anymore and hasn’t for over 15 years.

      • recapitated@lemmy.world

        From a business perspective, no shareholder cares about how good an employee is at personally achieving a high degree of skill. They only care about selling and earning, and to a lesser degree about an enduring reputation for longer-term earnings.

        Economics could very well drive this forward. But I don’t think the craft will be lost. People will need to supervise this progress as well as collaborate with the machines to extend its capabilities and dictate its purposes.

        I couldn’t tell you if we’re talking on a time scale of months or decades, but I do think “we” will get there.

        • whyrat@lemmy.world

          Hackers and hobbyists will persist despite any economics. Much of what they do I don’t see AI replacing, as AI creates based off of what it “knows”, which is mostly things it has previously ingested.

          We are not (yet?) at the point where an LLM does anything other than put together code snippets it’s seen or derived. If you ask it to find a new attack vector or write code dissimilar to something it’s seen before, the results are poor.

          But the counterpoint every developer needs to keep in mind: AI will only get better. It’s not going to lose any of the current capabilities to generate code, and very likely will continue to expand on what it can accomplish. It’d be naive to assume it can never achieve these new capabilities… The question is just when & how much it costs (in terms of processing and storage).

          • recapitated@lemmy.world

            Agree, and the point I always want to make is that any LLM or neural net or any other AI tech is going to be a mere component in a powerful product, rather than the entirety of the product.

            The way I think of it is that my brain is of little value without my body, and my person is of little value without my team at work. I don’t exist in a vacuum but I can be highly productive within my environment.

      • Bilb!@lem.monster

        I think he’s correct and there’s a ton of cope going on on lemmy right now. I also think tons of art/graphic design jobs will disappear never to return.

    • Angry_Autist (he/him)@lemmy.world

      Don’t be; there will come a time when nearly all code is AI-created and not human-readable.

      You need to worry for the future, when big data sites are running code they literally don’t understand and have no way to verify, because of how cheap and relatively effective it is.

      Then, after that, LLMs will get better at coding than any human, but the result will still be black-box, human-unreadable code, and there will be no chain of discipline left to teach new programmers.

      Hardly anyone is taking this seriously because corporations stand to make a fucktonne of money, and normal people are in general clueless about complex subjects that require a nuanced understanding, yet strangely arrogant about their ignorant opinions based on movies and their drinking buddies’ malformed opinions.

  • Nomecks@lemmy.ca

    I use it to write code, but I know how to write code and it probably turns a week of work for me into a day or two. It’s cool, but not automagic.

      • Nomecks@lemmy.ca

        Ask it to make a function, then do some other function, then make them work together etc. Making it write a lot in one go won’t work. It’s more pair programming than having it write for you.

  • gravitas_deficiency@sh.itjust.works

    LLMs are just computerized puppies that are really good at performing tricks for treats. They’ll still do incredibly stupid things pretty frequently.

    I’m a software engineer, and I am not at all worried about my career in the long run.

    In the short term… who fucking knows. The C-suite and MBA circlejerk seems to have decided they can fire all the engineers because wE CAn rEpLAcE tHeM WitH AI 🤡 and then the companies will have a couple absolutely catastrophic years because they got rid of all of their domain experts.

  • recapitated@lemmy.world

    In my experience they do a decent job of whipping out mindless minutiae and things that are well-known patterns in very popular languages.

    They do not solve problems.

    I think for an “AI” product to be truly useful at writing code, it would need to incorporate the LLM as a mere component, with something facilitating checks through static analysis and maybe some other technologies, perhaps even mulling the result through a loop over the components until they’re all satisfied, before finally delivering it to the user as a proposal.
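
    A rough sketch of that idea in Python (call_llm and run_static_analysis are placeholders, not any real API):

    def call_llm(prompt: str) -> str:
        """Placeholder for whatever model or API is actually used."""
        raise NotImplementedError

    def run_static_analysis(code: str) -> list[str]:
        """Placeholder: return findings from a linter / type checker / etc."""
        raise NotImplementedError

    def propose_code(task: str, max_rounds: int = 3) -> str:
        """Generate code, check it, and feed the findings back until clean."""
        code = call_llm(f"Write code for this task:\n{task}")
        for _ in range(max_rounds):
            findings = run_static_analysis(code)
            if not findings:
                break  # all checks satisfied; hand it over as a proposal
            code = call_llm(
                "This code has problems:\n" + code
                + "\nStatic analysis reported:\n" + "\n".join(findings)
                + "\nPlease fix them."
            )
        return code  # still only a proposal for a human to review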

    • Croquette@sh.itjust.works

      It’s a decent starting point for a new language. I had to learn webdev as an embedded C coder, and using an LLM and cross-referencing the official documentation makes a new language much more approachable.

      • recapitated@lemmy.world

        I agree, LLMs have been helpful in pointing me in the right direction and helping me rethink what questions I actually want to ask in disciplines I’m not very familiar with.

    • thanks_shakey_snake@lemmy.ca

      Those kinds of patterns are already emerging! That “mulling the result through a loop” step is called “reflection,” and it does a great job of catching mistakes and hallucinations. Nothing is on the scale of doing the whole problem-solving and implementation from business requirements to deployed product-- probably never will be, IMO-- but this “making the LLM a component in a broader system with diverse tools” is definitely something that we’re currently figuring out patterns for.

  • nous@programming.dev

    They can write good short bits of code. But they also often produce bad and even incorrect code. I find it more effort to read and debug their code than to just write it myself to begin with the vast majority of the time, and overall it just wastes more of my time.

    Maybe in a couple of years they might be good enough. But it looks like their growth is starting to flatten off, so it is up for debate whether they will get there in that time.

  • xmunk@sh.itjust.works

    No, a large part of what “good code” means is correctness. LLMs cannot properly understand a problem so while they can produce grunt code they can’t assemble a solution to a complex problem and, IMO, it is impossible for them to overtake humans unless we get really lazy about code expressiveness. And, on that point, I think most companies are underinvesting into code infrastructure right now and developers are wasting too much time on unexpressive code.

    The majority of work that senior developers do is understanding a problem and crafting a solution appropriate to it - when I’m working my typing speed usually isn’t particularly high and the main bottleneck is my brain. LLMs will always require more brain time while delivering a savings on typing.

    At the moment I’d also emphasize that they’re excellent at popping out algorithms I could write in my sleep but require me to spend enough time double checking their code that it’s cheaper for me to just write it by hand to begin with.

    • A_A@lemmy.world

      Yes … and it doesn’t know when it is on time.
      Also, machines are getting better and they can help us with inspiration.

  • GBU_28@lemm.ee

    For basic boilerplate like routes for an API, an ETL script from sample data to DB tables, or other similar basics, yeah, it’s perfectly acceptable. You’ll need to swap out dummy addresses, and maybe change a choice or two, but it’s fine.
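
    For concreteness, this is the kind of boilerplate meant here, sketched with Flask (the route and “table” are dummies to swap out; use whatever framework you actually have):

    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ITEMS = {}  # stand-in for a real database table

    @app.route("/items/<int:item_id>", methods=["GET"])
    def get_item(item_id):
        item = ITEMS.get(item_id)
        if item is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(item)

    @app.route("/items", methods=["POST"])
    def create_item():
        payload = request.get_json()
        item_id = len(ITEMS) + 1
        ITEMS[item_id] = payload
        return jsonify({"id": item_id}), 201

    if __name__ == "__main__":
        app.run(debug=True)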

    But when you’re trying to organize more complicated business logic or debug complicated dependencies, it falls over.

  • Ookami38@sh.itjust.works

    Of course it can. It can also spit out trash. AI, as it exists today, isn’t meant to be autonomous, where you simply ask it for something and it spits out a finished product. It’s meant to work with a human on a task. Assuming you have an understanding of what you’re trying to do, an AI can probably provide you with a pretty decent starting point. It tends to be good at analyzing existing code as well, so pasting your code into GPT and asking it why it’s doing a thing usually works pretty well.

    AI is another tool. Professionals will get more use out of it than laymen. Professionals know enough to phrase requests that are within the scope of the AI. They tend to know how the language works, and thus can review what the AI outputs. A layman can use AI to great effect, but will run into problems as they start butting up against their own limited knowledge.

    So yeah, I think AI can make some good code, supervised by a human who understands the code. As it exists now, AI requires human steering to be useful.

  • bionicjoey@lemmy.ca

    This question is basically the same as asking “Are 2d6 capable of rolling a 9?”

      • etchinghillside@reddthat.com

        I wouldn’t exactly take the comment as negative.

        The output of current LLMs is hit or miss sometimes. And when it misses, you might find yourself in a long chain of persuading a sassy robot into writing things the way you intended.

      • bionicjoey@lemmy.ca

        Sorry, I wasn’t trying to berate you. Just trying to illustrate the underlying assumption of your question

    • etchinghillside@reddthat.com

      Yes, two six-sided dice (2d6) are capable of rolling a sum of 9. Here are the possible combinations that would give a total of 9:

      • 3 + 6
      • 4 + 5
      • 5 + 4
      • 6 + 3

      So, there are four different combinations that result in a roll of 9.

      See? LLMs can do everything!
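
      (For what it’s worth, a quick brute-force check agrees: four of the 36 ordered rolls sum to 9.)

      from itertools import product

      rolls = [(a, b) for a, b in product(range(1, 7), repeat=2) if a + b == 9]
      print(rolls)               # [(3, 6), (4, 5), (5, 4), (6, 3)]
      print(len(rolls), "of 36") # 4 of 36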

        • Fonzie!@ttrpg.network

          I asked four LLM-based chatbots over DuckDuckGo’s anonymised service the following:

          “How many r’s are there in Strawberry?”


          GPT-4o mini

          There are three “r’s” in the word “strawberry.”

          Claude 3 Haiku

          There are 3 r’s in the word “Strawberry”.

          Llama 3.1 70B

          There are 2 r’s in the word “Strawberry”.

          Mixtral 8x7B

          There are 2 “r” letters in the word “Strawberry”. Would you like to know more about the privacy features of this service?


          They got worse at the end, but at least GPT and Claude can count letters.

  • DeLacue@lemmy.world

    That all depends on where the data set comes from. The code you’ll get out of an LLM is the average code of the data set. If it’s scraped from the internet (which is very likely) the code you’ll get will be an amalgam of concise examples from one website, incorrect examples from another, bits from blogs with all the typos and all the gunk and garbage that’s out there.

    Getting LLM code to work well takes an understanding of what the code it gives you actually does and why it’s bad. It will always be bad, because it cannot be better than its dataset, and for a dataset to be big enough to train an LLM it has to include everything they can get, including all the trash. But it can be good for providing you a framework to start with. It is, however, never going to replace actual programming and an understanding of programming. The talk of LLMs completely replacing programmers is mostly coming from people who do not understand coding or LLMs at all.

    • GrammarPolice@lemmy.world

      Can’t LLMs eventually gain some form of “sentience” and be able to self-correct? A sort of thinking-before-speaking kind of situation.

      • DeLacue@lemmy.world

        This question right here perfectly encapsulates everything wrong with LLMs right now. They could be good tools, but the people pushing them have no idea what they even are. LLMs do not make decisions. All the decisions an LLM appears to make were made in the dataset. All those things that an LLM does that make it seem intelligent were done or said by a human somewhere on the internet. It is a statistical model that determines what output is most likely to come next. That is it. It is nothing else. It is not smart. It does not and cannot make decisions. It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

        Listen, think of it like this: a man decides to take exams to become a doctor in France, but for some reason he doesn’t learn either French or medicine. No, no, instead he studies every former exam and all the answers to them. He gets very good at regurgitating those answers, so much so that he can even pass the exam. But at no point does he understand what any of it means, and when asked new and novel questions he provides utter nonsense answers. No matter how good he gets at memorising those answers, he will never get any better at medicine. LLMs are as likely to gain sentience as my Excel spreadsheets are.
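
        To make “what output is most likely to come next” concrete, here is a toy sketch: a bigram counter, vastly simpler than a real LLM, but the same flavour of statistics.

        from collections import Counter, defaultdict

        # Count which word tends to follow which in a tiny "training set".
        text = "the cat sat on the mat the cat ate the fish".split()
        followers = defaultdict(Counter)
        for current, nxt in zip(text, text[1:]):
            followers[current][nxt] += 1

        def most_likely_next(word: str) -> str:
            """Return the most frequent follower of `word` in the training text."""
            return followers[word].most_common(1)[0][0]

        print(most_likely_next("the"))  # "cat" (seen twice; "mat" and "fish" once each)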

        • hikaru755@lemmy.world

          It is an algorithm that searches a dataset and when it can’t find something it’ll provide convincing-looking gibberish instead.

          This is very misleading. An LLM doesn’t have access to its training dataset in order to “search” it. Producing convincing looking gibberish is what it always does, that’s its only mode of operation. The key is that the gibberish that comes out of today’s models is so convincing that it actually becomes broadly useful.

          That also means that no, not everything an LLM produces has to have been in its training dataset, they can absolutely output things that have never been said before. There’s even research showing that LLMs are capable of creating actual internal models of real world concepts, which suggests a deeper kind of understanding than what the “stochastic parrot” moniker wants you to believe.

          LLMs do not make decisions.

          What do you mean by “decisions”? LLMs constantly make decisions about which token comes next; that’s all they do, really. And in doing so, on a higher, emergent level, they can make any kind of decision that you ask them to. The only question is how good those decisions are going to be, which in turn entirely depends on the training data, how good the model is, and how good your prompt is.