• Nibodhika@lemmy.world · 3 months ago

    Yes, I’m stuck with the paradigm that computers are not intelligent and can’t understand what I mean. There’s a term for software that can: “AGI”.

    Any programmer knows that using LLMs for programming ends up in one of the following cases:

    • It’s not used in any meaningful way, i.e. it generates boilerplate code like getters and setters (see the sketch after this list), or it’s used as a substitute for Google.
    • It slows the team down, because everyone needs to scrutinize and understand the generated code.
    • It produces a shitty version of the program, because it doesn’t understand the actual need or the corner cases.

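    For illustration, this is the kind of getter/setter boilerplate the first bullet means; the class and its fields are invented for the example, not taken from anyone’s prompt:

    ```java
    // Hypothetical example: the sort of boilerplate an LLM can churn out
    // trivially. The User class and its fields are made up for illustration.
    public class User {
        private String name;
        private int age;

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public int getAge() { return age; }
        public void setAge(int age) { this.age = age; }
    }
    ```

    Code like this is mechanical to begin with, which is exactly why generating it proves nothing about understanding.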
    Only people who don’t understand programming think LLMs are useful as they are now. Until computers are actually intelligent, actually understand what you’re asking them to do, think through all of the corner cases, and make decisions about all of the things you didn’t specifically ask them to consider, those 3 cases will be the only outcomes for “AI” in programming.

    Don’t believe me? Tell me a prompt you would use to generate a useful program (i.e. not a Hello World) and I’ll list multiple assumptions you’re making about how it needs to work that you did not include in your prompt, and that the “AI” therefore will not cover.
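
    To make that concrete, here is a hypothetical version of the exercise: a deliberately vague prompt and one literal implementation, with each unstated assumption flagged in a comment. The prompt, class name, and every behavior choice are invented for illustration:

    ```java
    // Hypothetical prompt: "read a file and count the words in it."
    // A literal implementation; every comment marks an assumption the
    // prompt never specified and the model has to guess at.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;

    public class WordCount {
        public static void main(String[] args) throws IOException {
            // Assumes the file path arrives as the first CLI argument.
            // Assumes UTF-8 encoding and that the file fits in memory.
            String text = Files.readString(Path.of(args[0]));
            // Assumes a "word" is a whitespace-separated token:
            // is "can't" one word? Is "foo-bar"? Do numbers count?
            String[] words = text.trim().split("\\s+");
            // Assumes an empty file should report 0, not crash or print 1.
            System.out.println(text.isBlank() ? 0 : words.length);
        }
    }
    ```

    Even for a toy like this, the encoding, the tokenization rule, the input channel, and the empty-file behavior were all decided for you; scale that up to a real program and the gap between what you asked and what you meant is where the shitty version comes from.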