Hi, I am a computer nerd. I also took a computer programming class and got the highest score in the class, but I never followed up with advanced classes. Recently, I’ve thought of different ideas for software I’d like to try to create. I’ve heard about vibe coding. I know real programmers make fun of it, but I also have heard so much about it and people using it and paying for it that I have a hard time believing it writes garbage code all the time.

However, whenever I am trying to do things in linux and don’t know how and ask an LLM, it gets it wrong like 85% of the time. Sometimes it helps, but a lot of times it’s fucking stupid and just leads me down a rabbit hole of shit that won’t work. Is all vibe coding actually like that too or does some of it actually work?

For example, I know how to set up a server, ssh in, and get some stuff running. I have an idea for an App and since everyone uses smart phones (unfortunately), I’d probably try to code something for a smart phone. But would it be next to impossible for someone like me to learn? I like nerdy stuff, but I am not experienced at all in coding.

I also am not sure I have the dedication to do hours and hours of code, despite possible autism, unless I were highly fucked up, possibly on huge amounts of caffeine or microdosing something. But like, it doesn’t seem impossible.

Is this a rabbit hole worth falling into? Do most Apps just fail all the time? Is making an App nowadays like trying to win a lotto?

It would be cool to hear from real App developers. I am getting laid off, my expenses are low because I barely made anything at my job, I’ll be getting unemployment, and I am hoping I can get a job working 20-30 hours a week and pay for my living expenses, which are pretty low.

Is this a stupid idea? I did well in school, but I’m not sure that means anything. Also, when I was in the programming class, the TA seemed much, much smarter at programming and could intuitively solve coding problems much faster, likely due to a higher IQ. I’m honestly not sure my IQ is high enough to code. It’s probably around 112, but I also sometimes did better than everyone on tests for some reason, maybe because I’m a nerd. I’m not sure I’ll have the insight to tackle hard coding problems, but I’m also not sure whether those actually occur in real coding.

  • 18107@aussie.zone · 30 minutes ago

    LLMs are great at language problems. If you’re learning the syntax of a new programming language or you’ve forgotten the syntax for a specific feature, LLMs will give you exactly what you want.

    I frequently use AI/LLMs when switching languages to quickly get me back up to speed. They’re also adequate at giving you a starting point, or a basic understanding of a library or feature.

    The major downfall is if you ask for a solution to a problem. Chances are, it will give you a solution. Often it won’t work at all.
    The real problem is when it does work.

    I was looking for a datatype that could act as a cache (forget the oldest item when adding a new one). I got a beautifully written class with 2 fields and 3 methods.
    After I poked at the AI for a while, it admitted that half the code wasn’t actually needed. After much more prodding, it finally informed me that an existing datatype (LinkedHashMap) would do exactly what I wanted.
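    For anyone curious, the LinkedHashMap trick the commenter landed on can be sketched roughly like this (the class name and capacity are my own, for illustration — LinkedHashMap’s `removeEldestEntry` hook is what makes it work as a bounded cache):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // A bounded cache that forgets the oldest entry when a new one is added.
    public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        public BoundedCache(int capacity) {
            super(16, 0.75f, false); // insertion order: eldest = first inserted
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            // Called after every put(); returning true evicts the eldest entry.
            return size() > capacity;
        }

        public static void main(String[] args) {
            BoundedCache<String, Integer> cache = new BoundedCache<>(2);
            cache.put("a", 1);
            cache.put("b", 2);
            cache.put("c", 3); // evicts "a"
            System.out.println(cache.keySet()); // prints [b, c]
        }
    }
    ```

    Passing `true` instead of `false` to the constructor switches it to access order, which turns the same class into an LRU cache.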

    Be aware that AI/LLMs will rarely give you the best solution, and often give you really bad solutions even when an elegant one exists. Use them to learn if you want, but don’t trust them.

  • AdamBomb@lemmy.sdf.org · 3 hours ago

    My bro, your TA wasn’t better at coding because of a “higher IQ”. They were better because they put in the hours to build the instincts and techniques that characterize an experienced developer. As for LLM usage, my advice is to be aware of what they are and what they aren’t. They are randomized word-prediction engines trained on, among other things, all the publicly available code on the internet. This means they’ll be pretty good at solving problems they’ve seen in their training set. You could use one to get things set up and maybe get something partway done, depending on how novel your idea is. An LLM cannot think or solve novel problems, and it will generally fake an answer confidently rather than say it doesn’t know something, because truly, it doesn’t know anything. To actually make it to the finish line, you’ll almost certainly need to know how to finish it yourself, or learn how as you go.

  • Lovable Sidekick@lemmy.world · 8 hours ago

    The exact definition of vibe coding varies depending on who you talk to. A software dev friend of mine uses ChatGPT every day in his work and claims it saves him a ton of time. He mostly does db work and node apps right now, and I’m pretty sure the way he uses ChatGPT falls under the heading of vibe coding - using AI to generate code and then going through the code and tweaking it, saving the developer a lot of typing and grunt work.

  • TranquilTurbulence@lemmy.zip · 8 hours ago

    Vibe coding works, but there are some serious caveats.

    I’ve used LLMs for data visualization and found them helpful for simple tasks, but they will always make serious mistakes with more complex prompts. While they understand syntax and functions well, they usually produce errors that require manual debugging. Vibe coding with LLMs works best if you’re an expert in your project and could write all of the code yourself but just can’t be bothered. Prepare to spend some time fixing the bugs, but it should still be faster than writing all of it yourself.

    If you’re not proficient in using a specific function the LLM generated, vibe coding becomes less effective because debugging can be time consuming. Relying on an LLM to troubleshoot its own code tends to lead to “fixes” that only spawn more errors. The key is to catch these situations early and avoid getting lured into any of the wild goose chases it offers.

  • listless@lemmy.cringecollective.io · 18 hours ago

    if you know how to code, you can vibe code, because you can immediately spot, and be confident enough to reject, the obvious mistakes, oversights, security holes, and missed edge cases the LLM generates.

    if you don’t know how to code, you can’t vibe code, because you think the LLM is smarter than you and you trust it.

    Imagine saying “I’m a mathematician” because you have a scientific calculator. If you don’t know the difference between RAD and DEG and you just start doing calculations without understanding the unit circle, then building a bridge based on your math, you’re gonna have a bad time.

  • xavier666@lemmy.umucat.day · 23 hours ago

    Think of LLMs as the person who gets good marks in exams because they memorized the entire textbook.

    For small, quick problems you can rely on them (“Hey, what’s the syntax for using rsync between two remote servers?”), but the moment the problem is slightly complicated, they will fail because they don’t actually understand what they have learnt. If the answer is not present in the original textbook, they fail.

    Now, if you are aware of the source material or if you are decently proficient in coding, you can check their incorrect response, correct it, and make it your own. Instead of creating the solution from scratch, LLMs can give you a push in the right direction. However, DON’T consider their output the gospel truth. LLMs can augment good coders, but they can lead poor coders astray.

    This is not specific to LLMs; if you don’t know how to use Stack Overflow, you can pick the wrong solution from the list of given answers. You need to be technically proficient to even recognize which of the solutions is correct for your use case. Having a strong base will help you in the long run.

    • lepinkainen@lemmy.world · 22 minutes ago

      The main problem with LLMs is that they’re the person who memorised the entire textbook AND never admits to not knowing something.

      No matter what you ask, an LLM will give you an answer. They will never say “I don’t know”, but will rather spout 100% confident bullshit.

      The “thinking” models are a bit better, but still have the same issue.

  • hendrik@palaver.p3x.de · 12 hours ago

    Concerning the IQ: App development and regular programming aren’t that hard. They take some time, dedication, and willingness to learn how all these things work and tie together, but I think anyone with an average IQ could do it. It’s specific domains where you need a high IQ, like writing advanced signal-processing algorithms, writing very efficient algorithms, or doing detailed security audits. App development is just moderately complex, and you can get away with basic math… So I’d say it’s doable. It still needs quite some time and effort, though. At least several weeks to months. And the Kotlin book I have is some 800 pages of information, which just takes time to work through. None of it is magic, though. You do one chapter at a time.

    Vibe coding is overrated IMO. There are applications and clients out there for whom it’s fine if you just do a piss-poor job and throw something together that somehow works well enough. For a lot of things, it’s not advanced enough yet.

    • electric_nan@lemmy.ml · 32 minutes ago

      You don’t have to be a genius to learn programming, but it actually isn’t for everyone. Some people will never “get it” in any reasonable amount of time studying. Don’t ask me how I know!

  • older_code@lemmy.world · 17 hours ago

    I have successfully written and deployed a number of large complex applications with 100% AI written code, but I micromanage it. I’ve been developing software for 30 years and use AI as a sort of code paintbrush. The trick is managing the AI context window to keep it big enough to understand its task but small enough to not confuse it.

  • Emily (she/her)@lemmy.blahaj.zone · 21 hours ago

    In my experience, an LLM can write small, basic scripts or equally small and isolated bits of logic. It can also do some basic boilerplate work and write nearly functional unit tests. Anything else and it’s hopeless.

  • FreedomAdvocate@lemmy.net.au · 19 hours ago

    No, making an app is not just something you can decide you want to do and do it without learning to code.

  • ComfortableRaspberry@feddit.org · 24 hours ago

    I use it as a friendlier version of Stack Overflow. I think you should generally know and understand what you are doing, because you have to take everything it says with a grain of salt. It’s important to understand that these assistants can’t admit that they don’t know something and instead come up with randomly generated bullshit, so you can’t fully trust their answers.

    So you still need to understand the basics of software development and potential issues otherwise it’s just negligence.

    On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern-recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

    • FuglyDuck@lemmy.world · 23 hours ago

      On a general note: IQ means nothing. I mean, a lot of IQ tests use pattern-recognition tasks that can be helpful, but still, having a high IQ says nothing about your ability as a developer.

      to put this another way… expertise is superior to intelligence. Unfortunately, we have this habit of conflating the two. Intelligent people sometimes do incredibly stupid things because they lack the experience to understand why something is stupid.

      Being a skilled doctor or surgeon doesn’t make you skilled at governance. Two different skillsets.

  • Dr_Nik@lemmy.world · 19 hours ago

    People who vibe code are not using free LLMs; they are using custom AI code-generation systems they pay subscriptions for. I don’t know which ones work best, but I do have a close friend who runs a software company, and he just bought subscriptions for all his employees to some system I’ve never heard of because the code it generated drastically sped up their development time.

  • OhNoMoreLemmy@lemmy.ml · 23 hours ago

    You absolutely can’t use LLMs for anything big unless you learn to code.

    Think of an LLM as a particularly shit builder. You give them a small job and maybe 70% of the time they’ll give you something that works. But it’s often not up to spec, so even if it kinda works you’ll have to tell them to correct it or fix it yourself.

    The bigger and more complex the job, the more ways they have to fuck it up. This means that in order to use them, you have to break the problem down into small subtasks and check that the code is good enough for each one.

    Can they be useful? Sometimes, yes: it’s quicker to have an AI write code than to do it yourself, and if you want something very standard, it will probably get it right or almost right.

    But you can’t just say “write me an app” and expect it to be usable.

  • Mountaineer@aussie.zone · 1 day ago

    If you “vibe code” your way through trial and error to an app, it may work.
    But if you don’t understand what it’s doing, why it’s doing it and how it’s doing it?
    Then you can’t (easily) maintain it.
    If you can’t fix bugs or add features, you don’t have a saleable product - you have a proof of concept.

    AI tools are useful, but letting the tool do all the driving is asking for the metaphorical car to crash.