• 0 Posts
  • 132 Comments
Joined 2 years ago
Cake day: July 31st, 2023

  • Clearly the author doesn’t understand how capitalism works. If Apple can pick you up by the neck, turn you upside down, and shake whatever extra money it can from you, then it absolutely will do so.

    The problem is that one indie developer doesn’t have any power over Apple… so they can go fuck themselves. The developer is granted the opportunity to grovel at the feet of their betters (richers) and pray that they are allowed to keep enough of their own crop to survive the winter. If they don’t survive… then some other dev will probably jump at the chance to take part in the “free market” and demonstrate their worth.





  • That was an example of a situation where time zones make sense. Any time it matters where the sun is in the sky, the local time at which something occurs will differ depending on where you are in the world. When is lunch break? When do backups run? When can you see the eclipse? If we weren’t in an interconnected world, it wouldn’t matter much, but we need some convention for communicating information that depends on where the sun is, as that very often dictates human activity.

    A universal time seems like it makes sense, but I can’t think of a way around the fact that human activity will vary by time zone anyway.


  • And there are no other external factors that could possibly influence their compensation besides their objective “worth” to the hiring organization?

    Edit: To clarify, might personal bias from the employer lead to a higher compensation? If two CEOs are interviewed and one went to the same college as several members of the board, or if several members of the board know one personally, but the known CEO isn’t as accomplished… is it possible that the CEO benefitting from bias is going to be hired? Will the benefitting CEO receive a lower compensation, higher compensation, or the same compensation?

    Is it possible for a CEO to lie about their ability and get hired under false pretenses? Is it possible for a CEO to be hired for political or “public image” reasons rather than talent/productivity reasons? Are these reflected in their compensation?


  • I think the words “learning”, and even “training”, are approximations from a human perspective. ML models “learn” by adjusting parameters while processing data. At least as far as I know, the base algorithm and hyperparameters for the model are set in stone.

    The base algorithm for “living” things is basically only limited by chemistry/physics and evolution. I doubt anyone could create an algorithm that advanced any time soon. We don’t even understand the brain, or physics at the quantum level, that well. Hell, we are using ML to create new molecules precisely because we don’t understand the underlying chemistry well enough.
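    The parameter/hyperparameter split described above can be sketched in a few lines. This is a minimal illustration, not any particular model: the only thing that changes during “learning” is the parameter `w`; the learning rate, step count, and the update rule itself are fixed before training ever starts.

```python
# Minimal sketch: fit y = 2x by gradient descent on mean squared error.
# Hyperparameters and the algorithm are "set in stone" before training.
LEARNING_RATE = 0.1   # hyperparameter: never changes during training
STEPS = 100           # hyperparameter: never changes during training

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (x, y) pairs

w = 0.0  # parameter: the only thing that "learns"
for _ in range(STEPS):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= LEARNING_RATE * grad

print(round(w, 3))  # converges toward 2.0
```

    Everything a model like this “knows” ends up encoded in its adjusted parameters; the recipe that produced them stays frozen.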




  • theparadox@lemmy.worldtoAsklemmy@lemmy.mlWhat hills are you dying on?
    23 days ago

    We should stop using time zones

    Check this out. I’m a business with at least one office in every US state. You want to know when my New York office opens so you can come by. Instead of seeing “Offices are open 9 AM to 5 PM,” you now need to check every office… by state… by city? Time zones would be helpful even if we all used GMT, so that you could easily determine which time zone a business is in and work out a reasonable time for it to be open.
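    The point above can be shown concretely (the office cities are made up for illustration): even if every clock displayed UTC, “open 9 AM to 5 PM local” still maps to a different UTC hour per office, so you need each office’s time zone either way.

```python
# Sketch: "9 AM local" is a different UTC time for each office.
from datetime import datetime
from zoneinfo import ZoneInfo

offices = {
    "New York": "America/New_York",
    "Chicago": "America/Chicago",
    "Los Angeles": "America/Los_Angeles",
}

for city, tz in offices.items():
    # 9 AM local opening time on an arbitrary winter date, shown in UTC
    opens = datetime(2024, 1, 15, 9, 0, tzinfo=ZoneInfo(tz))
    print(f"{city}: opens {opens.astimezone(ZoneInfo('UTC')):%H:%M} UTC")
    # e.g. New York: opens 14:00 UTC
```

    Abolishing zones wouldn’t remove that lookup; it would just move it from the clock into a table of “local business hours per city.”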

    DST can fuck off though.



  • I think you’re either being a little dismissive of the potential complexity of the “thinking” capability of LLMs, or at least a little generous, if not mystical, in your imagination of what the purely physical electrical signals in our heads are actually doing to learn how to interpret all these little shapes we see on screens.

    I don’t think I’m doing either of those things. I respect the scale and speed of the models and I am well aware that I’m little more than a machine made of meat.

    Babies start out mimicking. The thing is, they learn.

    Humans learn so much more before they start communicating. They start learning reason, logic, etc. as they develop their vocabulary.

    The difference is that, as I understand it, these models are often “trained” on very, very large sets of data. They have built a massive network of the way words are used in communication - likely built from more texts than a human could process in several lifetimes. They come out the gate with an enormous vocabulary and an understanding of how to mimic and replicate its use. If they had been trained on just as much data, but data unrelated to communication, would you still think them capable of reasoning without the ability to “sound” human? They have the “vocabulary” and references to mimic a deep understanding, but because we lack the ability to understand the final algorithm, it seems like an enormous leap to presume actual reasoning is taking place.

    Frankly, I see no need for models like LLMs at this stage. I’m fine putting the brakes on this shit - even if we disagree on the reasons why. ML can be and has been employed to achieve far more practical goals. Use it alongside humans for a while until it is verifiably more reliable at some task - recognizing cancer in imaging or generating molecules likely to achieve a desired goal. LLMs are just a lazy shortcut to look impressive and sell investors on the technology.

    Maybe I am failing to see reality - maybe I don’t understand the latest “AI” well enough to give my two cents. That’s fine. I just think it’s being hyped because these companies desperately need VC money to stay afloat.

    It works because humans have an insatiable desire to see agency everywhere they look. Spirits, monsters, ghosts, gods, and now “AI.”


  • Yes, both systems - the human brain and an LLM - assimilate and organize human written languages in order to use them for communication. An LLM is very little else beyond this. It is then given rules (using those written languages) and then designed to create more related words when given input. I just don’t find it convincing that an ML algorithm designed explicitly to mimic human written communication in response to given input “understands” anything. No matter *how convincingly* an algorithm might reproduce a human voice - perfectly matching intonation and inflection when given text to read - if I knew it was an algorithm designed to do it as convincingly as possible, I wouldn’t say it was capable of the feeling it is able to express.

    The only thing in favor of sentience is that ML algorithms modify themselves and end up being black boxes - so complex, with no compact way to represent them, that they are impossible for humans to comprehend. Could one somehow have achieved sentience? Technically, yes, because we don’t understand how they work. We are just meat machines, after all.



  • Calling someone “blue MAGA” is the equivalent of saying “no you!”

    However, it’s time to stop pretending that some small group of “MAGA” conservatives have hijacked the party and taken things too far. The monied interests backing Trump are the same ones that have been backing Republicans for decades. The Federalist Society, the Heritage Foundation, etc. Mitch McConnell has been working to fill the federal courts with Federalist picks for a long time, picking or just outright manufacturing court cases that would set new precedents. Hell, even those thinktanks are just recent iterations of the same interests’ attempts to shape the government as they see fit. Trump is just a nepo baby turned grifter who got lucky because his grift was actually effective at attracting and controlling the loudest segment of the Republican base.

    Trump just transparently said “As long as I get filthy rich, get to be king, and you keep [metaphorically] sucking my dick, I’ll keep my followers in line and use my position to put your people in power so they can implement your ‘Project 25’ or whatever.” Republicans mostly objected to him because he lacked subtlety and was transparently greedy and petty. He ignored the game of slow, subtle change and manipulation through “decorum” that Republicans had become experts in. Unfortunately for us, that worked wonders on a subset of the population.

    The people who helped those Republican politicians keep getting elected and basically wrote their proposed laws noticed Trump was popular. When it became apparent that Trump’s followers were loyal, the money jumped at the chance to fast track their vision and backed him completely. They helped tweak and hone Trump’s message to amplify his grifter magic. That plus some changes to election laws around the country, gerrymandering, and likely other more covert, extralegal vote manipulation got him back in power.





  • My interpretation of this might be different, but I agree wholeheartedly with my interpretation.

    Being morally just doesn’t just mean “not causing harm” directly. It means striving to not cause harm both directly and indirectly. As someone who lives in the USA, our entire society is built on exploitation. The less expensive something is, the heavier the exploitation likely is. The cheapest manufacturing is done in countries where labor is exploited or even enslaved, where the manufacturing process can pollute and poison the area with little consequence (to the manufacturer), and where the powerful can force deals on the government to let them extract valuable resources and pay a fraction of their value - depriving the locals and the nation of prosperity. Even when buying US food products, the food industry mostly relies on extremely poor conditions for the animals it keeps, takes advantage of the farmers it buys from or employs, and may even use migrant children for dangerous slaughterhouse labor.

    Avoiding these kinds of practices throughout most supply chains is sometimes impossible, and usually gets more expensive the more thoroughly you manage to avoid them. Even then, someone has to constantly verify that the practices are legitimately avoided and not just greenwashing or fraud.

    It’s really quite depressing.