• 2 Posts
  • 54 Comments
Joined 9 months ago
Cake day: February 4th, 2024



  • You can install Plex on your mobile device and toggle the “share media from this device” setting. Otherwise, a Steam Deck has everything a Raspberry Pi has, plus a GPU and a touch screen. Since the device has two radios (2.4 GHz and 5 GHz), you should be able to set it up as a bridge device, but I’ve not tried this personally.






  • That’s not remotely a reliable source. I’m no fan of European colonies in the Middle East either, but any source that uses the word “Jewry” in the first sentence raises some eyebrows. This article also dances around the persistent anti-Semitic trope that a global Jewish conspiracy exists and does little to mention the (mostly) leftist Jewish Diaspora or the enthusiastic Yiddish speakers in the Red Army. It seems to collapse all Jewish people into the category of Zionist, even while acknowledging internally that there are many languages, cultures, and politics involved.

    I am also fully aware of how horrible the Nakba was, so I’m not apologizing for the atrocities committed by the state of Israel then or now.

    This source also provides 0 references and the author is someone who calls themselves “Comrade Katsfoter”.

    Personally, I like my history sources to be vetted by peers and published in journals or books written by authors who are vetted by their peers via similar processes. You can and should do better than this.






  • Yeah. I’m thinking more along the lines of research and open models than anything to do with OpenAI. Fair use, above all else, generally requires that the derivative work not threaten the economic viability of the original and that’s categorically untrue of ChatGPT/Copilot which are marketed and sold as products meant to replace human workers.

    Clean-room development is definitely an analogy I can get behind, but it raises further questions since LLMs are multi-stage. Technically, only the tokenization stage will “see” the source code, which is a bit like a “clean room” from the perspective of subsequent stages. When does something stop being just a list of technical requirements and veer into infringement? I’m not sure that line is so clear.
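    To make the “subsequent stages never see the source” point concrete, here’s a toy sketch. This is a hypothetical whitespace tokenizer, not any real LLM’s BPE vocabulary; the point is only that everything downstream of tokenization consumes integer IDs, not text.

    ```python
    # Toy illustration: after tokenization, later pipeline stages only
    # ever see integer IDs, never the source text itself.
    # Hypothetical whitespace tokenizer, not a real LLM vocabulary.

    def build_vocab(corpus):
        """Assign an integer ID to each unique whitespace-separated token."""
        vocab = {}
        for text in corpus:
            for tok in text.split():
                vocab.setdefault(tok, len(vocab))
        return vocab

    def encode(text, vocab):
        """Map text to the ID sequence that downstream stages consume."""
        return [vocab[tok] for tok in text.split()]

    corpus = ["def add ( a , b ) :", "return a + b"]
    vocab = build_vocab(corpus)
    ids = encode("def add ( a , b ) :", vocab)
    print(ids)  # a list of small integers; the original characters are gone
    ```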

    I don’t think the generative copyright question is so straightforward, since the model requires a human agent to generate the input even if the output is deterministic. I know, for example, Microsoft’s Image Generator says that the images fall under Creative Commons, which is distinct from public domain given that some rights are withheld. Maybe that won’t hold up in court forever, but Microsoft’s lawyers seem to think it’s a bit more nuanced than “this output can’t be copyrighted.” If it’s not subject to copyright, then what product are they selling? Maybe the courts will agree that LLMs and monkeys are the same, but I’m skeptical, considering how much money these tech companies have poured into this and how much the United States seems to bend over backwards to accommodate tech monopolies and their human rights violations.

    Again, I think commercial entities using their market position to eliminate the need for artists and writers is clearly against the spirit of copyright and intellectual property, but I also think there are genuinely interesting questions when it comes to models that are themselves open source or non-commercial.


  • For example, if I ask it to produce Python code for addition, which GPL’d library is it drawing from?

    I think it’s clear that the fair use doctrine no longer applies when OpenAI turns it into a commercial code assistant, but then it gets a bit trickier when used for research or education purposes, right?

    I’m not trying to be obtuse-- I’m an AI researcher who is highly skeptical of AI. I just think the imperfect compression that neural networks use to “store” data is a bit less clear than copy/pasting code wholesale.

    Would you agree that somebody reading source code and then reimplementing it (assuming no reverse engineering or proprietary source code) would not violate the GPL?

    If so, then the argument that these models infringe on rights holders seems to hinge on the verbatim argument: that their exact work was used without attribution/license requirements. This surely happens sometimes, but it is not, in general, a thing these models are capable of, since they’re using lossy compression to “learn” the model parameters. As an additional point, it would be straightforward to then comply with DMCA requests using any number of published “forced forgetting” methods.
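    The lossy-compression point can be illustrated with a deliberately tiny example: summarizing five data points with just two parameters. The numbers are made up for the demo; the loose analogy is that a fixed parameter budget captures trends in the training data but cannot restore any individual sample exactly.

    ```python
    # Toy illustration of lossy compression: summarizing many data points
    # with two parameters (slope, intercept). The fit captures the trend
    # but cannot reproduce any individual point exactly -- very loosely
    # analogous to weights summarizing training data rather than storing it.

    xs = [0.0, 1.0, 2.0, 3.0, 4.0]
    ys = [0.1, 1.3, 1.9, 3.2, 3.9]  # noisy y values, roughly y = x

    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x

    reconstructed = [slope * x + intercept for x in xs]
    errors = [abs(r - y) for r, y in zip(reconstructed, ys)]
    print(max(errors) > 0)  # True: two parameters cannot restore the data
    ```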

    Then, that raises a further question.

    If I as an academic researcher wanted to make a model that writes code using GPL’d training data, would I be in compliance if I listed the training data and licensed my resulting model under the GPL?

    I work for a university and hate big tech as much as anyone on Lemmy. I am just not entirely sure GPL makes sense here. GPL 3 was written because GPL 2 had loopholes that Microsoft exploited and I suspect their lawyers are pretty informed on the topic.


  • I hate big tech too, but I’m not really sure how the GPL or MIT licenses (for example) would apply. LLMs don’t really memorize stuff like a database would and there are certain (academic/research) domains that would almost certainly fall under fair use. LLMs aren’t really capable of storing the entire training set, though I admit there are almost certainly edge cases where stuff is taken verbatim.

    I’m not advocating for OpenAI by any means, but I’m genuinely skeptical that most copyleft licenses have any stake in this. There’s no static linking or source code distribution happening. Many basic algorithms don’t fall under copyright, and, in practice, Stack Overflow code is copy/pasted all the time without being released under any special license.

    If your code is on GitHub, it really doesn’t matter what license you provide in the repository – you’ve already agreed to allow any user to “fork” it for any reason whatsoever.




  • And my point was that that work has likely already been done, because the paper I linked is 20 years old and discusses the deep connection between “similarity” and “compresses well”. I bet if you read the paper, you’d see exactly why I chose to share it-- particularly the equations that define NID and NCD.

    The difference between “seeing how well similar images compress” and figuring out “which of these images are similar” is the quantized classification step, which is trivial compared to doing the distance comparison across all samples against all other samples. My point was that this distance measure (using compressors to measure similarity) has been published for at least 20 years, and that you should probably google “normalized compression distance” before spending any time implementing stuff, since it’s very much been done before.
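    For the curious, NCD is simple to sketch: NCD(x, y) = (C(xy) − min(C(x), C(y))) / max(C(x), C(y)), where C(·) is a compressed length. Here zlib stands in for the (uncomputable) ideal compressor; any real compressor gives only an approximation.

    ```python
    # Sketch of normalized compression distance (NCD) using zlib as the
    # stand-in compressor. Similar inputs compress well together, so
    # their NCD is small; dissimilar inputs gain little from concatenation.
    import zlib

    def clen(data: bytes) -> int:
        """Compressed length, the practical proxy for Kolmogorov complexity."""
        return len(zlib.compress(data, 9))

    def ncd(x: bytes, y: bytes) -> float:
        cx, cy = clen(x), clen(y)
        return (clen(x + y) - min(cx, cy)) / max(cx, cy)

    a = b"the quick brown fox jumps over the lazy dog " * 20
    b = b"the quick brown fox jumps over the lazy cat " * 20
    c = bytes(range(256)) * 4  # dissimilar, hard-to-compress data

    print(ncd(a, b))  # small: similar strings compress well together
    print(ncd(a, c))  # larger: concatenation barely helps
    ```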


  • I think there’s probably a difference between an intro to computer science course and the PhD level papers that discuss the ability of machines to learn and decide, but my experience in this is limited to my PhD in the topic.

    And, no, textbooks are often not peer reviewed in the same way and generally written by graduate students. They have mistakes in them all the time. Or grand statements taken out of context. Or are simplified explanations because introducing the nuances of PAC-learnability to somebody who doesn’t understand a “for” loop is probably not very productive.

    I came here to share some interesting material from my PhD research topic and you’re calling me an asshole. It sounds like you did not have a wonderful day and I’m sorry for that.

    Did you try learning about how computers learn things and make decisions? It’s pretty neat.


  • You seem very upset, so I hate to inform you that neither of those is a peer-reviewed source and that they are simplifying things.

    “Learning” is definitely something a machine can do, and it can then use that experience to coordinate actions based on data that is inaccessible to the programmer. If that’s not “making a decision”, then we aren’t speaking the same language. Call it what you want and argue with the entire published field of AI, I guess. That’s certainly an option, but generally I find it useful for words to mean things without getting too pedantic.
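    A minimal sketch of the “learn, then decide” point: a nearest-centroid classifier (a hypothetical toy, chosen for brevity rather than realism). The programmer writes the procedure, but the decision boundary comes entirely from the data — swap the training set and the same code makes different decisions.

    ```python
    # Toy "learning" then "deciding": a nearest-centroid classifier.
    # The decision rule is derived from data, not hard-coded by the programmer.

    def fit(samples):
        """Learn one centroid (mean point) per label from labeled 2-D data."""
        sums, counts = {}, {}
        for (x, y), label in samples:
            sx, sy = sums.get(label, (0.0, 0.0))
            sums[label] = (sx + x, sy + y)
            counts[label] = counts.get(label, 0) + 1
        return {lbl: (sx / counts[lbl], sy / counts[lbl])
                for lbl, (sx, sy) in sums.items()}

    def decide(point, centroids):
        """Pick the label whose learned centroid is closest to the point."""
        px, py = point
        return min(centroids,
                   key=lambda lbl: (px - centroids[lbl][0]) ** 2
                                 + (py - centroids[lbl][1]) ** 2)

    train = [((0.0, 0.1), "low"), ((0.2, 0.0), "low"),
             ((1.0, 0.9), "high"), ((0.8, 1.1), "high")]
    centroids = fit(train)
    print(decide((0.1, 0.2), centroids))  # "low" -- a data-driven decision
    ```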