Humans learn a lot through repetition, so there’s no reason to believe LLMs wouldn’t benefit from reinforcement of higher quality information, especially since seeing the same information in different contexts helps map the links between those contexts and helps dispel incorrect assumptions. But like I said, the only viable method they have for this kind of emphasis at scale is incidental replication of more popular works in the training samples. And when something is duplicated too much, the model overfits instead.
They need to fundamentally change big parts of how learning happens and how the algorithm learns in order to fix this conflict. In particular it will need a lot more “introspective” training stages to refine what it has learned, and pretty much nobody does anything even slightly similar on large models, because they don’t know how and it would be insanely expensive anyway.
Yes, but should big companies with business models designed to be exploitative be allowed to act hypocritically?
My problem isn’t with ML as such, or with learning over such large sets of works, etc, but that these companies are designing their services specifically to push the people whose works they rely on out of work.
The irony of overfitting is that having numerous copies of common works is a problem AND removing the duplicates would be a problem. The model needs an understanding of what’s representative of language, etc, but the training algorithms can’t learn that on their own, it’s not feasible to have humans teach it, and the training algorithm can’t effectively detect duplicates and “tune down” their influence to stop replicating them exactly. Trying to do that latter thing algorithmically will ALSO break things, because it would break the model’s understanding of stuff like standard legalese and boilerplate language.
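To illustrate what I mean, here’s a minimal sketch of the naive down-weighting idea (names are made up, exact-match hashing only, which is exactly why it breaks on boilerplate):

```python
from collections import Counter
import hashlib

def sample_weights(samples: list[str]) -> list[float]:
    """Naive 'tune down': weight each training sample by the inverse
    of how many exact duplicates it has in the corpus, so popular
    texts stop dominating the loss."""
    digests = [hashlib.sha256(s.encode()).hexdigest() for s in samples]
    counts = Counter(digests)
    return [1.0 / counts[d] for d in digests]

# The failure mode: a standard license header gets suppressed exactly
# like an over-represented copyrighted text, because a hash can't tell
# legitimate boilerplate from unwanted duplication.
corpus = [
    "Permission is hereby granted, free of charge, to any person...",
    "Permission is hereby granted, free of charge, to any person...",
    "It was the best of times, it was the worst of times.",
]
print(sample_weights(corpus))  # [0.5, 0.5, 1.0]
```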
The current generation of generative ML doesn’t do what it says on the box, AND the companies running them deserve to get screwed over.
And yes, I understand the risk of screwing up fair use, which is why my suggestion is not to hinder learning but to require the companies to track the copyright status of samples and inform end users of licensing status when the system detects that a sample is substantially replicated in the output. This will not hurt anybody training on public domain or fairly licensed works, nor anybody who tracks authorship when crawling for samples, and it also won’t hurt anybody who has designed their ML system to be sufficiently transformative that it never replicates copyrighted samples. It just hurts exploitative companies.
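As a rough sketch of the kind of detection I mean (made-up names, exact n-gram matching only; a real system would need fuzzy matching and a sane threshold):

```python
def replicated_licenses(output: str, corpus_index: dict[str, str],
                        n: int = 12) -> set[str]:
    """Report the license status of any tracked sample whose n-word
    chunks show up verbatim in the model output. corpus_index maps
    each n-word chunk of a tracked sample to that sample's license.
    n = 12 is an arbitrary stand-in for 'substantially replicated'."""
    words = output.split()
    hits = set()
    for i in range(len(words) - n + 1):
        chunk = " ".join(words[i:i + n])
        if chunk in corpus_index:
            hits.add(corpus_index[chunk])
    return hits

# Usage: surface licensing status to the end user instead of hiding it.
index = {"it was the best of times it was the worst of times":
         "Public domain (Dickens, 1859)"}
out = "honestly it was the best of times it was the worst of times for everyone"
for lic in replicated_licenses(out, index):
    print(f"Output substantially replicates a tracked sample: {lic}")
```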
Remember when media companies tried to sue switch manufacturers because their routers held copies of packets in RAM and argued they needed licensing for that?
https://www.eff.org/deeplinks/2006/06/yes-slashdotters-sira-really-bad
Training an AI can end up leaving copies of copyrightable segments of the originals embedded in the model; look up sample recovery attacks. If it worked as advertised it would produce transformative derivative works with fair use protection, but in reality it often doesn’t work that way.
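A toy version of the probing idea (the generate function here is a stand-in for whatever completion API you have; real extraction attacks are much more sophisticated):

```python
def verbatim_run_length(generate, sample: str, prefix_words: int = 20) -> int:
    """Crude memorization probe: prompt the model with the start of a
    known training sample and count how many of the following words it
    reproduces verbatim. A long run suggests the sample was memorized
    rather than transformed."""
    words = sample.split()
    prompt = " ".join(words[:prefix_words])
    continuation = generate(prompt).split()
    expected = words[prefix_words:]
    run = 0
    for got, want in zip(continuation, expected):
        if got != want:
            break
        run += 1
    return run

# A fake 'model' that has fully memorized its training text scores
# the maximum possible run:
text = " ".join(f"w{i}" for i in range(40))
memorized_model = lambda prompt: text[len(prompt):].strip()
print(verbatim_run_length(memorized_model, text))  # 20
```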
Apple management has explicitly stated they do not want to support better compatibility between Android and iPhone; their response when asked what parents who buy cheap Androids for their kids should do was to buy them iPhones. Many of the problems are very easy to fix on Apple’s side, and keeping them problematic is intentional.
Wine/Proton on Linux occasionally beats Windows on the same hardware in gaming, because there are inefficiencies in the original environment that don’t get replicated unnecessarily.
It’s not quite the same with CPU instruction translation, but ARM’s main efficiency gain comes from being designed to idle everything it can, while that hasn’t been a design goal of x86 for ages. A substantial factor in efficiency is figuring out what you don’t have to do, and ARM is better suited for that.
It’s not that uncommon in specialty hardware, with CPU instruction extensions for a different architecture made available specifically for translation. Some stuff can be translated quite efficiently on a normal CPU of a different architecture; some stuff needs hardware acceleration. I think Microsoft has done this on some Surface devices.
Some media companies called it piracy even when you’re doing it to get paid access to content they aren’t offering in your country 🤷
The irony is that Apple has UWB direction finding in phones but didn’t put it in the AR headset where it would be infinitely more useful. They could even use UWB in controllers for motion tracking relative to the headset, and yet they just didn’t.
But they’re not at all designed for use as shared devices, without even proper local multiuser support (any devs who want that have to craft it all themselves from scratch), so collaborative work or simultaneous display and interaction doesn’t work well with them. In fact it would be easier to just let a client see 3D stuff on an iPad with an AR app.
Apple Vision Pro is the best virtual sandbox headset.
Almost nobody needs the best virtual sandbox headset.
I like that the Viture has dimming, lens adjustment, and an optional Android-based neckband device, and Miracast is neat, etc.
If all you want is a compact screen it’s pretty good, and I’m considering getting one, but I want to see some more stuff like integration with your other devices. I see they have remote desktop stuff for gaming, etc, but I’m thinking of a bit deeper integration, like using a phone app to relay notifications like a HUD, and I want a bit more spatial awareness (it might need to rely on stuff like UWB radio beacons for that). The navigation also seems to rely on either your phone, buttons on the neckband, or a paired 3rd party controller (there’s no official wireless controller); you could make it a bit easier with something that’s maybe keychain sized?
Imagine if the headset could piggyback on your phone’s AR support + UWB direction finding to let your phone calculate where it is relative to the world, then relay that to the headset, which calculates its offset from the phone to tell where IT is in the world. It would immediately make Google Maps Live View infinitely more immersive (and overlays don’t need to be perfect, they just need to not drift by too many degrees). It would probably be annoying to have to keep scanning with your phone to keep the map accurate though 🤷
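The math itself is trivial, just composing poses; here’s a sketch with made-up names, assuming the phone’s AR stack gives a world pose and UWB gives a phone-to-headset transform:

```python
import numpy as np

def headset_world_pose(phone_world: np.ndarray,
                       phone_to_headset: np.ndarray) -> np.ndarray:
    """Compose the phone's world pose (4x4, from its AR tracking) with
    the phone->headset transform (4x4, estimated from UWB ranging and
    direction finding) to get the headset's pose in world coordinates.
    The composition is the easy part; estimating phone_to_headset
    without drift is where the real work is."""
    return phone_world @ phone_to_headset

# Toy example: phone at the world origin, headset 0.3 m up and
# 0.2 m back from the phone with no relative rotation.
phone_world = np.eye(4)
phone_to_headset = np.eye(4)
phone_to_headset[:3, 3] = [0.0, 0.3, 0.2]
print(headset_world_pose(phone_world, phone_to_headset)[:3, 3])  # [0.  0.3 0.2]
```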
Authoritarians, all of them
The point of such an early dev kit isn’t to commit in advance but to get people to try out what works, then select what will be in the final product (maybe releasing updated dev kits along the way). There’d be a general plan, but this isn’t like a game console dev kit where almost all specs and major features are set in advance, so you’d expect devs to implement multiple variants of each software feature and see what they require of the hardware, how people use them, how popular they are, etc.
See: Twitter bots with paid verification
Viture looks neat; it’s still missing some things I want to see, but they seem to understand what this kind of device needs
Extremely high pixel density
Yeah, I’m pretty convinced we need to make the headsets lighter, put more compute in an accessory, and have the headset just do low complexity stuff like low latency, last-millisecond angle adjustments to frames as you move.
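Roughly the kind of last-millisecond correction I mean; this is a rotation-only “timewarp” style sketch (made-up names, 3x3 rotation matrices, no translation or lens distortion):

```python
import numpy as np

def yaw(rad: float) -> np.ndarray:
    """Rotation about the vertical (y) axis."""
    c, s = np.cos(rad), np.sin(rad)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def reprojection_correction(render_rot: np.ndarray,
                            latest_rot: np.ndarray) -> np.ndarray:
    """The heavy accessory rendered the frame for head rotation
    render_rot, but by scanout the head is at latest_rot. The headset
    applies this small corrective rotation to the finished frame
    (cheap texture re-sampling) instead of re-rendering."""
    return latest_rot @ render_rot.T

# Head turned ~1 degree between render and scanout:
corr = reprojection_correction(yaw(np.radians(10.0)), yaw(np.radians(11.0)))
print(np.degrees(np.arctan2(corr[0, 2], corr[0, 0])))  # ~1.0
```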
Exactly. Not promoting it as a dev kit was a major failure. This is the kind of product where you CAN’T do without external feedback: not everybody will use one in a clean office (or even one that stands still), not everybody has the same spatial awareness or motor skills, not supporting controllers locks out numerous people with limited hand movements, etc. As a dev kit it could’ve worked much better at getting the kind of feedback they need from devs working on useful AR stuff
What’s the point when herd immunity is necessary?