The students should get together and jack the machine away into their hacking club and do some reverse engineering, so that we get more information on how the data collection actually worked instead of just trusting the company’s statements. If a hacking group like the German Chaos Computer Club got behind this, they could release their findings while keeping the perpetrators anonymous. However, I’m pretty sure the machine is just a frontend to a server, which got shut down as soon as the students complained, with no GDPR-like checks available in that jurisdiction.
Not only was a person behind the decision, a person was also behind the dissemination of the requirements, the implementation of the change, the design of the hardware, and every step in between.
When you start tinkering with a machine learning model of any kind, you’re probably going to find some interesting edge cases the model can’t handle correctly. Maybe there’s a specific face that has an unexpected effect on the device. What if you could find a way to cheese a discount out of it or something?
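If I were poking at it, the recipe is simple enough to sketch. This is just the generic black-box approach, assuming some hypothetical `classify(image)` call that returns a label and a confidence; none of these names come from the actual machine:

```python
# Generic black-box probing sketch. classify() is a hypothetical
# stand-in for whatever model the machine runs; the threshold is
# arbitrary.
def find_edge_cases(classify, images, threshold=0.5):
    """Collect inputs the model is visibly unsure about."""
    hits = []
    for img in images:
        label, confidence = classify(img)
        if confidence < threshold:  # model can't make up its mind
            hits.append((img, label, confidence))
    return hits

# Feed it a batch of generated faces and see which ones confuse it:
# find_edge_cases(classify, generated_faces)
```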
Imagine a racist vending machine. The face recognition system thinks this customer is black with 81% confidence. Let’s increase the price of grape soda! Oh look, a 32-year-old white woman (79% confidence). Better raise the price of Diet Coke!
In Japan, cigarette vending machines had some kind of facial recognition that would estimate the customer’s age in an attempt to prevent kids from buying cigarettes. But it only worked on Japanese people.
Stupid racist vending machine wouldn’t sell me smokes!
Shame. I’d like to send you a carton.
It’s cool, I quit years ago.
Also, I was in a diverse group of people, so we were able to do some science. Fortunately we had a Japanese person in the group, which allowed me to purchase the smokes. But yeah, it failed on everyone who wasn’t Japanese.
When you use a generated face with a mixture of white and black features, that’s when it gets interesting. Maybe you can even cause an integer overflow.
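Pure speculation, obviously, but the failure mode itself is real. If the thing kept some demographic tally in a fixed-width integer (a completely hypothetical design), enough detections of one kind would wrap it around:

```python
# Toy demonstration of signed 8-bit wraparound with ctypes; the
# "tally" counter is entirely hypothetical, not the machine's code.
import ctypes

tally = ctypes.c_int8(0)
for _ in range(130):  # 130 detections of the same group
    tally = ctypes.c_int8(tally.value + 1)

print(tally.value)  # -126: wrapped past the int8 maximum of 127
```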
Imagine the AI BSODing when it sees you.
Yo mama so ugly…
Vending Machine Phreaking
I firmly believe that every system has exploits. The more complex the system, the harder you can cheese it.
Just need to cycle through 3 million QR codes in 1.7 seconds.
I don’t think they’re doing dynamic pricing on an individual basis; that would be too obvious. But checking the demographics of each location or individuals’ shopping habits, and potentially adjusting the prices or offerings? Definitely.
So, if you show it 100 faces from group A and 4 faces from group B, that could start gradually shifting the prices in a specific direction. If you keep going, you might be able to make it do something funny like charging 0.1 € for a Pepsi and 1000 € for a Coke or something like that. If the devs saw that coming, they might have set some limits so that the prices can’t spiral totally out of control.
I am sure the profit margin is taken into account, so you won’t get an ultracheap Pepsi unless it expires soon. Similarly, I expect it to consider economic viability, so it won’t keep raising prices unless people are willing to pay them. Of course, you never know what the model actually does or what goals it follows (maximizing profit is a good guess, though), or how bad the coding is. The program might be very versatile and robust, or it may break when you show it a QR code - how can I know? Probably something in between.
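If I had to guess at those limits, it would be something shaped like this: a signal nudges the price, a margin floor and a hard cap clamp it. Every name and number here is invented for illustration, not taken from the actual machine:

```python
# Hypothetical dynamic-pricing guard rails; all constants are made up.
COST = 0.80        # what a can costs the operator
MIN_MARGIN = 0.10  # never sell below cost + 10 cents
MAX_PRICE = 3.50   # hard cap so the price can't spiral

def update_price(price, demand_signal, step=0.05):
    """Nudge the price with observed demand, then clamp it."""
    if demand_signal > 0:    # people keep paying: creep upward
        price += step
    elif demand_signal < 0:  # sales dropped: back off
        price -= step
    return max(COST + MIN_MARGIN, min(price, MAX_PRICE))
```

With clamping like that, showing it 100 faces from group A can at most walk the price to the cap, never to 1000 € for a Coke.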
After that, set the thing on fire and throw it into the manufacturer’s office.