Exactly. I'm not saying it's not impressive or even useful, but one should understand the limitations. For example, you can't reason with an LLM in the sense that you could convince it of your reasoning. It will only respond the way most people in the training dataset would have responded (obviously simplified).
You're repeating your point, but there was already agreement that this is how AI works now.
I fear you may have glossed over the second part, where he states that once we simulate other parts of the brain, things start to look different very quickly.
There do seem to be two kinds of opinions on AI:
those who look at AI in the present, compared to a present-day human. This seems to be the majority of people overall.
those who look at AI like a statistic: where it was in the past, what improved it, and a projection, within reason, of how it will look soon enough. This is the majority of people who work in the AI industry.
For me, the present day is simply practice for what is yet to come. Because if we don't nuke ourselves back to the stone age, something, currently undefinable, is coming.
What I fear is AI being used with malicious intent. Corporations using it to collect data, for example. Or governments just jailing everyone an AI tells them to.
I'd expect governments to use it to craft public relations strategies. An extension of what they do now by hiring the smartest sociopaths on the planet. Not sure if this would work, but I think so. Basically, you train an AI on previous messaging and the results from polls or votes, and then you train it to suggest strategies that maximize support for X. A kind of dumbification of the masses. Of course it's only going to get shittier from there on out.
I didn't; I just focused on how it is today.
I think it can become very big and threatening, but also helpful. That's just pure speculation at this point, though :)