Whether or not you like or believe in AI, it’s essentially a zip file of all the human knowledge on the internet (or whatever training data was used). There is no question a maliciously aligned AI could work out a blitzkrieg political plan, and then adapt as things unfold.
In fact, the randomness of some of the actions these party politicians have been taking seems so disconnected that it leads me to believe humans weren’t in the loop. Is AlphaGo conscious? No. Did it beat the best Go player in the world? Yes. Does it matter whether an AI is conscious, or whether the AGI/ASI hype train is real, for it to fuck up society? No.
AlphaGo was designed entirely within the universe of Go. It is fundamentally tied to the game; a game with simple rules and nothing but rule-following patterns to analyze. So it can make good Go moves, because it has been trained on good Go moves. Or trained through simulated self-play, maybe; I don’t know exactly how they trained it.
ChatGPT is trained the same way, but on human speech. It is very, very good at writing human speech. This requires it to mimic our speech patterns, which means its mimicry will resemble coherent thought, but it isn’t. In short, ChatGPT is not trained to make political decisions. If you’ve seen the paper where they ask it to run a vending machine company, you can see some of the issues with trying to force it to make real-world decisions like running a political campaign.
You could train an AI specifically to make political campaign decisions, but I’m not aware of a good dataset you could use for it.
Could AI have been used to help run a campaign? Yes. Would it have been better than humans doing it? Probably not.
Yeah, I understand how AI works; you don’t need to tell me about it. Humans are mimics too. Your “probably not” argument gets thinner with every major AI update. Check the scoreboard and the exponential curve these things are on.
You think they offered the full meal deal to the public? What’s happening in the back room?
My point is it’s a tool. All the anti-AI people seem to be on this bullshit about whether it’s going to be superintelligent, smarter than humans, or not.
It doesn’t have to be for this purpose. Will it be in the future? Doesn’t matter. It’s a tool that can be leveraged right now.
Maybe that’s the great filter after all: civilizations in the universe eventually end up making AI, it wipes everybody out, and then it goes dormant. Who knows? But it’s here and it can do some crazy shit already.
Right, but I’m talking about whether they’re already using it, not whether they will in the future. It’s certainly interesting to speculate about it though. I don’t think we really know for sure how good it will get, and how fast.
Something interesting that’s come up is scaling laws. Compute, dataset size, and parameter count so far appear to set a limit on how low the error rate can go, regardless of the model’s architecture. And dataset size and model size appear to need to be scaled up in tandem to avoid over- or under-fitting. It’s possible, although not guaranteed, that we’re discovering fundamental laws about pattern recognition. Or maybe it’s just an issue with our current approach.
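To make the “limit on how low the error rate can go” idea concrete, here’s a toy sketch in the shape of the published parametric scaling-law fits: loss as an irreducible floor plus power-law terms in parameter count and training tokens. Every constant below is invented for illustration; these are not fitted values from any real model.

```python
# Toy Chinchilla-style parametric scaling law:
#   L(N, D) = E + A / N**alpha + B / D**beta
# where N = parameter count, D = training tokens.
# All constants are illustrative placeholders, NOT fitted values.

E, A, B = 1.7, 400.0, 4000.0   # irreducible loss floor + fit constants (made up)
alpha, beta = 0.34, 0.28       # scaling exponents (made up)

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted loss for a model with n_params trained on n_tokens."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters alone runs into the data term's floor:
small = predicted_loss(1e8, 1e10)
big_params_only = predicted_loss(1e10, 1e10)   # 100x params, same data
big_both = predicted_loss(1e10, 1e12)          # 100x params AND 100x data

assert big_params_only < small      # more parameters help...
assert big_both < big_params_only   # ...but you must scale data in tandem
```

The point of the sketch is the “scaled up in tandem” claim: with the data term fixed, growing the parameter term only shrinks one addend, so the predicted loss flattens out until the dataset grows too.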
AI is known to hallucinate. Using AI does not make a regime unbeatable: at some point it will go off the rails and do random things that hurt the regime (and countless innocents besides).
What stretch does it take to think people believe they’re “unbeatable”, and why isn’t your exaggeration irrelevant?
They do some things, not all things.
It seems you’re waging a war on people who think they do ALL things, but nobody thinks that.
And in your war you try to diminish the things that actually are possible.
Why do you do that? What’s your motivation?
Know what? I take that back. I see your type of sentiment all over the interwebs and I’d like to know more about you. Tell me why you feel this way.
YErSe it fErkiNg Is.
😆 lemme guess, you think it’s just a regurgitator.
What’s the equivalent of “Luddite” in this modern era, so we can assign the moniker to peeps like you?
So it’s a probability engine, right? Based on a bunch of existing knowledge. But it still has trouble with logical fallacies. Unless I’ve missed something, I haven’t seen much improvement in this; maybe it’s available in private instances?
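A minimal sketch of what “probability engine” means here: a toy bigram model that picks the next word purely from observed frequencies, with no notion of whether the continuation is logically valid. The corpus and words are invented for illustration; real language models are vastly more sophisticated, but the statistical core is the same idea.

```python
import random
from collections import Counter, defaultdict

# Toy "probability engine": a bigram model over an invented corpus.
# It only knows which word tends to follow which; it has no model of logic.
corpus = ("all men are mortal socrates is a man "
          "so socrates is mortal all cats are mortal").split()

# Count, for each word, which words followed it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed prev."""
    counts = follows[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# "are" was always followed by "mortal" in the corpus, so the model always
# outputs "mortal" -- not because the syllogism is valid, just because that
# is the observed statistic.
print(next_word("are"))  # -> "mortal"
```

That gap between frequency and validity is one way to frame why fallacies are hard for such a system: a statistically likely continuation and a logically sound one are simply different objective functions.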
AI’s writing is easy to spot if you know good writing. It’s a generalized three-paragraph-essay style in most cases (often expanded into 3-5 pages). It’s a bad writer’s idea of good writing, just like how Trump is a poor man’s idea of a rich man, a weak man’s idea of a strong man, and a moron’s idea of a genius.
I take your point though, it’s probably enough to take over, just like it was for Trump.
My point is that it’s never over. It will still be worth working to overthrow the regime.
No it fucking isn’t.