I can’t stop thinking about this piece from Gary Marcus I read a few days ago, How o3 and Grok 4 Accidentally Vindicated Neurosymbolic AI. It’s a fascinating read on the differences between connectionist and symbolic AI, and the merging of the two into neurosymbolic AI, from someone who understands the topic.
I recommend giving the whole thing a read, but this little nugget at the end is what caught my attention:
Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes?
Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.
AGI is still rather poorly defined, and taking cues from Ed Zitron (another favorite of mine), there will be a moving of goalposts. Scaling fast and hard to several gigglefucks of power and claiming you’ve achieved AGI is the next big maneuver. All of this largely just to treat AI as a black hole for accountability: the super smart computer said we had to take your healthcare.
That was an interesting Substack article. I’m not super deep into the AI stuff and had never heard of Gary Marcus. I agree that they went to scaling LLMs first because (1) it’s easier to scale than to tie in new ways of doing things, and (2) companies like Nvidia were in line to make a ton of money as crypto mining started to fall out of favor and real-time ray tracing wasn’t giving them as big of an advantage as they’d hoped.
For a minute there I read “AGI” as “AG1” and thought, “man, are we getting those ads in Lemmy comments now?”
Nvidia is still making a ton of dough!