If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?
You should look closer at who is actually making the claim that “AI” is an extinction-level threat to humanity. It isn’t the researchers who work on ethics and safety (not to be confused with “AI safety” as part of “alignment”). It is the people building the models and the investors behind them. Why would they build and invest in something they believe will kill us?
AI doomers try to 1. make “AI”/LLMs appear far more powerful than they actually are, and 2. distract from the actual threats and issues with LLMs/“AI”, which are societal and ethical: copyright, and the fact that these are not trustworthy systems at all. Because admitting to those makes them a much harder sell.
Yeah, no, that is not what the article says. AlphaChip is better at component/module placement in terms of connection length between them.
Not to say that that isn’t cool. But it is not recursive. Recursive would imply that the chip with shorter connections significantly improves the model’s performance, which they do not claim at all, and which would quickly hit diminishing returns anyway.
There are a thousand things that go into making chips, and many of them will benefit from automatic optimization by algorithms like this. But that doesn’t suddenly give you a new manufacturing node or anything comparable out of thin air. It’s a marginal improvement on existing designs.
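For anyone curious what “shorter connection length” means concretely: placement tools typically score a layout with a proxy like half-perimeter wirelength (HPWL), and “better placement” just means a lower score. A rough toy sketch in Python, not AlphaChip’s actual objective or code; the block names, nets, and coordinates are made up for illustration:

```python
# Toy half-perimeter wirelength (HPWL) metric, a common proxy for
# "connection length" in chip placement. Illustration only, not AlphaChip.

def hpwl(nets, positions):
    """Sum, over all nets, the half-perimeter of the bounding box
    around the components that net connects."""
    total = 0.0
    for net in nets:
        xs = [positions[c][0] for c in net]
        ys = [positions[c][1] for c in net]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Hypothetical placement: three blocks and the nets connecting them.
positions = {"alu": (0.0, 0.0), "cache": (3.0, 1.0), "io": (1.0, 4.0)}
nets = [("alu", "cache"), ("alu", "io"), ("cache", "io")]

print(hpwl(nets, positions))  # lower is better; placement tools try to minimize this
```

A placement algorithm moves the blocks around to shrink that number (subject to overlap and timing constraints). That’s the kind of incremental optimization we’re talking about, not a machine redesigning itself.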