
hotaru251 This administration hasn't cared about AI regulation. It only got rid of what was in place out of a personal dislike of what others did.
timsSOFTWARE I wonder if the Trump administration is fully aware that the aura of "Mythos" is mostly a myth and is trying to get back at Anthropic for non-compliance with the military deal – it seems clear to me that the "Legend of Mythos" was hoped to carry Anthropic through its IPO. But really, as long as AI is based on static models that can be validated this way, it will never be able to live up to the hype. The next big step for AI would have to involve self-evolving/updating models – more akin to how human neural networks are updated every night when we sleep.
American2021 That way the government can see which ones are the best to surveil us with "automated analysis" (e.g. predictive tracking, facial recognition, backdoor methods, etc.), rubber-stamped by legislation like FISA – which they misused and then simply sent the department heads up to lie to Congress about afterwards. It's a lovely world.
Why_Me chaos215bar2 said: So, presumably, what they really want is to punish Anthropic for standing up to the administration and drawing a clear line where their models may and may not be used for military purposes. Anthropic is a trash company.
bit_user American2021 said: That way the government can see which ones are the best to surveil us with "automated analysis"

No, that interpretation doesn't make sense. The models used for those purposes won't be the ones made available to the general public, which these are.

What worries me about this is if it turns into the same thing we saw with DeepSeek models, where they refuse to answer questions about certain sensitive subjects like the 1989 Tiananmen Square crackdown. The administration could block models that don't agree with their worldview and could use this as a way to rewrite history. Because too many people will apparently believe whatever AI tells them. So, if it says a certain person won a certain election, or attempts to rewrite the story of slavery in the US, then that's effectively the new version of history.

usertests said: Booo. Safety is for suckers.

Yeah, I mean who wouldn't want to be subject to a terrorist attack with a chemical or biological weapon? That's what makes this one tricky. I think the government has a legitimate interest in making sure that stuff isn't too accessible. So, it's hard for me to say that they're wrong to want some sort of safety hurdles that models must clear. I guess the solution that's usually employed for sensitive security matters is for Congress to have full visibility into the regulatory process.
bit_user timsSOFTWARE said: But really, as long as AI is still based on static models that could be validated in this method, it never be able to live up to the hype. The next big step for AI would have to involve self-evolving/updating models – more akin to how human neural networks are updated every night when we sleep.

They have an ability to adapt via long context windows. That's how people are able to build "relationships" with a chatbot. Everything it remembers about their interactions is just the stuff that fits in its context window. Actual training is massively resource-intensive. With current technology, it wouldn't be feasible to do adaptation via individualized training.
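The "all memory is just the context window" point can be sketched with a toy buffer. This is a hypothetical illustration, not how any real chatbot is implemented: the `ContextWindow` class and its word-count "tokenizer" are invented for this example, standing in for real tokenization and attention.

```python
from collections import deque


class ContextWindow:
    """Toy model of a chatbot's only 'memory': a fixed-size buffer.

    Once the buffer is full, the oldest turns fall out silently --
    nothing is ever trained into the model's weights.
    """

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.turns: deque = deque()  # each entry is (text, token_count)
        self.used = 0

    def add(self, text: str) -> None:
        tokens = len(text.split())  # crude stand-in for a real tokenizer
        self.turns.append((text, tokens))
        self.used += tokens
        # Evict oldest turns until we fit -- the "forgetting" described above
        while self.used > self.max_tokens and self.turns:
            _, dropped = self.turns.popleft()
            self.used -= dropped

    def visible(self) -> list:
        """Everything the model can still 'remember' right now."""
        return [text for text, _ in self.turns]


# A tiny window: the first message is evicted once the second arrives.
win = ContextWindow(max_tokens=5)
win.add("my name is Alice")
win.add("what is my name")
print(win.visible())  # the earlier turn has already been forgotten
```

The point of the sketch is that the "relationship" lasts exactly as long as the eviction loop lets it: there is no persistence beyond the buffer.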
timsSOFTWARE bit_user said: They have an ability to adapt via long context windows. That's how people are able to build "relationships" with a chatbot. Everything it remembers about their interactions is just the stuff that fits in its context window. Actual training is massively resource-intensive. With current technology, it wouldn't be feasible to do adaptation via individualized training.

Any context window that isn't unlimited fills up – it's just a matter of time. And as AI moves beyond text and into other forms of sensory input, the equivalent of a 1M token window won't last very long at all once models are getting raw visual and audio data as input.

In humans, the hippocampus is the equivalent of the context window. During each waking session, it accumulates episodic memories. Then, when you sleep, the contents of the hippocampus – or at least the gist of it – get trained into the cortex, and the hippocampus is reset so it can store new memories the next day. This "model fine-tuning" is the primary reason why we have to sleep. The problem is that we don't understand sleep very well – because we're not conscious when it happens. So it's the hardest part of the functioning of the human brain to try to replicate.
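The hippocampus/cortex loop described above can be caricatured in a few lines. This is a deliberately crude sketch under loose assumptions: the `Agent` class is invented here, and the `sleep()` step just deduplicates strings where a real system would run actual fine-tuning on the buffered experiences.

```python
class Agent:
    """Toy sketch of the two-store memory scheme described above:
    a small episodic buffer fills during the 'day', then a 'sleep'
    step distills its gist into a permanent store and resets the buffer.
    """

    def __init__(self):
        self.episodic = []   # hippocampus-like: fast to write, cleared nightly
        self.long_term = []  # cortex-like: slow-growing, never reset

    def experience(self, event: str) -> None:
        """Accumulate an episodic memory during the waking session."""
        self.episodic.append(event)

    def sleep(self) -> None:
        """Consolidate: keep only the gist (here, sorted unique events),
        merge it into long-term memory, then reset the episodic buffer."""
        gist = sorted(set(self.episodic))
        self.long_term.extend(g for g in gist if g not in self.long_term)
        self.episodic.clear()  # buffer reset, ready for the next day


a = Agent()
a.experience("saw a cat")
a.experience("saw a cat")   # repeated episodes collapse to one gist entry
a.experience("ate lunch")
a.sleep()
print(a.episodic)   # empty again after consolidation
print(a.long_term)  # the distilled gist survives
```

The design point the comment is making maps onto the two stores: the episodic buffer is cheap but bounded, while writing to long-term memory (real fine-tuning, not string dedup) is the expensive step that current AI systems skip.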