
hotaru251: This administration hasn't cared about AI regulation. It got rid of what was in place out of a personal dislike of what its predecessors did.
timsSOFTWARE: I wonder if the Trump administration is fully aware that the aura of "Mythos" is mostly a myth, but is trying to get back at Anthropic for non-compliance with the military deal. It seems clear to me that the "Legend of Mythos" was hoped to carry Anthropic through their IPO. But really, as long as AI is based on static models that can be validated this way, it will never be able to live up to the hype. The next big step for AI would have to involve self-evolving/updating models, more akin to how human neural networks are updated every night when we sleep.
American2021: That way the government can see which ones are best for surveilling us with "automated analysis" (e.g. predictive tracking, facial recognition, backdoor methods), rubber-stamped by frequently abused legislation like FISA. They misused it, then simply sent the department heads up to lie to Congress about it afterwards. It's a lovely world.
Why_Me: chaos215bar2 said: So, presumably, what they really want is to punish Anthropic for standing up to the administration and drawing a clear line where their models may and may not be used for military purposes.
Anthropic is a trash company.
bit_user: American2021 said: That way the government can see which ones are the best to surveil us with "automated analysis"
No, that interpretation doesn't make sense. The models used for those purposes won't be the ones made available to the general public, which these are.
What worries me is if this turns into the same thing we saw with the DeepSeek models, which refuse to answer questions about certain sensitive subjects like the 1989 Tiananmen Square crackdown. The administration could block models that don't agree with its worldview and use this as a way to rewrite history, because too many people will apparently believe whatever AI tells them. So, if a model says a certain person won a certain election, or attempts to rewrite the story of slavery in the US, then that effectively becomes the new version of history.
usertests said: Booo. Safety is for suckers.
Yeah, I mean, who wouldn't want to be subject to a terrorist attack with a chemical or biological weapon? That's what makes this one tricky. I think the government has a legitimate interest in making sure that stuff isn't too accessible, so it's hard for me to say they're wrong to want some sort of safety hurdles that models must clear. I guess the solution usually employed for sensitive security matters is for Congress to have full visibility into the regulatory process.
bit_user: timsSOFTWARE said: But really, as long as AI is based on static models that can be validated this way, it will never be able to live up to the hype. The next big step for AI would have to involve self-evolving/updating models, more akin to how human neural networks are updated every night when we sleep.
They do have some ability to adapt, via long context windows. That's how people are able to build "relationships" with a chatbot: everything it remembers about their interactions is just whatever still fits in its context window. Actual training is massively resource-intensive; with current technology, it wouldn't be feasible to do adaptation via individualized training.
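The context-window behavior described above can be sketched as a simple rolling token buffer. This is a toy illustration only (the class and its names are made up, not any real chatbot API): once the budget is exhausted, the oldest turns silently fall off the front, which is exactly why "memories" outside the window are gone.

```python
from collections import deque

class RollingContext:
    """Toy model of a chatbot's context window: a FIFO token budget.

    Everything the model 'remembers' is whatever still fits; older
    turns are silently evicted once the budget is exceeded.
    """

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.turns = deque()   # (speaker, token_count) pairs, oldest first
        self.used = 0

    def add_turn(self, speaker: str, token_count: int) -> None:
        self.turns.append((speaker, token_count))
        self.used += token_count
        # Evict the oldest turns until we fit the budget again.
        while self.used > self.max_tokens:
            _, dropped = self.turns.popleft()
            self.used -= dropped

ctx = RollingContext(max_tokens=100)
for _ in range(30):
    ctx.add_turn("user", 10)   # 30 turns of 10 tokens each
print(len(ctx.turns), ctx.used)   # only the 10 most recent turns survive
```

After 30 turns of 10 tokens against a 100-token budget, only the last 10 turns remain, which mirrors how long-running conversations lose their earliest exchanges first.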
timsSOFTWARE: bit_user said: They do have some ability to adapt, via long context windows. That's how people are able to build "relationships" with a chatbot: everything it remembers about their interactions is just whatever still fits in its context window. Actual training is massively resource-intensive; with current technology, it wouldn't be feasible to do adaptation via individualized training.
Any context window that isn't unlimited fills up; it's just a matter of time. And as AI moves beyond text into other forms of sensory input, the equivalent of a 1M-token window won't last very long at all once models are taking in raw visual and audio data.
In humans, the hippocampus is the equivalent of the context window. During each waking session it accumulates episodic memories. Then, when you sleep, the contents of the hippocampus, or at least the gist of them, get trained into the cortex, and the hippocampus is reset so it can store new memories the next day. This "model fine-tuning" is the primary reason we have to sleep. The problem is, we don't understand sleep very well, because we're not conscious when it happens, so it's the hardest part of the brain's functioning to replicate.
Babies sleep the most, up to 18 hours a day, because they have the most training to do, and at that stage it doesn't make sense to synchronize with the sun on a daily basis yet. Older humans need less sleep, as little as 6 hours, because the system is designed to reduce training and become more rigid with age. The reason for the latter, not as a defect but as an intentional feature, is that "catastrophic forgetting" is not just an AI-model thing: learning too much, too quickly, can cause forgetting in humans as well. It's why you probably don't remember being under 3 years old; your network was being updated drastically, and loss of memory was a side effect.
If you've been successful enough to survive into middle or older age, the system takes that as a sign that you learned some valuable things that should be passed down rather than forgotten.
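The hippocampus/cortex cycle described above can be caricatured in a few lines: a fast episodic buffer accumulates experiences during "waking" time, and a periodic "sleep" step distills their gist into a slow long-term store, then clears the buffer. This is purely an illustrative sketch of the analogy, not a real training scheme; every name here is invented.

```python
class ConsolidatingMemory:
    """Toy sketch of the sleep analogy: a fast episodic buffer
    ('hippocampus') that a periodic sleep() step distills into a slow
    long-term store ('cortex'), then resets for the next day."""

    def __init__(self, learning_rate: float = 0.5):
        self.episodic = []        # today's raw experiences (fast, temporary)
        self.long_term = {}       # consolidated "weights": topic -> strength
        self.learning_rate = learning_rate

    def experience(self, topic: str) -> None:
        self.episodic.append(topic)   # waking hours: just buffer it

    def sleep(self) -> None:
        # Consolidate the gist: repeated topics strengthen more,
        # with diminishing returns as strength approaches 1.0.
        for topic in self.episodic:
            old = self.long_term.get(topic, 0.0)
            self.long_term[topic] = old + self.learning_rate * (1.0 - old)
        self.episodic.clear()         # reset the buffer for tomorrow

mem = ConsolidatingMemory()
for t in ["ssd", "ssd", "gpu"]:
    mem.experience(t)
mem.sleep()
print(mem.episodic, mem.long_term)   # buffer cleared, gist retained
```

Lowering `learning_rate` over successive days would mimic the comment's point about aging: consolidation slows, the long-term store grows more rigid, and recent experience changes it less.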
Reference reading
- https://www.tomshardware.com/tech-industry/artificial-intelligence/trump-administration-considers-mandatory-pre-release-vetting-of-ai-models#main