U.S. government will require technology companies to share large model training and safety data



The U.S. government will invoke the Defense Production Act to require technology companies to share information about large language model training runs and safety test data with the government, U.S. Commerce Secretary Gina Raimondo said at an event on Friday.

“We (the U.S. government) are using the Defense Production Act to conduct a survey that requires companies to notify us every time they train a new large language model, and to share the results of their safety tests with us for review.”


The U.S. government issued an executive order in October last year requiring the U.S. Department of Commerce to propose an AI management plan by the 28th of this month. The executive order initially stipulates that AI models trained with more than 10²⁶ floating-point operations (FLOPs) of computing power must be reported to the U.S. government. This threshold is slightly higher than the computing power estimated to have been used to train GPT-4.
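To put the threshold in perspective, here is a minimal sketch of how a lab might estimate whether a planned training run crosses the reporting bar. It assumes the commonly used 6·N·D rule of thumb for dense-transformer training compute (N parameters, D training tokens); the executive order itself specifies only the raw operation count, and the model sizes below are illustrative, not figures from the order. For reference, commonly cited public estimates put GPT-4’s training compute at roughly 2×10²⁵ FLOPs, below this bar.

```python
# Hypothetical sketch: check a planned training run against the
# 1e26-FLOPs reporting threshold named in the October 2023 executive order.
# Uses the common 6 * N * D approximation for dense-transformer training
# compute; real accounting would depend on architecture and hardware.

REPORTING_THRESHOLD_FLOPS = 1e26  # threshold stated in the executive order

def training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training compute via the 6ND rule of thumb."""
    return 6 * n_params * n_tokens

def must_report(n_params: float, n_tokens: float) -> bool:
    """True if the estimated run exceeds the reporting threshold."""
    return training_flops(n_params, n_tokens) > REPORTING_THRESHOLD_FLOPS

# Illustrative numbers only: a 1-trillion-parameter model trained on
# 20 trillion tokens lands around 1.2e26 FLOPs, just over the bar,
# while a 200-billion-parameter model on 10 trillion tokens does not.
print(must_report(1e12, 20e12))   # True  (~1.2e26 FLOPs)
print(must_report(2e11, 10e12))   # False (~1.2e25 FLOPs)
```

The takeaway is that only the very largest frontier-scale training runs would trip the reporting requirement as initially written.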



According to SiliconANGLE, the U.S. government’s use of the Defense Production Act to regulate AI safety is a rather unusual move. Although AI has military applications, the United States generally regulates the technology industry through general legislation rather than the Defense Production Act, a law designed to secure supplies for U.S. national defense.

Implications

The mandate is expected to have far-reaching implications for technology companies. It will require them to be more transparent about their AI development processes and the safety measures they have in place. It will also enable the government to play a more active role in checking the safety of AI systems, particularly those that could pose a serious risk to national security, economic security, or public health and safety.


Conclusion

The U.S. government’s decision to require technology companies to share large model training and safety data is a significant step. It will help ensure the safe, secure, and trustworthy development of AI systems, and it reflects a growing recognition of the need for greater transparency and oversight in the rapidly evolving field of artificial intelligence.

