Senators probe OpenAI on safety and employment practices

Five prominent Senate Democrats have sent a letter to OpenAI CEO Sam Altman, seeking clarity on the company’s safety and employment practices.

The letter – signed by Senators Brian Schatz, Ben Ray Luján, Peter Welch, Mark R. Warner, and Angus S. King, Jr. – comes in response to recent reports questioning OpenAI’s commitment to its stated goals of safe and responsible AI development.

The senators emphasise the importance of AI safety for national economic competitiveness and geopolitical standing. They note OpenAI’s partnerships with the US government and national security agencies to develop cybersecurity tools, underscoring the critical nature of secure AI systems.

“National and economic security are among the most important responsibilities of the United States Government, and unsecure or otherwise vulnerable AI systems are not acceptable,” the letter states.

The lawmakers have requested detailed information on several key areas by 13 August 2024. These include:

- OpenAI’s commitment to dedicating 20% of its computing resources to AI safety research.
- The company’s stance on non-disparagement agreements for current and former employees.
- Procedures for employees to raise cybersecurity and safety concerns.
- Security protocols to prevent theft of AI models, research, or intellectual property.
- OpenAI’s adherence to its own Supplier Code of Conduct regarding non-retaliation policies and whistleblower channels.
- Plans for independent expert testing and assessment of OpenAI’s systems pre-release.
- Commitment to making future foundation models available to US Government agencies for pre-deployment testing.
- Post-release monitoring practices and learnings from deployed models.
- Plans for public release of retrospective impact assessments on deployed models.
- Documentation on meeting voluntary safety and security commitments to the Biden-Harris administration.

The senators’ inquiry touches on recent controversies surrounding OpenAI, including reports of internal disputes over safety practices and alleged cybersecurity breaches. They specifically ask whether OpenAI will “commit to removing any other provisions from employment agreements that could be used to penalise employees who publicly raise concerns about company practices.”

This congressional scrutiny comes at a time of increasing debate over AI regulation and safety measures. The letter references the voluntary commitments made by leading AI companies to the White House last year, framing them as “an important step towards building this trust” in AI safety and security.

Kamala Harris, who may become the next US president following the election later this year, addressed AI risks at the AI Safety Summit in the UK last year: “Let us be clear, there are additional threats that also demand our action. Threats that are currently causing harm, and which to many people also feel existential… when people around the world cannot discern fact from fiction because of a flood of AI-enabled myths and disinformation.”

Chelsea Alves, a consultant with UNMiss, commented: “Kamala Harris’ approach to AI and big tech regulation is both timely and critical as she steps into the presidential race. Her policies could set new standards for how we navigate the complexities of modern technology and individual privacy.”

The response from OpenAI to these inquiries could have significant implications for the future of AI governance and the relationship between tech companies and government oversight bodies.

(Photo by Darren Halstead)

See also: OpenResearch reveals potential impacts of universal basic income

