By Logan Brooks

OpenAI researcher resigns, citing fear over AI’s rapid development

February 1, 2025

03:44

A resignation fueled by AI concerns

Steven Adler, a longtime researcher at OpenAI, has resigned from his position, citing growing fears about the rapid pace of artificial intelligence development.

In a post on X (formerly Twitter), Adler announced that he left OpenAI in mid-November after working on AI safety, dangerous capability evaluations, agent control, and AGI (Artificial General Intelligence) governance for four years.

“It was a wild ride with lots of chapters – dangerous capability evals, agent safety/control, AGI and online identity, etc. – and I’ll miss many parts of it,” Adler wrote.

A growing fear of the future

In a follow-up post, however, Adler revealed the deeper reason behind his departure.

“Honestly, I’m pretty terrified by the pace of AI development these days. When I think about where I’ll raise a future family, or how much to save for retirement, I can’t help but wonder: Will humanity even make it to that point?” he wrote.

Adler warned that the AGI race is an extremely risky gamble, with no major AI lab having a definitive solution to the alignment problem—the challenge of ensuring AI systems act in alignment with human values.

He also pointed out that intense competition among AI labs pressures companies to accelerate development, even when ethical and safety concerns remain unresolved.

Seeking solutions for AI safety

After stepping away from OpenAI, Adler is now exploring AI safety and policy solutions.

“I’m enjoying a break for a bit, but I’m curious: what do you see as the most important & neglected ideas in AI safety/policy? I’m especially excited re: control methods, scheming detection, and safety cases,” he concluded in his post.

Adler’s resignation comes amid growing concerns from top AI researchers about the risks of uncontrolled AI development.

Geoffrey Hinton’s dire warning

Not long ago, Geoffrey Hinton, widely known as the “godfather of AI,” warned that AI could lead to human extinction within the next 30 years.

The British-Canadian computer scientist, who was awarded the 2024 Nobel Prize in Physics for his work on neural networks, estimated a 10% to 20% chance that AI could cause humanity’s downfall within three decades.

Hinton has repeatedly compared humans to toddlers when faced with the growing capabilities of AI.

“Imagine yourself and a three-year-old. We’ll be three-year-olds,” he said, emphasizing the potential intelligence gap between humans and future AI systems.

The future of AI safety

As AI advances at an unprecedented pace, experts warn that governments, researchers, and tech companies must take AI safety seriously.

With top minds in the field stepping away from leading AI labs, the debate over ethical AI development and the risks of AGI is becoming more urgent than ever.