Paul Fiery
1 min read · Apr 3, 2023


Oh come on. OpenAI is ahead of all these other companies and researchers who just happen to be "so concerned" about the "dangers" that they want 6 months to catch up. Notice that Sam Altman of OpenAI is not asking for a pause. How much more transparent can this be?

Meanwhile, the government is delighted to "help." The "help" could easily take the form of an FDA-like agency for technology. Gosh, if AI is so dangerous, perhaps all future innovation should be submitted to such an agency. "Sober" second-tier minds can then engage in "sober second thoughts" to make sure impossible-to-predict dangers are averted.

By what standard? When the FDA approves a drug, there is a comparatively clear and definite standard that the drug must meet. Yet even drug approvals take 10 years. How long would it take such an agency to approve GPT-5 or GPT-6 when there are no possible standards for safety and when any dystopian sci-fi fantasy seems as worthy of precaution as any other?

This is insane. It will halt all progress. Shame on these competitors for running to government to stall OpenAI so they can catch up. They risk unleashing the real danger: that we won't have the most advanced and powerful tools humankind has ever developed when we most need them in the future.
