OpenAI Drama Continues

Joe Biden Mentioned “AI” In His Speech Last Night, So Here He Is

Welcome Artisan,

OpenAI can’t keep drama away from its doorstep.

A day after we wrote about OpenAI publishing Elon Musk’s old emails, a new piece in The New York Times alleged that the company’s CTO, Mira Murati, played a large role in Sam Altman’s initial ousting from the company in November 2023.

The piece even speaks of Sam’s “history of manipulative behavior.”

Let’s dive in.

Hot Off The Press

The One Big Thing

What is the one big thing?

Mira Murati, CTO of OpenAI

I’d like to reiterate that this is not a news or gossip newsletter. But when the board and C-suite of the most prominent AI company of the moment call out its CEO for manipulative behavior, we have to talk about it.

The Basics:

  • Sam Altman, CEO of OpenAI, was temporarily ousted by the company's board of directors more than three months ago

  • An upcoming report from an outside law firm is expected to offer insights into the board's decision and the events surrounding Altman's temporary removal

  • Murati, OpenAI’s CTO, drafted a private memo to Altman critiquing his management style and shared concerns with the board, influencing their decision to oust Altman

  • Ilya Sutskever, co-founder and Chief Scientist at OpenAI, expressed worries about Altman's allegedly manipulative behavior and highlighted issues of creating a toxic work environment

Why Does This Matter?

It’s now becoming a consensus view that we are approaching AGI.

How we get there, however, and which people will get us there, is anyone’s guess at this point. Obviously, the stakes here are as high as they come. I’ve called AGI the most powerful technology humans will ever create (and I am not being hyperbolic), so it stands to reason that we need to get this one right.

So when executives at OpenAI, the company pushing hardest toward AGI right now, show signs of concern about their leader, it should be a signal for the rest of us.

Sam Altman seems like a fine person from the outside, but the lens I look at this through is: “Who do we want in charge of the nuclear arsenal of the whole world?” At this stage, the only wrong answer is anyone who answers with themself, but maybe we can add anyone with a history of manipulative behavior to that list as well.

All of this is to say that we should be very careful with OpenAI’s, or any other company’s, stated intentions. There is too much power at stake to take anyone’s word at face value; the best we can hope for is open systems that level the field of access without letting too much control fall into the wrong hands.

Whose hands those are is anyone’s guess.

Generate Characters and Play with them in 3D

A new application for chess enjoyers…

Real-time text-to-speech model

Tools

Must-have tools for every Renaissance creator to add to their toolkit:

Deep Tech

The newest and coolest in the research world that you need to know about:

  • KIWI: a dataset with instructions for knowledge-intensive, document-grounded writing for long-form answers to research questions

  • CVPR 2024 paper: Self-correcting LLM-controlled Diffusion Models (SLD)

  • H2O (Human2HumanOid): An RL-based human-to-humanoid real-time whole-body teleoperation framework

  • Amazon Reviews 2023 dataset: 500M+ user reviews, 48M+ items, 60B+ tokens

  • PixArt-Σ: Weak-to-Strong Training of Diffusion Transformer for 4K Text-to-Image Generation

Closing Thought

Deepfake State of the Union addresses are coming

Biden was looking healthy last night though

Work With Us!

The AI Renaissance is coming and we are building the best community of the people making it happen.

Contact us to sponsor your product or brand and reach the exact audience for your needs across our newsletter and podcast network.