If You Want Something, Try To Make It Happen
A conversation with Vinod Khosla
James Joaquin
Vinod Khosla needs little introduction. As the legendary co-founder of Sun Microsystems and founder of Khosla Ventures, he’s been an early backer of transformative companies like GitLab, Stripe, and OpenAI. Known for investing with unwavering values and principles, Vinod has shaped industries and inspired countless entrepreneurs.
I recently sat down with Vinod for a candid fireside chat at our annual Obvious CEO Summit. Vinod offered insights only he could, from his view that open-sourcing AI parallels open-sourcing the Manhattan Project to the story behind the largest check he’s written in 40 years—$50 million to OpenAI. Of course, we couldn’t skip his signature black sneakers with white soles, which he’s been wearing since long before they became the unofficial footwear of AI. He’s been having them custom-made for years. “I sort of have this habit,” he says. “If I want something, I try to make it happen.”
Here are some highlights from the conversation:
James Joaquin: In 2019, you wrote a $50 million check to OpenAI. That’s twice as much as any initial investment you had ever made, and at the time, OpenAI was a nonprofit. What was your rationale?
Vinod Khosla: I was convinced there would be an inflection point in AI capability. I had no idea when. But I figured if these technologies made a profit, then somehow we’d make a return. That was the only time I ever wrote an apologetic letter to my LPs saying, “We are making this investment, it’s high risk, it’s large, makes no sense, but we’re going to make it anyway.” We didn’t ask for permission.
It looks very prescient now.
You know, doing things you believe in actually yields more results if you’re persistent. But it wasn’t a mystery to realize that if this technology works, then it would be big.
Looking forward a little: in June of this year, a former OpenAI employee named Leopold Aschenbrenner published a 150-page technical paper called Situational Awareness that made a pretty compelling bull case for the path to superintelligence, describing the orders-of-magnitude advancements in AI. He also called on the U.S. government and leading AI companies to treat this technology as a military-level secret, warning of its potential role in a new arms race among nation-states. What’s your position on that?
I’m a huge China hawk, and the importance of AI as a critical resource in outpacing China cannot be overstated. Long before recent discussions, I was lobbying to ban open-sourcing state-of-the-art AI models—a stance that’s unusual for me, given my deep support for open source. But when it comes to state-of-the-art AI, the stakes are entirely different. AI is so powerful that it is akin to open-sourcing the Manhattan Project. Nobody thinks about doing that.
What’s your forecast for five years from now? What will the regulatory landscape look like?
Predicting the future, even just five years out, is difficult, but I’ve spent the past two years speaking with Republicans and Democrats, and they’re hugely receptive to my arguments. I’ve talked to a national security advisor, I’ve talked to the CIA, I’ve talked to a lot of senators very involved in the issue long before the publication of this paper. In most cases, the response has been measured and sensible. We’ll take a step-by-step approach to policy as things evolve. I was especially encouraged that when OpenAI released its latest model, they delayed the release by a month or two to allow the national security apparatus to vet the technology first. More companies seem to be adopting this approach, although not Meta. This might offer a way for open-source AI to coexist with national security concerns. But I strongly believe we shouldn’t open-source cutting-edge technology to China. The Chinese have good research. I’d prefer they did it on their own. I’d prefer it if they published their research and we didn’t.
What’s going to happen with labor displacement from AI? And do you see this as a typical technological revolution cycle?
Humanity has experienced many waves of technological progress where old jobs disappear, new ones emerge, and overall growth and prosperity follow. But this time, I think it’s different. In the past, technology served as tools that humans could use—not as replacements for the human mind. Back in 2014, I wrote a blog in Forbes called The Next Technology Revolution Will Drive Abundance and Income Disparity. Even then, I was deeply concerned about the employment implications of technological advancements and started thinking seriously about ideas like universal basic income (UBI).
Disruption sounds great from an entrepreneurial perspective, but it’s devastating if you’re the one being disrupted. Take Klarna, for instance—they replaced 70–80% of their customer support team in just three months, getting rid of 500 to 600 people. That kind of upheaval is alarming. If we don’t address these challenges, there will be resistance, as we’ve already seen. The Screen Actors Guild and the writers’ strike pushed back against the use of AI, but ironically, companies that refuse to embrace these tools may struggle to survive in the long term. So I like to say I’m a techno-optimist, but one who balances the opportunity with the risk.
Both our firms have long invested at the intersection of computer science and biology. We recently published a thesis, Generative Science, exploring how modern AI trained on chemistry, physics, and biology could replace human hypotheses in the scientific method to develop new molecules, vaccines, and battery chemistries. Have you thought about this?
Yeah, I believe AI scientists will be the most significant outcome of this, and we’ll see them emerge within two to three years. These tools could boost productivity and effectively create tens of millions of new scientists five years from now, accelerating the pace of discovery in fields like biology and materials science. I’m very optimistic about this and love investing in the area.
You can watch the entire conversation here: