A Brief History of Anthropic (and Claude)

By: Nick Saraev

The 20th century saw computers revolutionize the way information is shared across the globe. Fast-forward to today, and smartphones are now a ubiquitous part of our everyday lives, providing us with access to almost all of the world's information in our pockets.

We are now seeing another revolution unfolding: sophisticated artificial intelligence is transforming how we interact with technology.

Anthropic is a company at the forefront of this revolution. It was founded in 2021 by Daniela and Dario Amodei, former members of OpenAI, the company behind ChatGPT and AI content-generation tools such as DALL-E.

The Amodeis left OpenAI over concerns about the direction it was taking. They believed that advanced AI should put safety first and foremost, and feared that an investment from Microsoft would push OpenAI onto an overly commercial path.

What Makes Anthropic Unique?

Though Anthropic is a relatively new company, it is already disrupting the AI industry with its innovative solutions. One of its primary goals is to understand AI on a deeper level. 

AI has incredible potential, so this is unmistakably a necessary goal. Much of AI's household use so far has been confined to writing papers, creating art, or, a bit less recently, powering voice assistants such as Siri.

If, or when, AI advances to the point of taking on more delicate roles (such as research in fields like healthcare or law), we will want to better understand how and why it reaches the conclusions it does. A mistake on an essay is one thing, but a mistake in an essential field could be a matter of life and death.

On its website, Anthropic states that its goal is to address the unpredictability, unreliability, and opaqueness of current AI systems. One way it is addressing safety concerns is through its research on Constitutional AI, in which an AI critiques and revises its own outputs while still following a set of pre-existing principles written by humans.
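
To make that idea a little more concrete, here is a minimal, hypothetical sketch in Python of the critique-and-revise loop that Constitutional AI describes. The generate function and the two principles below are illustrative placeholders, not Anthropic's actual model, API, or constitution.

# Illustrative sketch only: a toy critique-and-revise loop in the spirit of
# Constitutional AI. "generate" stands in for a call to a language model.

CONSTITUTION = [
    "Choose the response that is most helpful, honest, and harmless.",
    "Avoid responses that assist with dangerous or illegal activity.",
]

def generate(prompt: str) -> str:
    """Placeholder for a language-model call (hypothetical)."""
    return f"[model output for: {prompt}]"

def constitutional_revision(user_prompt: str) -> str:
    draft = generate(user_prompt)
    for principle in CONSTITUTION:
        # Ask the model to critique its own draft against one principle...
        critique = generate(
            f"Critique this response against the principle '{principle}':\n{draft}"
        )
        # ...then rewrite the draft to address that critique.
        draft = generate(
            f"Rewrite the response to address this critique:\n{critique}\nResponse:\n{draft}"
        )
    return draft

print(constitutional_revision("Explain how vaccines work."))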

Anthropic’s goal is to create a future where AI benefits humans, not one where humans are beholden to its whims. The company has received massive investments from Google and from one of this past year's more controversial figures, Sam Bankman-Fried of the recent FTX scandal.

Investment in Anthropic

Sam Bankman-Fried was a self-proclaimed believer in “effective altruism,” an ideology that promotes using resources in a way that benefits as many people as possible. It is ironic, then, that he committed eight billion dollars’ worth of fraud.

Anthropic is taking AI, something likely to be one of the most powerful tools of the 21st century, and gearing it towards benefitting people. Even if SBF was not a true proponent of “effective altruism,” one can see how investing in Anthropic fit his brand.

There are questions about whether his hedge fund, Alameda Research, will stay involved in Anthropic, and some are concerned about how much control it has over the company. Although Alameda invested $500 million in Anthropic, it does not hold more than 50% of the company.

Some have speculated that Anthropic will have to claw back its shares from Sam Bankman-Fried and other FTX members. What happens here remains to be seen.

Anthropic and Google

Another of Anthropic’s largest investors is the tech giant Google, which invested $300 million in Anthropic in late 2022.

This investment should come as no surprise. Part of Google's mission statement is to "organize the world’s information and make it universally accessible and useful". As it currently stands, this is a large part of what AI does as well. ChatGPT and Anthropic’s AI, Claude, compile massive amounts of information and relay it to users in ways that fulfill provided prompts.

This investment will let Anthropic buy more computing resources from Google’s cloud computing division. Companies like Anthropic require access to platforms like these in order to handle their large AI systems.

Claude and ChatGPT

You might be wondering – can I make any use of Anthropic’s research right now? Anthropic has opened up early access to Claude, their rival to ChatGPT.

Claude has demonstrated capabilities similar to ChatGPT's, as well as a surprising talent for humor (something AI often struggles with). Claude follows the Constitutional AI approach and rejects adversarial requests by appealing to its underlying constitutional principles.

It is not yet perfect, and it struggles with code more than ChatGPT does. Nonetheless, even in its early stages, Claude reflects Anthropic’s commitment to safe AI, a commitment we should all be pushing for.

Conclusion

Anthropic sets itself apart from OpenAI and other AI companies through its commitments to research and safety. By researching why AI makes the decisions that it does, Anthropic aims to gear AI towards outcomes which are beneficial to society as a whole.

It is unclear what will happen with Alameda’s investment in Anthropic, but it does not constitute more than 50% of the company’s shares. Investment from companies such as Google will allow Anthropic to continue researching and improving its own AI.