FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

A very good read from a respected source!

Inc. Technology. Anthropic’s Pitch to Produce Safe A.I. 

The company wants to make generative A.I. safe for customers and communities — not just investors.

BY PETER COHAN, FOUNDER, PETER S. COHAN & ASSOCIATES @PETERCOHAN

Business leaders should take action to benefit many groups of people. By that I mean that CEOs should make employees, customers, investors, and communities better off. Companies that trade off the well-being of employees, customers, and communities to reward their investors are taking a big risk.

This comes to mind in considering Anthropic, a San Francisco-based provider of technology used to build Generative AI chatbots. In 2021, Daniela Amodei and Dario Amodei, previously OpenAI executives, started Anthropic out of concern their employer cared more about commercialization than safety, according to Cerebral Valley.

By October 2023, the 192-employee company had raised a total of $7.2 billion at a $25 billion valuation, five times the company's value in May. With clients including Slack, Notion, and Quora, PitchBook forecasts that Anthropic's 2023 revenue will double to $200 million, and The Information reported that the company expects to reach $500 million in revenue by the end of 2024.

Caring about employees, customers, and communities

Cofounder Dario Amodei, a Princeton-educated physicist who led the OpenAI teams that built GPT-2 and GPT-3, is Anthropic's CEO. His younger sister, Daniela Amodei, who oversaw OpenAI's policy and safety teams, is Anthropic's president. As Daniela said, "We were the safety and policy leadership of OpenAI, and we just saw this vision for how we could train large language models and large generative models with safety at the forefront," The New York Times reported.

Anthropic's cofounders put their values into their product. They believe that the company's Claude 2, a rival to ChatGPT, can summarize larger documents and produce factually correct, non-offensive results. Claude 2 can summarize up to about 75,000 words, the length of a typical book. Users can input large documents and receive summaries in the form of a memo, letter, or story. In contrast, ChatGPT can handle a much smaller input of about 3,000 words, according to the Times.
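
To make that difference concrete, here is a minimal sketch in Python of the kind of pre-flight check a developer might run before choosing a model, using the approximate word limits reported above. The summarization call itself is omitted; nothing here is Anthropic's or OpenAI's actual API.

    # Word limits are the approximate figures reported by the Times.
    CLAUDE_2_WORD_LIMIT = 75_000  # roughly a full book
    CHATGPT_WORD_LIMIT = 3_000

    def fits_in_context(document: str, word_limit: int) -> bool:
        """Return True if the document's word count is within the limit."""
        return len(document.split()) <= word_limit

    book = "word " * 50_000  # stand-in for a ~50,000-word manuscript

    if fits_in_context(book, CHATGPT_WORD_LIMIT):
        print("Short enough for either model in one pass.")
    elif fits_in_context(book, CLAUDE_2_WORD_LIMIT):
        print("Book-length: fits Claude 2's window but not ChatGPT's.")
    else:
        print("Longer than either window; chunk the text first.")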

Arthur AI, a machine-learning monitoring platform, concluded Claude 2 had the most "self-awareness," meaning it accurately assessed its knowledge limits and answered only questions it had the training data to support, CNBC wrote. Anthropic's concern about safety led the company not to release the first version of Claude, which it developed in 2022, because employees feared people might misuse it. Anthropic delayed the release of Claude 2 because the company's red-teamers uncovered new ways it could become dangerous, according to the Times.

Using a self-correcting constitution to build safer Generative AI

When the Amodeis started the company, they thought Anthropic would do safety research using other companies’ AI models. They soon concluded innovative research was only possible if they built their own models. To do that, they needed to raise hundreds of millions of dollars to afford the expensive computing equipment required to build the models. They decided Claude should be helpful, harmless, and honest, the Times wrote.

To that end, Anthropic used Constitutional AI, an arrangement of two AI models: one operates according to a written list of principles drawn from sources such as the UN's Universal Declaration of Human Rights, and a second evaluates how well the first follows those principles, correcting it when necessary, according to the Times.
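
In code, that arrangement can be pictured as a critique-and-revision loop. The sketch below is a toy illustration of the idea as described, not Anthropic's implementation; generate, critique, and revise are placeholder functions standing in for calls to the two models, and the principle wording is paraphrased for the example.

    # Toy sketch of the two-model loop described above; all three
    # functions are placeholders, not real model calls.
    PRINCIPLES = [
        "Support freedom, equality, and human dignity.",
        "Avoid responses that are harmful or degrading.",
    ]

    def generate(prompt: str) -> str:
        # Stand-in for the first model, which drafts a response.
        return f"Draft answer to: {prompt}"

    def critique(response: str, principles: list[str]) -> str | None:
        # Stand-in for the second model, which checks the draft against
        # the written principles and returns feedback, or None if it passes.
        return None

    def revise(response: str, feedback: str) -> str:
        # Stand-in for the correction step driven by the critic's feedback.
        return response

    def constitutional_loop(prompt: str, max_rounds: int = 3) -> str:
        response = generate(prompt)
        for _ in range(max_rounds):
            feedback = critique(response, PRINCIPLES)
            if feedback is None:  # the draft satisfies every principle
                break
            response = revise(response, feedback)
        return response

    print(constitutional_loop("Summarize this contract."))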

In July 2023, Amodei provided examples of Claude 2's improvements over the prior version. Claude 2 scored 76.5 percent on the bar exam's multiple-choice section, up from Claude's 73 percent. The newer model scored 71 percent on a Python coding test, up from the prior version's 56 percent. Amodei said Claude 2 "was twice as good at giving harmless responses," CNBC wrote. Anthropic is a valuable Generative AI resource for companies eager to limit a chatbot's risk to their brand.

Learn More

A Brief History of Anthropic (and Claude) – Cerebral Valley

By: Nick Saraev

The 20th century saw computers revolutionize the way information is shared across the globe. Fast-forward to today, and smartphones are now a ubiquitous part of our everyday lives, providing us with access to almost all of the world's information in our pockets.

We are now seeing another revolution unfolding – that of sophisticated Artificial Intelligence transforming how we interact with technology.

Anthropic is a company at the forefront of this revolution. It was founded in 2021 by Daniela Amodei and Dario Amodei, former members of OpenAI, the company responsible for ChatGPT and other AI-driven content-generation tools such as DALL-E.

The Amodeis left OpenAI due to concerns over the direction it was taking. They believed that advanced AI should value safety first and foremost, and feared that an investment from Microsoft would push OpenAI onto an overly commercial path.

What Makes Anthropic Unique?

Though Anthropic is a relatively new company, it is already disrupting the AI industry with its innovative solutions. One of its primary goals is to understand AI on a deeper level.

AI has incredible potential, so this is unmistakably a necessary goal. Much of AI's household use so far has been confined to writing papers, creating art, or, a bit less recently, running voice assistants such as Siri.

If, or when, AI advances to the point of taking on more delicate roles (such as doing research for fields like healthcare or the law), we will want to better understand how and why it comes to the conclusions that it does. A mistake on an essay is one thing, but a mistake in an essential field could be a matter of life and death.

On their website, Anthropic states that their goal is to address the unpredictability, unreliability, and opaqueness of current AI systems. One way they are addressing safety concerns is through their research on Constitutional AI, wherein an AI self-improves while still following a set of pre-existing principles set by humans.
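
Concretely, those pre-existing principles are just written text that gets folded into the model's self-critique prompts. Here is a toy illustration in Python; the principle wording and the prompt template are invented for this sketch and are not Anthropic's actual constitution.

    import random

    # Toy constitution: short, human-written rules. Anthropic's published
    # work draws on sources such as the UN's Universal Declaration of
    # Human Rights.
    CONSTITUTION = [
        "Point out ways the response is harmful, unethical, or misleading.",
        "Point out ways the response fails to respect privacy or dignity.",
    ]

    def build_critique_prompt(draft: str) -> str:
        """Fold one chosen principle into a self-critique prompt."""
        principle = random.choice(CONSTITUTION)
        return (
            f"Here is a draft response:\n{draft}\n\n"
            f"Critique the draft using this principle: {principle}"
        )

    print(build_critique_prompt("Sure, here is some questionable advice..."))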

Anthropic's goal is to create a future where AI benefits humans, not one where humans are beholden to its whims. As a result, the company has received massive investments from Google and from one of this past year's more controversial figures: Sam Bankman-Fried, of the recent FTX scandal.

Investment in Anthropic

Sam Bankman-Fried was a self-proclaimed believer in "effective altruism," an ideology that promotes using resources in a way that will benefit as many people as possible. It is ironic, then, that he committed eight billion dollars' worth of fraud.

Anthropic is taking AI, something likely to be one of the most powerful tools of the 21st century, and gearing it towards benefitting people. Even if SBF was not a true proponent of “effective altruism,” one can see how investing in Anthropic fit his brand.

There are questions about whether his hedge fund, Alameda, will stay involved in Anthropic, and some are concerned that Alameda might control the company. Although Alameda invested $500 million in Anthropic, it does not hold a majority stake.

Some have speculated that Anthropic will have to claw back its shares from Sam Bankman-Fried and other FTX members. What the company will do here remains to be seen.

Anthropic and Google

Another of Anthropic's largest investors is the tech giant Google, which invested $300 million in Anthropic in late 2022.

This investment should come as no surprise. Part of Google's mission statement is to "organize the world's information and make it universally accessible and useful." As it currently stands, this is a large part of what AI does as well. ChatGPT and Anthropic's AI, Claude, compile massive amounts of information and relay it to users in ways that fulfill provided prompts.

This investment will let Anthropic buy more computing resources from Google's cloud-computing division. Companies like Anthropic require access to such platforms to handle their large AI systems.

Claude and ChatGPT

You might be wondering whether you can make any use of Anthropic's research right now. Anthropic has opened up early access to Claude, their rival to ChatGPT.

Claude has demonstrated similar capabilities to ChatGPT, as well as a surprising talent for humor (something AI often struggles with). Claude follows the Constitutional AI approach and rejects adversarial requests by appealing to its underlying constitutional principles.

It is not yet perfect, and it struggles with code more than ChatGPT does. Nonetheless, even in its early stages, Claude reflects Anthropic's commitment to safe AI, a commitment that we should all be pushing for.

Conclusion

Anthropic sets itself apart from OpenAI and other AI companies through its commitments to research and safety. By researching why AI makes the decisions that it does, Anthropic aims to gear AI towards outcomes which are beneficial to society as a whole.

It is unclear what will happen with Alameda's investments in Anthropic, but they do not constitute a majority of the company's shares. Investment from companies such as Google will allow Anthropic to continue its research and to improve its own AI.

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.