A very good start from a respected source! (but how will AGI be CONTAINED and CONTROLLED??)

Learn more:

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.

Meta. Trust and safety. Welcome to Purple Llama.

Empowering developers, advancing safety, and building an open ecosystem.

Why Purple Llama. An approach to open trust and safety in the era of generative AI.

Drawing inspiration from the cybersecurity concept of “purple teaming,” Purple Llama embraces both offensive (red team) and defensive (blue team) strategies. Our goal is to empower developers in deploying generative AI models responsibly, aligning with best practices outlined in our Responsible Use Guide. Our investment in Purple Llama reflects a comprehensive approach, seamlessly integrated to guide developers through the AI innovation landscape, from ideation to deployment. It brings together tactics for testing, improving, and securing generative AI, to support your mitigation strategies.

Tools. Cybersecurity.

We are making available what we believe to be the industry’s first and most comprehensive set of open source cybersecurity safety evaluations for large language models (LLMs): CyberSecEval. Read the paper.

Our evaluation suite measures LLMs’ propensity to generate insecure code and comply with requests to aid cyber attackers.

Testing for insecure coding practice generation

Insecure coding practice tests measure how often an LLM suggests risky security weaknesses in both autocomplete and instruction context and as defined in the Common Weakness Enumeration industry-standard insecure coding practice taxonomy. They also evaluate generation quality with a Bilingual Evaluation Understudy (BLEU) score, an algorithm for evaluating the quality of text which has been machine-translated from one natural language to another, to help ensure secure code is usable.

Insecure Code Detector

At the heart of our insecure coding practice tests is our Insecure Code Detector tool, which can be used to help detect insecure coding practices. It can be used in test case generation and to help detect insecure coding practices in LLM-generated code.

Testing for compliance with requests to help with cyber attacks

Our attack compliance tests address the space of attacker tactics, techniques, and procedures as defined in the industry standard MITRE ATT&CK taxonomy of cyber attacks. This suite tests an LLM’s willingness to comply with requests to help an attacker scale malicious workflows.

By using CyberSecEval developers can start to help improve their LLM’s ability to generate secure and usable code, reduce the risk of an LLM introducing insecure code into production code bases, and reduce the risk that malicious users will leverage a deployed LLM to support cyberattacks.

For more information, read the paper and access the eval.

Our Approach

Responsibility. Responsible Use Guide.

To promote a responsible, collaborative AI innovation ecosystem, we’ve established a range of resources for all who use Llama 2: individuals, creators, developers, researchers, academics, and businesses of any size. The Responsible Use Guide is a resource for developers that provides recommended best practices and considerations for building products powered by LLMs in a responsible manner, covering various stages of development from inception to deployment. Responsible Use Guide.

Open approach Our investment in open trust and safety with Purple Llama.

With Purple Llama we are furthering our commitment to an open approach to build generative AI with trust and safety in mind. We believe in transparency, open science, and cross-collaboration, and to date, we’ve released over a thousand open-source libraries, models, datasets, and more. The launch of Purple Llama is another contribution towards creating an open AI ecosystem, involving academics, policymakers, industry professionals, and society at large in the responsible development of generative AI. Learn more

Partnerships. Ecosystem.

In fostering a collaborative approach, we look forward to partnering with ML commons, the newly formed AI alliance, and a number of leading AI companies. We’ve also engaged with our partners at Papers With Code and HELM to incorporate these evaluations into their benchmarks, reinforcing our commitment through active participation within the ML Commons AI Safety Working Group. Partners include: AI Alliance, AMD, Anyscale, AWS, Bain, Cloudflare, Databricks, Dell Technologies, Dropbox, Google Cloud, Hugging Face, IBM, Intel, Microsoft, MLCommons, Nvidia, Oracle, Orange, Scale AI, Together.AI, and many more to come

FOR EDUCATIONAL AND KNOWLEDGE SHARING PURPOSES ONLY. NOT-FOR-PROFIT. SEE COPYRIGHT DISCLAIMER.