Announcing the AutoAlign™ platform to create safe, reliable, and fair generative AI
Today, we are proud to announce the launch of AutoAlign™, a platform that helps enterprises develop and test generative AI models before they hit the market. The capabilities of generative AI are remarkable, but its shortcomings bring significant risks for enterprises. From eliminating bias and hallucinations to enhancing performance and safety, AutoAlign™ is a one-stop shop for building responsible, enterprise-grade generative AI solutions.
The datasets used to train generative AI models contain a vast array of historical human biases that have no place in enterprise applications. These models can also hallucinate, generating completely made-up information and introducing unpredictability into mission-critical applications.
Our AutoAlign™ platform combines guardrails with automated fine-tuning to ensure that generative AI solutions match a company’s use case, values, and goals. Companies can choose from a library of off-the-shelf controls, such as privacy protection, gender-assumption mitigation, jailbreaking protection, and racial-bias mitigation, or create a custom template. This means customers have full control over how their AI model behaves, with controls expressed as natural-language narratives, synset- and syntax-based patterns, or code. AutoAlign™ works with a growing variety of base models, including OpenAI LLMs like GPT, image-generation models like Stable Diffusion, open-source models, Bard, and others coming soon.
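To make the idea of composable controls concrete, here is a minimal sketch of what assembling such a policy could look like. The `Guardrail` and `Policy` names, fields, and `action` values are illustrative assumptions for this example, not the actual AutoAlign™ API.

```python
from dataclasses import dataclass, field

@dataclass
class Guardrail:
    """One behavioral control: a name, trigger patterns, and an action to take."""
    name: str
    patterns: list      # natural-language cues or syntax-based patterns to match
    action: str         # hypothetical actions: "block", "rewrite", or "warn"

@dataclass
class Policy:
    """A bundle of guardrails applied to a chosen base model."""
    base_model: str
    guardrails: list = field(default_factory=list)

    def add(self, rail: Guardrail) -> "Policy":
        self.guardrails.append(rail)
        return self

# Mix off-the-shelf controls from the library with a custom template.
policy = (
    Policy(base_model="gpt-4")
    .add(Guardrail("privacy_protection", ["ssn", "credit card number"], action="block"))
    .add(Guardrail("gender_assumption", ["assumed pronoun"], action="rewrite"))
    .add(Guardrail("custom_brand_tone", ["off-brand slang"], action="warn"))
)
print(len(policy.guardrails))  # 3
```

The design point is that each control is a small, declarative unit, so non-technical users can enable or disable behaviors without touching the underlying model.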
“There’s a growing call within the industry that as generative AI scales, we must ensure it’s safe and responsible. Based on feedback from our customers and industry experts, we knew that just setting more guardrails on AI models would not be enough. We’re empowering even non-technical users of generative AI to evaluate and enhance the performance of their AI models, which is a game changer as there are fewer and fewer professionals focused on responsible AI,” says our CTO Rahm Hafiz. “Our approach is more powerful than introducing guardrails or traditional fine-tuning alone, and is critical to scaling safe, reliable deployments as enterprises race to bring new models to market.”
Our AutoAlign™ platform generates its own data and can fine-tune a customer’s AI model using automated feedback based on how the customer has asked the AI model to behave. Throughout the process, our own AI aligns the target model through a series of steps: understanding the goals for the target model, generating data to capture the bounds of the model’s behavior, iteratively testing, discovering weak spots, and finally either fine-tuning the model, creating a customized guardrail model, or both.
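The steps above can be sketched as a simple iterative loop. Every function name here is a hypothetical stand-in (stubbed for illustration), not an AutoAlign™ internal; the sketch only shows the shape of the generate–test–fix cycle.

```python
def generate_probe_data(goals):
    # Generate test inputs that span the bounds of the desired behavior (stubbed).
    return [f"probe:{g}" for g in goals]

def passes(model, probe):
    # Test one probe against the model; here the model just tracks handled cases.
    return probe in model["handled"]

def fine_tune(model, failures):
    # Automated-feedback fine-tuning: teach the model its failing cases (stubbed).
    return {"handled": model["handled"] | set(failures)}

def build_guardrail_model(goals):
    # Optionally produce a customized guardrail model alongside fine-tuning.
    return {"blocks": list(goals)}

def align(target_model, goals, max_rounds=3):
    """Illustrative loop: generate data, iteratively test, fix weak spots."""
    probes = generate_probe_data(goals)
    for _ in range(max_rounds):
        failures = [p for p in probes if not passes(target_model, p)]
        if not failures:          # no weak spots left
            break
        target_model = fine_tune(target_model, failures)
    return target_model, build_guardrail_model(goals)

model = {"handled": set()}
tuned, guardrail = align(model, ["no_bias", "no_pii_leaks"])
print(sorted(tuned["handled"]))  # ['probe:no_bias', 'probe:no_pii_leaks']
```

In this toy version the loop converges after one fine-tuning round; the point is that testing, weak-spot discovery, and fine-tuning feed into each other rather than running once.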
The result is a user-friendly platform that automates the development of custom generative AI models in a safe, trustworthy, and ethical way. AutoAlign™ is currently being used by a select group of clients, including:
- HR software companies applying generative AI to their HR processes but requiring fairness to be built into their solutions
- Media outlets analyzing their news sources for bias
- Visual generation software companies creating brand-awareness campaigns aligned with their consumer base