Amazon Web Services has integrated Bria AI's latest text-to-image foundation models into Amazon SageMaker JumpStart, marking a significant expansion of its enterprise-grade generative AI capabilities. The addition includes three variants: Bria 2.3, Bria 2.2 HD, and Bria 2.3 Fast, each designed to address specific enterprise needs in visual content generation.
This move brings Bria AI's visual content generation models to a wider audience of developers and enterprises. Bria claims that its training "on commercial-grade licensed data, providing high standards of safety and compliance with full legal indemnity" addresses a critical concern enterprises have about AI-generated content.
Bria 2.3 serves as the core model, focusing on photorealism and detail rendering across various artistic styles. Bria 2.2 HD specializes in high-definition output, promising "crisp and clear" details that meet the demands of high-resolution applications. Bria 2.3 Fast, deployed on SageMaker g5 instances, delivers improved latency and throughput compared to Bria 2.3 and Bria 2.2 HD; deploying on p4d instances can further cut latency in half.
Bria 2.3, the base model, is accessible through the /text-to-image/base endpoint. It implements four guidance methods: controlnet_canny, controlnet_depth, controlnet_recoloring, and controlnet_color_grid, each providing a distinct control mechanism over the generation process. Bria 2.3 Fast, the performance-optimized variant, is accessible through the /text-to-image/fast endpoint and employs Latent Consistency Model (LCM) distillation to achieve faster response times. Bria 2.2 HD, accessed via the /text-to-image/hd endpoint, targets high-resolution output and supports two resolution configurations: 1920x1080 pixels for standard aspect ratios and 1536x1536 pixels for square format outputs.
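To illustrate, a request body for the base endpoint using one of the ControlNet guidance methods might look like the sketch below; the field names are assumptions drawn from typical text-to-image APIs, not Bria's documented schema, and should be verified against the model card:

```python
import base64

# Read a conditioning image for the canny-edge guidance method.
with open("edges.png", "rb") as f:
    guidance_image = base64.b64encode(f.read()).decode("utf-8")

# Hypothetical request payload; verify field names against the model card.
payload = {
    "prompt": "professional male skateboarder, sunglasses, teal and orange hue",
    "num_results": 1,                       # hypothetical parameter
    "guidance_method": "controlnet_canny",  # one of the four methods listed above
    "guidance_image": guidance_image,       # base64-encoded conditioning image
}
```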
Amazon SageMaker JumpStart offers a wide range of foundation models (FMs) available for ML practitioners to deploy on dedicated, network-isolated SageMaker instances. Practitioners can customize these models using SageMaker's integrated tools for model training and deployment, accessible either through the Amazon SageMaker Studio interface or the SageMaker Python SDK. SageMaker JumpStart supports comprehensive model performance tracking and MLOps controls using features such as Amazon SageMaker Pipelines, Debugger, and container logs, making it easier to manage and optimize ML workflows.
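As a rough sketch of how a JumpStart model is deployed with the SageMaker Python SDK (the model_id below is a placeholder rather than Bria's actual catalog identifier, and a Marketplace subscription is required first):

```python
from sagemaker.jumpstart.model import JumpStartModel

# The model identifier below is a placeholder; look up the exact ID
# for the desired Bria variant in the SageMaker JumpStart catalog.
model = JumpStartModel(model_id="bria-2-3")  # hypothetical ID

# Deploys the model to a dedicated SageMaker endpoint.
predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.2xlarge",
)
```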
The integration uses SageMaker JumpStart's infrastructure, allowing organizations to deploy these models within their virtual private cloud (VPC) environments. Bria models are available today for deployment and inference in SageMaker Studio in the 22 AWS regions where SageMaker JumpStart is available, and require g5 or p4 instances for deployment.
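Extending the sketch above, VPC isolation can be requested at model construction time via the SDK's vpc_config argument; the subnet and security-group IDs here are placeholders:

```python
# Placeholder network identifiers; substitute your own VPC resources.
vpc_config = {
    "Subnets": ["subnet-0123456789abcdef0"],
    "SecurityGroupIds": ["sg-0123456789abcdef0"],
}

# Assumes JumpStartModel accepts the standard sagemaker.model.Model
# vpc_config keyword, which pins the endpoint's containers to the VPC.
model = JumpStartModel(model_id="bria-2-3", vpc_config=vpc_config)
```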
Developers can access Bria models from the JumpStart navigation pane, where they can view model details including licensing, training-data information, and deployment options. The platform requires an AWS Marketplace subscription before deployment, with the process handling both initial setup and endpoint configuration.
The deployment workflow integrates with AWS's infrastructure through multiple launch methods, with the SageMaker console offering the most straightforward path. The system supports five instance types: ml.g5.2xlarge, ml.g5.12xlarge, ml.g5.48xlarge, ml.p4d.24xlarge, and ml.p4de.24xlarge, each subject to account-level service quotas. After selecting an instance type, users create an endpoint configuration and deploy the model, with SageMaker managing the infrastructure provisioning.
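Because these GPU instance types are gated by per-account quotas, it can be worth verifying endpoint-usage limits before deploying; a minimal check using the AWS Service Quotas API might look like this:

```python
import boto3

# List SageMaker service quotas and surface the GPU endpoint limits.
client = boto3.client("service-quotas")
paginator = client.get_paginator("list_service_quotas")

for page in paginator.paginate(ServiceCode="sagemaker"):
    for quota in page["Quotas"]:
        # Quota names typically read like "ml.g5.2xlarge for endpoint usage".
        name = quota["QuotaName"]
        if ("g5" in name or "p4" in name) and "endpoint usage" in name:
            print(f"{name}: {quota['Value']}")
```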
Testing capabilities exist through both SageMaker Studio's interface and notebook environments. The platform supports inference through sample request payloads, with the studio interface providing immediate visual feedback. For programmatic access, developers utilize the SageMaker Python SDK to interact with deployed endpoints, enabling integration into existing workflows and applications.
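For instance, reusing the predictor returned by the deployment sketch earlier, a programmatic request might look like the following; the payload fields and the base64 "images" response key are assumptions to verify against the model card:

```python
import base64

# `predictor` comes from model.deploy() in the earlier sketch; JumpStart
# predictors are typically preconfigured with JSON (de)serializers.
result = predictor.predict({
    "prompt": "close up of a vibrant blue and green parrot on a wooden branch",
    "num_results": 1,  # hypothetical parameter name
})

# Assumes the endpoint returns base64-encoded image data under an
# "images" key; adjust parsing to the actual response schema.
image_bytes = base64.b64decode(result["images"][0])
with open("parrot.png", "wb") as f:
    f.write(image_bytes)
```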
The models claim particular strength in generating images from detailed prompts. Prompts such as "photography, dynamic, in the city, professional male skateboarder, sunglasses, teal and orange hue", "young woman with flowing curly hair stands on a subway platform, illuminated by the vibrant lights of a speeding train, purple and cyan colors", "close up of vibrant blue and green parrot perched on a wooden branch inside a cozy, well-lit room", and "light speed motion with blue and purple neon colors and building in the background" produced images showing a strong understanding of complex visual concepts and style directions.
Source: Bria Model Generated Images
Cloud analyst Toni Witt emphasized, "The outputs of the Bria platform do not infringe copyright laws. Licensed from artists, repositories, and media companies, the training data, or image set, is highly vetted to exclude toxic data that could show up in the outputs again".
Aravind Bharadwaj, investment director at Intel Capital, explained: "What if, instead of indiscriminately scraping the web for any and all data for model training, one only used approved sources of data? What if content creators were provided attribution and monetary compensation for usage? What if platform users did not have to worry about inadvertently violating someone else's copyrights? These are the questions BRIA's founders asked themselves and the underlying principles upon which BRIA was built".
Gabrielle Chou, serial entrepreneur and adviser at Photoroom, urged caution: "These recent developments underscore the evolving legal and ethical landscape surrounding the use of copyrighted works in AI training. However, for companies looking to adopt GenAI technologies, this presents an exciting opportunity to lead in innovation while navigating these challenges responsibly".
In addition to being available on AWS, Bria models are available through Hugging Face and NVIDIA's NIM catalog. Developers and organizations can also explore and run the models in a playground environment at no cost, enabling experimentation before commitment. Here are a few Bria competitors in the commercial text-to-image space.