Generative AI Content on InfoQ
-
Researchers Open-Source LLM Jailbreak Defense Algorithm SafeDecoding
Researchers from the University of Washington, the Pennsylvania State University, and the Allen Institute for AI have open-sourced SafeDecoding, a technique for protecting large language models (LLMs) against jailbreak attacks. SafeDecoding outperforms baseline jailbreak defenses without incurring significant computational overhead.
-
eBay’s Lessons Learned about Generative AI in Software Development Productivity
eBay recently shared the lessons it has learned from applying generative AI to its software development process. The company identified three avenues for enhancing developer productivity: integrating commercial offerings, fine-tuning existing large language models (LLMs), and harnessing an internal knowledge network.
-
Google BigQuery Introduces Vector Search
Google recently announced that BigQuery now supports vector search. The new functionality enables vector similarity search required by data and AI use cases such as semantic search, similarity detection, and retrieval-augmented generation (RAG) with a large language model (LLM).
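In practice, the capability is exposed through the VECTOR_SEARCH table function in BigQuery SQL. The sketch below shows how such a query might be issued from Python with the google-cloud-bigquery client; the dataset, table, and column names are hypothetical.

```python
# Minimal sketch of a BigQuery vector similarity search issued from Python.
# Dataset, table, and column names below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

# VECTOR_SEARCH returns the rows of the base table whose embedding column is
# closest to the supplied query vector(s).
sql = """
SELECT base.id, base.description, distance
FROM VECTOR_SEARCH(
  TABLE my_dataset.products,   -- table holding precomputed embeddings
  'embedding',                 -- column containing the vectors
  (SELECT embedding FROM my_dataset.search_queries WHERE query_id = 'q1'),
  top_k => 5                   -- return the five nearest neighbors
)
"""

for row in client.query(sql).result():
    print(row.id, row.description, row.distance)
```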
-
Amazon Q Data Integration in AWS Glue Simplifies Data Transformation on AWS
Recently, AWS announced the preview of a new feature for AWS Glue, enabling customers to use natural language for authoring and troubleshooting data integration jobs. With Amazon Q data integration in AWS Glue, developers can provide a description of their data integration workload, and the service will generate an ETL script.
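The generated output is a standard AWS Glue PySpark script. As a hedged illustration, a request such as "read the CSV files under s3://example-bucket/raw/, drop rows with a null id, and write the result as Parquet to s3://example-bucket/curated/" might produce a script along these lines; the bucket paths and field names are hypothetical.

```python
# Illustrative AWS Glue ETL script of the kind Amazon Q data integration might generate.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw CSV data from S3 as a DynamicFrame
source = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-bucket/raw/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Drop rows whose id field is null
filtered = source.filter(lambda record: record["id"] is not None)

# Write the cleaned data back to S3 as Parquet
glue_context.write_dynamic_frame.from_options(
    frame=filtered,
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/"},
    format="parquet",
)
job.commit()
```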
-
NVIDIA Unveils Chat with RTX, a Locally Run AI Chatbot
NVIDIA has introduced Chat with RTX, allowing users to build their own personalized chatbot experience. Unlike many cloud-based solutions, Chat with RTX operates entirely on a local Windows PC or workstation, offering enhanced data privacy and control.
-
Upcoming InfoQ & QCon Events: Level up on Generative AI, Security, Platform Engineering, and More
As we navigate these transformative times, the upcoming InfoQ events stand as a platform to help you stay ahead, gain valuable insights, and find practical solutions to your development challenges in 2024 and beyond. The events are carefully curated for senior software engineers, architects, and team leaders, offering practitioner insights into emerging trends, patterns, and practices.
-
Harnessing AI-Generated CloudFormation with Application Composer
The AWS Toolkit for VS Code has recently extended its support to include AWS Application Composer, introduced a year ago in the AWS Management Console. This enhancement lets users craft Infrastructure as Code (IaC) for more than 1,100 AWS CloudFormation resources.
-
Mistral AI's Open-Source Mixtral 8x7B Outperforms GPT-3.5
Mistral AI recently released Mixtral 8x7B, a sparse mixture of experts (SMoE) large language model (LLM). The model contains 46.7B total parameters, but performs inference at the same speed and cost as models one-third that size. On several LLM benchmarks, it outperformed both Llama 2 70B and GPT-3.5, the model powering ChatGPT.
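The speed and cost claim follows from sparse expert routing: for each token, a router activates only a small subset of the experts (reportedly two of the eight per layer), so only a fraction of the 46.7B parameters participate in any forward pass. The toy NumPy sketch below illustrates top-2-of-8 routing in general; it is not Mistral AI's implementation.

```python
# Toy sketch of a sparse mixture-of-experts layer with top-2-of-8 routing,
# illustrating why only a fraction of the parameters are active per token.
import numpy as np

NUM_EXPERTS, TOP_K, D_MODEL, D_FF = 8, 2, 16, 64
rng = np.random.default_rng(0)

# Router weights and one small MLP per expert
router_w = rng.normal(size=(D_MODEL, NUM_EXPERTS))
experts = [
    (rng.normal(size=(D_MODEL, D_FF)), rng.normal(size=(D_FF, D_MODEL)))
    for _ in range(NUM_EXPERTS)
]

def moe_layer(x):
    """Route a single token vector x through its top-2 experts only."""
    logits = x @ router_w
    top = np.argsort(logits)[-TOP_K:]                          # indices of the 2 best experts
    weights = np.exp(logits[top]) / np.exp(logits[top]).sum()  # softmax over the chosen experts
    out = np.zeros_like(x)
    for w, idx in zip(weights, top):
        w_in, w_out = experts[idx]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)         # ReLU MLP expert
    return out

token = rng.normal(size=D_MODEL)
print(moe_layer(token).shape)  # (16,) -- only 2 of the 8 experts did any work
```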
-
Google Announces Video Generation LLM VideoPoet
Google Research recently published their work on VideoPoet, a large language model (LLM) that can generate video. VideoPoet was trained on 2 trillion tokens of text, audio, image, and video data, and in evaluations by human judges its output was preferred over that of other models.
-
OpenAI Adopts Preparedness Framework for AI Safety
OpenAI recently published a beta version of their Preparedness Framework for mitigating AI risks. The framework lists four risk categories and definitions of risk levels for each, as well as defining OpenAI's safety governance procedures.
-
OpenAI Publishes GPT Prompt Engineering Guide
OpenAI recently published a guide to Prompt Engineering. The guide lists six strategies for eliciting better responses from their GPT models, with a particular focus on examples for their latest version, GPT-4.
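As an illustration of one of those strategies, writing clear instructions, the hedged sketch below gives the model a role, explicit constraints, and delimited input using the standard OpenAI Python SDK; the prompt text is invented for the example.

```python
# Sketch of the "write clear instructions" strategy: assign a role, state explicit
# constraints, and delimit the input text. Prompt content is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

article = "..."  # text to summarize

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "You are a technical editor. Summarize the article between "
                       "triple quotes in exactly three bullet points for a developer audience.",
        },
        {"role": "user", "content": f'"""{article}"""'},
    ],
)
print(response.choices[0].message.content)
```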
-
Amazon Q Code Transformation: Automating Java Application Upgrades
AWS has recently announced the preview of Amazon Q Code Transformation, a service designed to simplify the process of upgrading existing Java application code through generative artificial intelligence. The new feature aims to minimize legacy code and automate common language upgrade tasks required to move off older language versions.
-
Microsoft Announces Small Language Model Phi-2
Microsoft Research announced Phi-2, a 2.7-billion-parameter Transformer-based language model. Phi-2 was trained on 1.4T tokens of synthetic data generated by GPT-3.5 and outperforms larger models on a variety of benchmarks.
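Microsoft has published the model weights on Hugging Face under the microsoft/phi-2 id, so a minimal local trial might look like the following sketch; the prompt is illustrative, and older transformers versions may additionally require trust_remote_code=True.

```python
# Minimal sketch of running Phi-2 locally via Hugging Face Transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

inputs = tokenizer(
    "Explain the difference between a list and a tuple in Python.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```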
-
Microsoft's Orca 2 LLM Outperforms Models That Are 10x Larger
Microsoft Research released its Orca 2 LLM, a fine-tuned version of Llama 2 that performs as well as or better than models that contain 10x the number of parameters. Orca 2 uses a synthetic training dataset and a new technique called Prompt Erasure to achieve this performance.
-
Amazon Unveils Titan AI Image Generator
Amazon unveiled Titan Image Generator, currently in preview for AWS customers on Bedrock, Amazon's AI development platform. As a member of Amazon's Titan family of generative AI models, Titan Image Generator can generate new images from a text description or customize existing images.
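For customers with Bedrock access, a hedged sketch of a text-to-image call via boto3 might look like the following; the model id follows Bedrock's published naming, while the prompt and generation settings are illustrative.

```python
# Sketch of invoking Titan Image Generator through Amazon Bedrock with boto3.
# Prompt text and generation settings are illustrative.
import base64
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

body = json.dumps({
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a watercolor painting of a lighthouse at dawn"},
    "imageGenerationConfig": {"numberOfImages": 1, "height": 512, "width": 512},
})

response = bedrock.invoke_model(modelId="amazon.titan-image-generator-v1", body=body)
payload = json.loads(response["body"].read())

# The response contains base64-encoded images; decode and save the first one
with open("lighthouse.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```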