-
Native Vector Support in Azure SQL Database in Public Preview
Azure SQL Database now supports native vector storage and processing in public preview, integrating vector similarity search directly with SQL queries. Keeping embeddings alongside relational data simplifies database management and avoids moving data to a separate vector store, benefiting AI applications in sectors such as e-commerce and healthcare that need context-aware search.
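The core idea behind combining vector search with SQL — filtering rows by ordinary predicates and ranking them by embedding distance in a single query — can be sketched in plain Python. This is only an illustration of the pattern; the actual T-SQL syntax in the Azure SQL preview differs, and the table and column names below are invented.

```python
import math

def cosine_distance(a, b):
    """Cosine distance between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

# Hypothetical product rows: relational columns plus an embedding column.
products = [
    {"id": 1, "category": "shoes", "embedding": [0.9, 0.1, 0.0]},
    {"id": 2, "category": "shoes", "embedding": [0.1, 0.9, 0.0]},
    {"id": 3, "category": "hats",  "embedding": [0.9, 0.1, 0.1]},
]

def vector_search(rows, query_vec, category, top_k=2):
    """Apply a relational filter, then rank candidates by vector distance."""
    candidates = [r for r in rows if r["category"] == category]
    ranked = sorted(candidates,
                    key=lambda r: cosine_distance(r["embedding"], query_vec))
    return ranked[:top_k]

hits = vector_search(products, [1.0, 0.0, 0.0], "shoes")
print([r["id"] for r in hits])  # nearest matching shoes first: [1, 2]
```

Doing both steps inside the database is what removes the data movement the announcement refers to: the embeddings never leave the engine that already holds the relational data.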
-
Optimizing Uber's Search Infrastructure: Upgrading to Apache Lucene 9.5
Uber Engineering recently announced an upgrade of its search infrastructure from Apache Lucene 8.0 to version 9.5. The upgrade improves the capabilities, performance, and efficiency of search across Uber's various services.
-
Grab Employs LLMs for Conversational Data Discovery with GPT-4, Glean and Slack
Grab responded to the challenge of finding valuable datasets among 200k+ tables by enhancing Hubble, its data discovery tool, with new capabilities leveraging GenAI technologies. The company shortened the data discovery process by using LLMs to generate dataset documentation and built a Slack bot to bring effective data discovery directly to data consumers.
-
Generative AI Capabilities for Logic Apps Standard with Azure OpenAI and AI Search Connectors
Microsoft has announced that the Azure OpenAI and Azure AI Search connectors for Logic Apps Standard are now generally available, following an earlier public preview. The connectors are fully integrated into Azure Integration Services, giving developers tools to enhance application functionality with AI capabilities.
-
Netflix Uses Elasticsearch Percolate Queries to Implement Reverse Searches Efficiently
Netflix engineers recently described how they use Elasticsearch Percolate Queries to "reverse search" entities in a connected graph. Reverse search means that instead of searching for documents that match a query, they search for queries that match a document, powering dynamic subscription scenarios where there is no direct association between the subscriber and the subscribed entities.
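The reverse-search inversion can be illustrated in a few lines of Python: instead of indexing documents and running queries against them, you register the queries (here, simple predicate functions) and ask which of them match an incoming document. Elasticsearch's percolator field performs the same inversion at scale; the subscription names and document fields below are purely illustrative.

```python
# Registered "subscriptions": each is a named predicate over documents.
saved_queries = {
    "new-sci-fi": lambda doc: doc["genre"] == "sci-fi" and doc["year"] >= 2020,
    "any-drama":  lambda doc: doc["genre"] == "drama",
    "classics":   lambda doc: doc["year"] < 1980,
}

def percolate(doc):
    """Return the names of all saved queries that match this document."""
    return sorted(name for name, query in saved_queries.items() if query(doc))

# A newly arrived document triggers only the subscriptions it satisfies.
print(percolate({"genre": "sci-fi", "year": 2023}))  # ['new-sci-fi']
print(percolate({"genre": "drama", "year": 1960}))   # ['any-drama', 'classics']
```

This is why percolation suits subscription scenarios: the subscriber registers a query once, and every future document is checked against all stored queries without the subscriber knowing the matching entities in advance.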
-
Amazon OpenSearch Zero ETL with S3 and New OR1 Instances
Amazon has announced the preview of Amazon OpenSearch Service's zero-ETL (extract, transform, and load) integration with Amazon S3, a new way to analyze operational logs in Amazon S3 and S3-based data lakes without switching between services. Amazon also announced the new OR1 instances for Amazon OpenSearch Service.
-
Practical Advice for Retrieval Augmented Generation (RAG), by Sam Partee at QCon San Francisco
At the recent QCon San Francisco conference, Sam Partee, principal engineer at Redis, gave a talk about Retrieval Augmented Generation (RAG). He discussed generative search, which combines large language models (LLMs) with vector databases to improve information retrieval, and covered techniques such as Hypothetical Document Embeddings (HyDE) and semantic caching.
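Of the techniques mentioned, semantic caching is straightforward to sketch: cache LLM answers keyed by the query's embedding, and serve a cached answer whenever a new query's embedding is close enough to one already seen. The threshold and the two-dimensional embeddings below are toy stand-ins, not Partee's or Redis's implementation.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

class SemanticCache:
    """Toy semantic cache: reuse an answer when a new query embedding
    is within a similarity threshold of a cached one."""

    def __init__(self, threshold=0.95):
        self.threshold = threshold
        self.entries = []  # list of (embedding, answer) pairs

    def get(self, embedding):
        for cached_emb, answer in self.entries:
            if cosine_sim(embedding, cached_emb) >= self.threshold:
                return answer  # cache hit: skip the LLM call
        return None  # cache miss: caller must query the LLM

    def put(self, embedding, answer):
        self.entries.append((embedding, answer))

cache = SemanticCache()
cache.put([1.0, 0.0], "42")        # answer cached for an earlier query
print(cache.get([0.99, 0.05]))     # near-duplicate query -> "42"
print(cache.get([0.0, 1.0]))       # unrelated query -> None
```

Because paraphrased questions produce nearby embeddings, a hit avoids a full LLM round trip, which is the latency and cost saving the technique targets.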
-
Google Expands Vertex AI Search and Conversation Capabilities
At its Google Cloud Next conference, Google officially introduced new capabilities for its enterprise AI platform, Vertex AI, which aim to enable more advanced user workflows, among other things.
-
Vector Engine for Amazon OpenSearch Serverless Now in Preview
AWS announced the preview release of vector storage and search capability within Amazon OpenSearch Serverless. The capability is intended to support machine learning augmented search experiences and generative AI applications.
-
Microsoft Introduces the Public Preview of Vector Search Feature in Azure Cognitive Search
At its annual Inspire conference, Microsoft recently announced the public preview of vector search in Azure Cognitive Search, a new capability for indexing, storing, and retrieving vector embeddings from a search index, aimed at building applications powered by large language models.
-
AWS Adds Multi-AZ with Standby Support to OpenSearch Service
Amazon OpenSearch Service recently introduced support for Multi-AZ with Standby, a new deployment option for the search and analytics engine that provides 99.99% availability and better performance for business-critical workloads.
-
GitHub Overhauls Code Search Using New Search Engine
GitHub has introduced its new code search feature, including a redesigned search interface, a new code view, and a search engine rebuilt from scratch to be faster, more capable, and better at understanding code, says GitHub software engineer Colin Merkel.
-
AWS Announces Amazon OpenSearch Ingestion for Streamlined Data Ingestion
AWS recently announced Amazon OpenSearch Ingestion, a capability of Amazon OpenSearch Service that provides a serverless, auto-scaled, managed data collector that receives, transforms, and delivers data to Amazon OpenSearch Service domains or Amazon OpenSearch Serverless collections.
-
Amazon OpenSearch Service Introduces Security Analytics
Amazon recently announced the general availability of security analytics for OpenSearch Service. The new capability of the service, the successor to Amazon Elasticsearch Service, provides threat monitoring, detection, and alerting features to help manage security threats.
-
AWS OpenSearch Serverless Now Generally Available
Amazon recently announced the general availability of OpenSearch Serverless, a new serverless option for Amazon OpenSearch Service that automatically provisions and scales the underlying resources for faster data ingestion and query responses.