Security Content on InfoQ
-
Google’s Cybersecurity Model Sec-Gemini Enables SecOps Workflows for Root Cause and Threat Analysis
Google’s new cybersecurity model Sec-Gemini focuses on cybersecurity AI to enable SecOps workflows for root cause analysis (RCA), threat analysis, and vulnerability impact understanding.
-
How Meta Uses Precision Time Protocol to Handle Leap Seconds
For systems that require strict synchronization—like distributed databases, telemetry pipelines, or event-driven architectures—handling leap seconds incorrectly can lead to data loss, duplication, or inconsistencies. As such, managing leap seconds accurately ensures system reliability and consistency across environments that depend on high-precision time.
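To make the failure mode concrete, a common mitigation is to "smear" the extra second over a long window instead of stepping the clock. The sketch below assumes a simple linear ramp and a 17-hour window purely for illustration; it is not necessarily Meta's exact scheme.

```python
# Illustrative leap-second smearing: rather than inserting a discontinuous
# 61st second, the correction is spread over a window so clocks never jump.
# The 17-hour window and linear ramp are assumptions for illustration.
from datetime import datetime, timedelta, timezone

SMEAR_WINDOW = timedelta(hours=17)  # assumed smear duration
LEAP_EVENT = datetime(2016, 12, 31, 23, 59, 59, tzinfo=timezone.utc)
SMEAR_START = LEAP_EVENT - SMEAR_WINDOW

def smeared_offset(now: datetime) -> float:
    """Fraction of the leap second already applied at time `now`."""
    if now <= SMEAR_START:
        return 0.0
    if now >= LEAP_EVENT:
        return 1.0
    # Linear ramp from 0 to 1 second of correction across the window.
    return (now - SMEAR_START) / SMEAR_WINDOW

# Halfway through the window, half of the extra second has been absorbed:
print(smeared_offset(LEAP_EVENT - SMEAR_WINDOW / 2))  # 0.5
```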
-
QCon London 2025: Insights from 20+ Years in Mission-Critical Infrastructure
Matthew Liste, head of infrastructure at American Express, shared insights at QCon London 2025 on building robust cloud platforms in financial services. With 20+ years of experience, he emphasized stability, security, scalability, the value of interchangeable components, and long-term sustainability, urging professionals to maintain focus and foster a strong team culture for platform engineering.
-
GitHub Leverages AI for More Accurate Code Secret Scanning
GitHub has unveiled an AI-powered secret scanning feature within Copilot that improves the detection of generic passwords in code while significantly reducing false positives. By leveraging context analysis and collaborating with Microsoft, GitHub aims to strengthen repository security. The capability is now available to all users.
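For contrast, pre-AI scanners typically combined pattern matching with entropy heuristics, which is where many false positives come from. The toy detector below illustrates that baseline and the kind of context checks involved; the regex, placeholder list, and threshold are assumptions for illustration, not GitHub's actual rules.

```python
# Toy baseline secret detector: regex match plus entropy and placeholder
# checks. Illustrates the general technique, not GitHub's implementation.
import math
import re

ASSIGNMENT = re.compile(r'(?i)(password|passwd|pwd|secret)\s*=\s*["\']([^"\']+)["\']')
PLACEHOLDERS = {"changeme", "example", "dummy", "xxxx", "<password>"}

def shannon_entropy(s: str) -> float:
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def likely_secret(line: str) -> bool:
    m = ASSIGNMENT.search(line)
    if not m:
        return False
    value = m.group(2)
    # Context checks that cut false positives: skip obvious placeholders
    # and low-entropy strings unlikely to be real credentials.
    if value.lower() in PLACEHOLDERS or shannon_entropy(value) < 3.0:
        return False
    return True

print(likely_secret('password = "changeme"'))          # False: placeholder
print(likely_secret('password = "tr0ub4dor&3xQ!9z"'))  # True: high entropy
```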
-
Google Report Reveals How Threat Actors Are Currently Using Generative AI
Google's Threat Intelligence Group (GTIG) recently released a report on the adversarial misuse of generative AI. The team investigated prompts used by advanced persistent threat (APT) and coordinated information operations (IO) actors, finding that they have so far achieved productivity gains but have not yet developed novel capabilities.
-
Google Cloud's AI Protection: a Solution to Securing AI Assets
Google Cloud introduces AI Protection, a solution to safeguard against generative AI threats. It manages AI risks through vulnerability assessments, security policies, and proactive threat management, and integrates with Google’s Security Command Center to provide a centralized view of IT posture and advanced security intelligence for defending AI systems.
-
Google Enhances Data Privacy with Confidential Federated Analytics
Google has announced Confidential Federated Analytics (CFA), a technique designed to increase transparency in data processing while maintaining privacy. Building on federated analytics, CFA leverages confidential computing to ensure that only predefined and inspectable computations are performed on user data without exposing raw data to servers or engineers.
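A minimal sketch of the federated-analytics pattern that CFA builds on: clients aggregate locally, and the server combines only those aggregates, never raw records. The confidential-computing layer (attested execution of predefined, inspectable computations) is omitted here, and the function names are illustrative, not Google's API.

```python
# Federated-analytics pattern: raw events stay on-device; only local
# aggregates are shared and combined server-side.
from collections import Counter

def client_local_aggregate(raw_events: list[str]) -> Counter:
    """Runs on-device: raw events never leave the client."""
    return Counter(raw_events)

def server_combine(client_aggregates: list[Counter]) -> Counter:
    """Runs server-side: sees only per-client counts."""
    total = Counter()
    for agg in client_aggregates:
        total += agg
    return total

clients = [["crash", "crash", "ok"], ["ok"], ["crash"]]
aggregates = [client_local_aggregate(events) for events in clients]
print(server_combine(aggregates))  # Counter({'crash': 3, 'ok': 2})
```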
-
Meta Enhances Download Your Information Tool with Data Logs
Meta has recently introduced data logs as part of its Download Your Information (DYI) tool, enabling users to access additional data about their product usage. The change aims to enhance transparency and give users more control over their personal data.
-
GitLab Introduces Advanced Vulnerability Tracking to Tackle Code Volatility and Double Reporting
GitLab has introduced a new feature that addresses two significant challenges in vulnerability management: code volatility and double reporting. Code volatility refers to the frequent changes in codebases that can reintroduce previously resolved vulnerabilities, while double reporting occurs when multiple security tools identify the same vulnerability.
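One common way to collapse double-reported findings is a location-independent fingerprint: hash a normalized snippet plus the rule that fired, so the same flaw reported by two scanners, or shifted by unrelated edits, resolves to one record. The sketch below is hypothetical; the field names are assumptions, not GitLab's schema.

```python
# Hypothetical dedup fingerprint: identity derives from file, rule, and a
# whitespace-normalized snippet rather than a raw line number.
import hashlib

def fingerprint(file_path: str, rule_id: str, snippet: str) -> str:
    # Normalize whitespace so cosmetic churn doesn't change the identity.
    normalized = " ".join(snippet.split())
    payload = f"{file_path}|{rule_id}|{normalized}".encode()
    return hashlib.sha256(payload).hexdigest()

# Two tools report the same flaw with different formatting; the
# fingerprints match, so the finding is stored once.
a = fingerprint("app/auth.py", "sql-injection",
                'query("SELECT * FROM users WHERE id=" + uid)')
b = fingerprint("app/auth.py", "sql-injection",
                'query("SELECT * FROM users  WHERE id=" + uid)')
print(a == b)  # True
```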
-
Google Cloud Introduces Quantum-Safe Digital Signatures in Cloud KMS to Future-Proof Data Security
Google has introduced quantum-safe digital signatures in its Cloud Key Management Service, adhering to NIST post-quantum cryptography standards. The update addresses the risk that future quantum computers pose to traditional public-key cryptography, enabling organizations to adopt quantum-resistant signing before that threat materializes.
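As a rough sketch of what adoption could look like with the google-cloud-kms Python client, the snippet below creates a key using the announced ML-DSA-65 (FIPS 204) algorithm and signs raw data. Treat the algorithm enum and request shapes as assumptions to verify against current documentation.

```python
# Hedged sketch: post-quantum signing via Cloud KMS. Enum name and request
# shapes are assumptions based on the announcement; verify against the docs.
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Assumed: this key ring already exists.
parent = "projects/my-project/locations/global/keyRings/my-ring"

key = client.create_crypto_key(
    request={
        "parent": parent,
        "crypto_key_id": "pq-signing-key",
        "crypto_key": {
            "purpose": kms.CryptoKey.CryptoKeyPurpose.ASYMMETRIC_SIGN,
            "version_template": {
                "algorithm": kms.CryptoKeyVersion.CryptoKeyVersionAlgorithm.PQ_SIGN_ML_DSA_65,
            },
        },
    }
)

# Sign raw data once the key version is enabled; PQC schemes here sign the
# message directly rather than a precomputed digest.
version = f"{key.name}/cryptoKeyVersions/1"
response = client.asymmetric_sign(request={"name": version, "data": b"hello"})
print(response.signature.hex())
```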
-
Ensuring Security without Harming Software Development Productivity
Security can be at odds with a fast and efficient development process. At QCon San Francisco, Dorota Parad presented how to create a foundation for security without negatively impacting engineering productivity. She showed how you can make your security strategy almost invisible to engineers while embedding it deep into the culture.
-
AWS Launches Trust Center: a Centralized Resource for Security and Compliance Information
AWS Trust Center is a comprehensive online resource that enhances cloud security transparency. It details AWS's security practices, compliance protocols, and data protection controls, making it easier for customers to understand and manage their cloud security. This centralized hub provides real-time service status, security bulletins, and essential resources, improving customer trust and confidence.
-
Most Companies Experience Weekly Outages: The State of Resilience 2025 Report
According to The State of Resilience 2025 Report, published by Cockroach Labs, outages are commonplace in most organizations, with 55% of companies reporting weekly and 14% reporting daily outages. A staggering 100% of survey participants experienced revenue losses due to outages, with some companies (8%) reporting losses of $1 million or more over the last 12 months.
-
Build Resilient Systems with Insights on AI, Multi-Cloud, Leadership & Security at QCon London 2025
From AI and ML to cloud, leadership, and modern data strategies, QCon London 2025, April 7-10, features 15 tracks of insights from 125+ senior practitioners. Discover practical solutions for scaling architectures, enhancing productivity, securing supply chains, and integrating cutting-edge technologies, all through real-world examples and actionable takeaways.
-
OpenAI Presents Research on Inference-Time Compute to Better AI Security
OpenAI presented Trading Inference-Time Compute for Adversarial Robustness, a research paper that investigates the relationship between inference-time compute and the robustness of AI models against adversarial attacks.