Efficient DevSecOps Workflows with a Little Help from AI

Key Takeaways

  • AI is enhancing DevSecOps workflows by streamlining tasks, improving security, and optimizing operations. Use AI for code suggestions, test generation, and chat prompts to boost productivity.
  • Efficiently address security vulnerabilities with AI's explanations and proposed fixes. Use AI for root cause analysis, log summarization, and performance optimization in your operations.
  • Implement required guardrails, including data privacy controls, access management, and prompt validation, to ensure responsible and secure AI usage.
  • Monitor and measure the impact of AI on your workflows through metrics and dashboards, adapting strategies as needed.
  • Explore advanced AI techniques such as Retrieval Augmented Generation (RAG) and custom models for further optimization, and keep pace with the evolving fields of AI agents and prompt engineering.

DevSecOps is a powerful approach to software development, enabling faster delivery and improved efficiency.

During my QCon London 2024 presentation, I explored how teams face varying levels of inefficiency in their DevSecOps processes, hindering progress and innovation.

I highlighted common issues like excessive debugging time and inefficient workflows, while also demonstrating how Artificial Intelligence (AI) can be a powerful tool to streamline these processes and boost efficiency.

Cloud Native - DevSecOps

Let's explore DevSecOps and its connection to cloud native. As you navigate the DevSecOps journey, consider your current stage. Are you deploying, automating tests, utilizing a staging environment, or starting from scratch?

Throughout this discussion, I encourage you to identify the most inefficient task you currently face. Is it issue creation, coding, testing, security scanning, deployment, troubleshooting, root cause analysis, or something else?

Imagine the potential of AI to enhance efficiency. However, workflows differ widely between teams and roles, so it's crucial to account for that diversity. We'll explore specific examples later.

Establishing guardrails for AI is essential. Ensure data security and prevent leaks from your environment. Additionally, it's vital to measure the impact of AI. Don't implement AI simply because others are. Build a compelling case and demonstrate its value. We'll delve into this aspect as well.

AI in Development Workflows

In the fast-paced world of software development, simplifying workflows is crucial for efficiency and success. In 2024, 70% of respondents shared that it takes developers in their organization more than a month to onboard and become productive, up from 66% in 2023 (source: GitLab 2024 Global DevSecOps Survey). AI is poised to reshape the way we work. By using AI in our workflows, we can unlock many benefits: improved efficiency, less time spent on repetitive tasks, a better understanding of code, increased collaboration and knowledge sharing, and a streamlined onboarding process.

When it comes to software development, AI offers many possibilities to enhance workflows at every stage, from supporting specialized roles such as development, operations, and security to facilitating typical steps like planning, managing, coding, testing, documentation, and review.

AI-powered code suggestions and generation capabilities can automate tasks like autocompletion and identification of missing dependencies, making coding more efficient. Additionally, AI can explain code, summarize algorithms, suggest performance improvements, and refactor long code into object-oriented patterns or different languages.

AI's impact extends beyond development into the realm of operations as well. From a short description, AI can generate a comprehensive issue description, saving valuable time and resources. It can also summarize lengthy discussions and issue threads, making it easier for team members to stay informed and engaged.

Real-World Examples of AI in Action

Anthropic's Claude Workbench is a powerful tool for developing and running prompt queries against the underlying Large Language Models (LLMs). For instance, a simple prompt can generate comprehensive guidance on starting a Golang project, complete with CLI commands, CI/CD configuration for GitLab, and even OpenTelemetry instrumentation. This eliminates the need to sift through countless tabs and resources, saving time and boosting efficiency, especially for new team members.
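
To make that concrete, the OpenTelemetry snippet such a prompt might return often resembles the minimal Go sketch below, here with a stdout exporter so it runs standalone; the tracer and span names are placeholders, and a real setup would typically swap in an OTLP exporter:

```go
package main

import (
	"context"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/stdout/stdouttrace"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	// Export spans to stdout; a production setup would use an OTLP exporter.
	exporter, err := stdouttrace.New(stdouttrace.WithPrettyPrint())
	if err != nil {
		log.Fatal(err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exporter))
	defer func() { _ = tp.Shutdown(context.Background()) }()
	otel.SetTracerProvider(tp)

	// Wrap a unit of work in a span; pass ctx downstream to propagate the trace.
	ctx, span := otel.Tracer("demo").Start(context.Background(), "do-work")
	defer span.End()
	_ = ctx
}
```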

Additionally, AI can help craft detailed issue descriptions, transforming a brief idea into a comprehensive proposal. In the example provided, the tool explored whether to instrument the source code using SDKs or auto-instrumentation as appropriate. This is a great way to kickstart discussions and explore different solutions.

Furthermore, AI proves valuable for summarizing lengthy discussions and plans, allowing for quick comprehension of complex issues. By pasting the content into the Anthropic Claude 3 Workbench, it efficiently condenses the information, enabling faster decision-making and a more focused approach.

AI also shows its versatility by summarizing long issue descriptions, generating Kubernetes observability CLIs in Go, refactoring Go code into Rust, and recommending reviewers for merge requests.
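
As a hedged illustration of the "Kubernetes observability CLI in Go" case, a generated starting point typically looks like this minimal client-go sketch, which lists pods and their phases; the kubeconfig handling is simplified and assumes a local ~/.kube/config:

```go
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the local kubeconfig (simplified: no in-cluster fallback).
	home, _ := os.UserHomeDir()
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(home, ".kube", "config"))
	if err != nil {
		log.Fatal(err)
	}

	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// List pods across all namespaces and print each pod's phase.
	pods, err := clientset.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}
```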

AI-Powered Operations: Incident Response, Observability, and Cost Optimization

Shifting focus from development to operations, let's explore how AI can revolutionize root cause analysis, observability, error tracking, performance, and cost optimization. A common pain point is a stalled CI/CD pipeline, as in the modified XKCD #303 "Compiling" comic.

Instead of manually sifting through job logs, AI can analyze them and provide actionable insights, even suggesting fixes. By refining prompts and engaging in conversations with the AI, developers can quickly diagnose and resolve issues, even receiving tips for optimization.
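
What "AI analyzes the logs" can look like in code: the sketch below posts a failing job log to an OpenAI-compatible chat completions endpoint. The model name and prompt wording are illustrative assumptions, not a prescribed setup:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
	"os"
)

func main() {
	// The failing CI/CD job log; in practice, redact secrets first (see below).
	jobLog, err := os.ReadFile("job.log")
	if err != nil {
		log.Fatal(err)
	}

	// Illustrative prompt: ask for the root cause plus a suggested fix.
	body, _ := json.Marshal(map[string]any{
		"model": "gpt-4o-mini", // assumption: any chat-capable model works here
		"messages": []map[string]string{
			{"role": "system", "content": "You are a CI/CD troubleshooting assistant."},
			{"role": "user", "content": "Explain the root cause of this failed job and propose a fix:\n" + string(jobLog)},
		},
	})

	req, _ := http.NewRequest("POST", "https://api.openai.com/v1/chat/completions", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("OPENAI_API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Choices []struct {
			Message struct {
				Content string `json:"content"`
			} `json:"message"`
		} `json:"choices"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	if len(out.Choices) > 0 {
		fmt.Println(out.Choices[0].Message.Content)
	}
}
```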

Security is crucial, so sensitive data like passwords and credentials must be filtered before analysis. A well-crafted prompt can instruct the AI to explain the root cause in a way any software engineer can understand, accelerating troubleshooting. This approach can significantly improve developer efficiency.
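
A simple redaction pass before any log leaves your environment might look like the following sketch; the patterns are illustrative examples, not an exhaustive filter:

```go
package main

import (
	"fmt"
	"regexp"
)

// redactSecrets masks common credential patterns before a log is sent to an
// external model. The patterns below are illustrative, not exhaustive.
func redactSecrets(logText string) string {
	patterns := []*regexp.Regexp{
		regexp.MustCompile(`(?i)(password|passwd|secret|token|api[_-]?key)\s*[=:]\s*\S+`),
		regexp.MustCompile(`(?i)authorization:\s*bearer\s+\S+`),
		regexp.MustCompile(`glpat-[0-9A-Za-z_-]+`), // GitLab personal access tokens
	}
	for _, p := range patterns {
		logText = p.ReplaceAllString(logText, "[REDACTED]")
	}
	return logText
}

func main() {
	sample := "connecting with password=hunter2 and Authorization: Bearer abc123"
	fmt.Println(redactSecrets(sample)) // both values come out masked
}
```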

Moving on to cloud-native deployments, Kubernetes failures can be a nightmare. However, tools like k8sgpt, a CNCF sandbox project, leverage LLMs to analyze deployments and offer suggestions from an SRE or efficiency perspective. It works with various LLMs, even running locally with Ollama on a MacBook.

Observability is another key aspect of operations, and AI can streamline log analysis. During incidents, AI can summarize vast amounts of log data to pinpoint root causes faster, aiding swift resolution. Honeycomb's integration of AI into their product exemplifies this approach, offering query assistants and other AI-powered features for complex observability tasks.

Finally, sustainability monitoring is gaining traction, with tools like Kepler using eBPF and machine learning to forecast power consumption in Kubernetes environments. This empowers organizations to optimize for cost and sustainability, reducing their carbon footprint.

These examples demonstrate how AI is transforming operations, boosting efficiency, and driving innovation in various areas.

AI in Security Workflows

Shifting our focus to security workflows, AI can be a powerful ally in understanding and mitigating vulnerabilities, enhancing security scanning, and addressing supply chain concerns. 67% of developers said a quarter or more of the code they work on is from open source libraries — but only 21% of organizations are currently using a software bill of materials (SBOM) to document the ingredients that make up their software components (source: GitLab 2024 Global DevSecOps Report).

Reflecting on a past security incident where a CVE in an open-source tool led to unintended consequences, it's clear that a deeper understanding of vulnerabilities and their long-term fixes is crucial.

AI can help by explaining vulnerabilities in simple terms, clarifying concepts like format string vulnerabilities, command injection, timing attacks, and buffer overflows. By understanding how malicious attackers exploit vulnerabilities, developers can implement effective fixes without introducing regressions or compromising code quality.
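
To ground this with one of those classes, here is a hedged Go illustration of command injection and the kind of fix an AI assistant typically proposes; the ping example is hypothetical, not taken from a specific scan:

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

// Vulnerable: user input is interpolated into a shell command line, so an
// input like "example.com; rm -rf /" injects a second command.
func pingVulnerable(host string) ([]byte, error) {
	return exec.Command("sh", "-c", "ping -c 1 "+host).CombinedOutput()
}

// Fixed: the input is passed as a discrete argument and no shell is involved,
// so shell metacharacters are never interpreted.
func pingFixed(host string) ([]byte, error) {
	return exec.Command("ping", "-c", "1", host).CombinedOutput()
}

func main() {
	out, err := pingFixed("example.com")
	if err != nil {
		log.Println(err)
	}
	fmt.Println(string(out))
}
```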

Using prompts like "Explain this vulnerability as a software security engineer", AI can analyze code snippets, provide examples of potential exploits, and suggest robust fixes. Moreover, AI can even generate merge requests or pull requests with proposed code changes, automating the remediation process and ensuring security scans and CI/CD pipelines validate the fixes.

This streamlined approach not only saves time but also reduces the risk of human error, making vulnerability management more efficient and effective.

AI Guardrails - Privacy, Data Security, Performance, Validation

Shifting our attention to AI guardrails, it's crucial to address privacy, data security, performance, and the overall suitability of AI in our workflows. First and foremost, data usage must be scrutinized. Your data, including source code, should not be used to train AI models due to potential leaks. Proprietary data should not be sent to external providers for analysis, especially in regulated environments like banks or government agencies.

Additionally, if AI features utilize chat history, data retention policies and deletion practices should be transparent. It's essential to have a public statement on data usage and privacy, and to inquire with your DevOps or AI provider about their policies.

Security is another paramount concern. Access to AI features and models should be controlled, with governance mechanisms in place to define who can use them. Furthermore, safeguards should be implemented to prevent sensitive content from being sent to prompts.

Validation of prompt responses is crucial to avoid exploitation. Clear guidelines and requirements for team members are necessary to ensure responsible and ethical use of AI tools.

Transparency is key. Documentation on AI usage, development, and updates should be readily available. Additionally, having a plan for addressing AI failures or model updates is essential to maintain productivity. An AI Transparency Center, like the one at GitLab, can provide valuable insights and information.

Performance monitoring is vital, whether you're using SaaS APIs, self-managed APIs, or local LLMs. Observability tools like OpenLLMetry and LangSmith can help track the behavior and performance of AI in your workflows.
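
Dedicated tools like OpenLLMetry and LangSmith ship their own SDKs; as a minimal, tool-agnostic sketch, you can record the same basic signals, such as latency, around each model call yourself (callModel is a hypothetical stand-in for your client):

```go
package main

import (
	"fmt"
	"log"
	"time"
)

// callModel stands in for whatever LLM client you use; the signature is an
// assumption for this sketch.
func callModel(prompt string) (string, error) {
	time.Sleep(120 * time.Millisecond) // simulate model latency
	return "response", nil
}

// timedCall wraps a model call with basic latency logging, the kind of signal
// tools like OpenLLMetry or LangSmith collect automatically.
func timedCall(prompt string) (string, error) {
	start := time.Now()
	resp, err := callModel(prompt)
	log.Printf("llm call: latency=%s prompt_chars=%d ok=%t",
		time.Since(start), len(prompt), err == nil)
	return resp, err
}

func main() {
	resp, _ := timedCall("Summarize this issue...")
	fmt.Println(resp)
}
```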

Finally, validation of LLMs is crucial due to the potential for hallucinations. Frameworks for testing and metrics for evaluation are essential to ensure the quality and reliability of AI-generated responses.

By diligently addressing these considerations, you can harness the power of AI while minimizing risks and ensuring responsible and effective integration into your workflows.

AI Impact

Transitioning from guardrails to impact, measuring the effects of AI on development workflows poses a new challenge. It's crucial to move beyond traditional developer productivity metrics and explore alternative approaches.

Consider incorporating DORA metrics alongside team feedback and satisfaction surveys. Additionally, monitor code quality, test coverage, and the frequency of failed CI/CD pipelines. Examine whether time to release decreases or remains consistent.

Building comprehensive dashboards to track these metrics is essential, whether through tools like Grafana or other platforms. By analyzing these insights, we can gain a deeper understanding of how AI is impacting our workflows and identify areas for improvement. While the path to accurate measurement is ongoing, continuous exploration and refinement of our methods will lead to a more comprehensive understanding of AI's impact on productivity and overall development outcomes.

AI Adoption

Integrating AI into workflows is promising, but it requires guardrails for security, privacy, and data usage, as well as validation of AI's impact. Beyond those basics, advanced techniques like Retrieval Augmented Generation (RAG) can enhance AI capabilities.

RAG addresses the limitations of LLMs trained on older data by incorporating external information sources like documents or knowledge bases. By loading these resources into a vector store and integrating them with the LLM, users can access up-to-date and specific information, even on topics like current Rust developments or the weather in London.
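
To illustrate the retrieve-then-augment flow, here is a deliberately toy Go sketch: it ranks handbook-style chunks by word overlap, whereas a real RAG system would use an embedding model and a vector store such as the ones LangChain integrates:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// score is a stand-in for vector similarity: it counts words shared between
// the query and a chunk. Real RAG embeds both and compares vectors instead.
func score(query, chunk string) int {
	words := map[string]bool{}
	for _, w := range strings.Fields(strings.ToLower(query)) {
		words[w] = true
	}
	n := 0
	for _, w := range strings.Fields(strings.ToLower(chunk)) {
		if words[w] {
			n++
		}
	}
	return n
}

func main() {
	// Knowledge base chunks, e.g. pages from an internal handbook.
	chunks := []string{
		"Deployments are rolled back with the release CLI.",
		"Incident reviews are held every Friday.",
		"Merge requests require two approvals.",
	}
	query := "How do I roll back a deployment?"

	// Retrieve: rank chunks by similarity to the query.
	sort.Slice(chunks, func(i, j int) bool { return score(query, chunks[i]) > score(query, chunks[j]) })

	// Augment: prepend the best chunk to the prompt sent to the LLM.
	prompt := "Context:\n" + chunks[0] + "\n\nQuestion: " + query
	fmt.Println(prompt) // hand this prompt to your model of choice
}
```

The retrieved chunk travels with the question, so the model answers from your own documents rather than from stale training data.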

RAG has practical applications, such as creating knowledge base chats for platforms like Discord or Slack. Even complex documents like the GitLab handbook can be loaded and queried effectively.

With tools like LangChain and local LLM providers like Ollama, building your own RAG-powered solutions is accessible. This empowers you to leverage proprietary data without relying on external SaaS providers, ensuring data security and privacy.
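
For example, a local Ollama instance exposes an HTTP API on localhost:11434 that you can call directly from Go; this sketch assumes a model such as llama3 has already been pulled:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Assumes `ollama pull llama3` has been run locally.
	body, _ := json.Marshal(map[string]any{
		"model":  "llama3",
		"prompt": "Summarize our deployment rollback policy.",
		"stream": false, // return one JSON object instead of a stream
	})

	resp, err := http.Post("http://localhost:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	var out struct {
		Response string `json:"response"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		log.Fatal(err)
	}
	fmt.Println(out.Response)
}
```

Because the model runs locally, prompts and proprietary context never leave the machine.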

AI/LLM Agents

Another area to watch is AI/LLM agents, which are rapidly evolving. They can dynamically gather data to answer complex questions, enhancing accuracy and efficiency. While the technology is still in development, it holds great potential for DevSecOps.

Additionally, consider custom prompts and models for specific use cases. Local LLMs trained on internal data offer security and privacy benefits. Explore proxy tuning as a cost-effective alternative to full retraining for customization. These advanced techniques can further optimize your DevSecOps workflows.

Conclusion

In conclusion, to ensure efficient DevSecOps implementation, there are several key considerations. Firstly, from a workflow perspective, repetitive tasks, low test coverage, and bugs can be addressed by utilizing code suggestions, generating tests, and employing chat prompts. Secondly, in the security domain, addressing security regressions that delay releases requires vulnerability explanations, resolution, and team knowledge building.

Finally, from an operations standpoint, developers spending excessive time on failing deployments can benefit from root cause analysis and tools like k8sgpt. By addressing these considerations, organizations can enhance their DevSecOps practices and streamline their software development and delivery processes.

You can access the public talk slides here, providing additional URLs and references.
