The convergence of artificial intelligence (AI) systems with the agile world is having a disruptive effect on how we build software and the types of products that we build, said Aidan Casey. By combining machine learning and deep learning we can build applications that truly learn like humans. AI bias is a very serious concern, as AI systems are only as good as the data sets used to train them.
Aidan Casey, senior software engineering manager at Johnson Controls, will speak about how artificial intelligence capabilities will be used to augment and shape the agile world of tomorrow at aginext 2019. The conference will be held on March 21 - 22 in London, United Kingdom. According to the event website:
Aginext is two days dedicated to the future of Agile, Lean, CI/CD and DevOps transformations. Not intended for novices - although everyone is welcome! - Aginext is the first conference of its kind focused on helping seasoned agilists like you feel re-energized in your cultural and technical transformations.
In his talk Casey will explore how the next generation of AI-powered products sees developers moving away from coding nitty gritty business rules to configuring algorithms that are trained by real world data. This changes the landscape when it comes to building software by moving software developers higher up the value chain.
Casey will explain why software products are only as reliable as the data and algorithms that power them. Ensuring that all data streams are accurate and consumed correctly by AI systems will be a primary concern for teams working in software development. A general understanding of data science practices and the different algorithm types (Bayesian, logistic regression, etc.) is becoming a crucial skill, as Casey will show.
According to Casey, AI bias is a very serious and real concern for our industry, and something we haven't really grasped and regulated yet. All AI systems are the product of constructed algorithms and are susceptible to bias. Whether this happens intentionally or not, AI systems are only as good as the data sets that are used to train them.
InfoQ interviewed Aidan Casey about applying artificial intelligence in the agile world.
InfoQ: What has Artificial Intelligence brought us over the years?
Aidan Casey: Work began in the field of Artificial Intelligence in the 1950s with Alan Turing dreaming about creating machines that can think independently. Seventy years later I believe we are now entering the golden age of AI. To illustrate how far AI has advanced, it’s worth considering AI in terms of games where computers are pitted against humans. Gameplay has long been a chosen method for demonstrating the capabilities of thinking machines and for measuring how far AI has advanced.
The watershed came in 1997 when IBM's Deep Blue beat the reigning world chess champion Garry Kasparov in a six-game match. This is when the world really sat up and started to pay attention to the machines. By today's standards the AI capabilities of Deep Blue were pretty limited; it was essentially a fast computer built around a highly optimised tree-search algorithm. Fast forward to 2011, when IBM's Watson computer system won the television game show Jeopardy!, beating two of the most successful contestants of all time. During the contest Watson had no access to the internet; its cognitive reasoning and natural language processing capabilities surpassed those of its human opponents. In 2017 AlphaZero arrived, a truly general-purpose game-playing machine. AlphaZero mastered the games of chess, go and shogi in just three days by playing against itself using self-play reinforcement learning. We've come a long way in seventy years, and the machines are better than us at a lot of thinking games.
InfoQ: How can AI assist agile teams in building better products faster?
Casey: Machine learning can be applied to all sorts of use cases from fraud and anomaly detection to business intelligence, medical research, security and image recognition. Data scientists are the pioneers in our brave new world.
Natural Language Processing (NLP) is having an equally transformative effect on product development. Language translation, sentiment analysis, question answering, chatbots and conversational interfaces now blur the lines between where the software ends and where human experiences take over.
InfoQ: How does this impact the processes and practices used to develop software, and the skills that are needed?
Casey: The agile approach of adopting short development cycles with a goal of continually delivering customer value works equally well for cognitive and AI-centric projects. For data-intensive work such as building machine learning models, the CRISP-DM (Cross-Industry Standard Process for Data Mining) approach is well worth considering. With this approach, machine learning models are iteratively improved, refined and deployed until the desired result is achieved.
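The iterate-until-good-enough cycle at the heart of CRISP-DM can be sketched in a few lines. This is a minimal illustration, not a real pipeline: `train_model` and `evaluate` are hypothetical stand-ins for the modelling and evaluation phases, with toy behaviour so the loop runs end to end.

```python
# Minimal sketch of a CRISP-DM-style refinement loop.
# train_model and evaluate are invented placeholders, not a real library API.

def train_model(params):
    # Toy "model": quality improves as more features are engineered.
    return {"features": params["features"]}

def evaluate(model):
    # Pretend accuracy grows with feature count, capped at 0.95 (illustrative only).
    return min(0.5 + 0.1 * model["features"], 0.95)

def crisp_dm_loop(target_accuracy=0.9, max_iterations=10):
    params = {"features": 1}
    for iteration in range(1, max_iterations + 1):
        model = train_model(params)      # modelling phase
        accuracy = evaluate(model)       # evaluation phase
        if accuracy >= target_accuracy:
            return model, accuracy       # good enough: ready to deploy
        params["features"] += 1          # refine: revisit data prep / features
    return model, accuracy

model, accuracy = crisp_dm_loop()
print(f"stopped at accuracy {accuracy:.2f}")
```

The point is the shape of the loop, model, evaluate, refine, rather than the toy arithmetic inside it.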
Development processes such as backlog refinement and feature prioritisation are set to become even more data-driven. With the rise of smart self-learning software, teams will have deeper insights into their users' usage patterns and behaviours. Backlog prioritisation decisions will be more data-driven and come less from the heart.
InfoQ: How can we apply self-learning in software apps or systems today?
Casey: There are a growing number of customer service software products that let you combine your existing knowledge base support with chatbots to provide pre-canned and self-learning responses to customer queries. This is a great way to start experimenting with self-learning capabilities.
Recommendation systems, as popularised by Netflix's movie recommendation feature, have made significant advancements in recent years. These can be easily integrated into existing systems to add self-learning capabilities. For example, collaborative filtering systems collect and analyze users' behavioural information in the form of their feedback, ratings, preferences, and feature usage. Based on this information, these systems exploit similarities amongst users to generate recommendations.
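A user-based collaborative filter can be illustrated in a few dozen lines. The sketch below uses cosine similarity over a made-up ratings matrix (the users, items and ratings are invented for illustration); a production recommender would of course use a far larger dataset and a library built for the purpose.

```python
# Toy user-based collaborative filtering with cosine similarity.
# All users, items and ratings here are invented for illustration.
from math import sqrt

ratings = {
    "alice": {"matrix": 5, "inception": 4, "titanic": 1},
    "bob":   {"matrix": 4, "inception": 5, "titanic": 2},
    "carol": {"matrix": 1, "titanic": 5, "notebook": 4},
}

def cosine_similarity(a, b):
    # Similarity over the items both users have rated.
    common = set(a) & set(b)
    if not common:
        return 0.0
    dot = sum(a[i] * b[i] for i in common)
    norm_a = sqrt(sum(a[i] ** 2 for i in common))
    norm_b = sqrt(sum(b[i] ** 2 for i in common))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=1):
    # Score unseen items, weighted by how similar each other user is.
    scores = {}
    for other, other_ratings in ratings.items():
        if other == user:
            continue
        sim = cosine_similarity(ratings[user], other_ratings)
        for item, rating in other_ratings.items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice", ratings))  # the only item alice hasn't rated is "notebook"
```

Here "alice" is recommended the one item she hasn't rated yet, weighted by her similarity to the users who have rated it.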
InfoQ: How can AI operations bots help us to make troubleshooting easier?
Casey: The emergence of operational chatbots, as popularised by GitHub's open-source project Hubot, is changing the traditional operations paradigm. Work that previously happened offline is now being brought into chat rooms using communication tools such as Slack. By doing this, teams are unifying their communications to include everything from development and testing right through to production deployments and resolving system outages.
Another interesting trend is the emergence of NLP and machine learning based analysis of operational logs which is helping to make troubleshooting and root cause analysis much easier. Machine learning models can be trained to analyse event logs and categorise and classify the correct operational responses.
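One simple way to train a model to classify log events, as described above, is a multinomial naive Bayes text classifier. The sketch below is a hedged, self-contained illustration: the log lines and response labels are invented, and a real system would train on historical incident data rather than four hand-written examples.

```python
# Tiny multinomial naive Bayes classifier mapping log lines to an
# operational response category. Training data and labels are invented
# for illustration only.
from collections import Counter, defaultdict
from math import log

TRAINING = [
    ("connection refused to db host", "restart_database"),
    ("db timeout while connecting", "restart_database"),
    ("disk usage above 90 percent", "expand_storage"),
    ("no space left on device", "expand_storage"),
]

def train(examples):
    word_counts = defaultdict(Counter)  # per-label word frequencies
    label_counts = Counter()
    vocab = set()
    for text, label in examples:
        label_counts[label] += 1
        for word in text.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, label_counts, vocab

def classify(text, model):
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_score = None, float("-inf")
    for label in label_counts:
        # log prior + log likelihoods with add-one (Laplace) smoothing
        score = log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.split():
            score += log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best, best_score = label, score
    return best

model = train(TRAINING)
print(classify("db connection refused", model))  # → restart_database
```

Even this toy version shows the idea: once event logs are labelled with the responses that resolved them, new log lines can be routed to the most likely remediation automatically.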
InfoQ: What ethical issues do we need to consider when applying AI?
Casey: In society, parents constantly provide feedback and guidance to their children to raise them to be good members of society with a well-honed moral compass. Similarly, when designing AI systems we need to train them to ensure that the way data is consumed and processed is ethical and fair.
As an industry there is a need for more regulation to ensure that we do the right thing. The recent Cambridge Analytica controversy highlights the need for closer scrutiny - just because you can build it doesn’t mean you should.
There have been some high profile cases that highlight AI bias issues. In 2017, University of Virginia computer science professor Vicente Ordóñez noticed a pattern in some of the guesses made by image-recognition software he was building, whereby pictures of people in the kitchen were automatically tagged as women. The root cause turned out to be the COCO dataset, an open-source dataset that had been used to train the image-recognition model. In this dataset most kitchen objects such as spoons and cups were pictured alongside women, and this introduced the bias.
Unfairly trained AI systems can reinforce an existing bias; this is one of the biggest challenges that we face as an industry. Diversity is the key to eliminating AI bias - we need to employ diverse and fair datasets to train AI systems. Equally, we need diversity in the development teams that build our AI systems to bring balance.