Software architects and system architects will not be replaced anytime soon by generative artificial intelligence (AI) or large language models (LLMs), Avraham Poupko said. They will be replaced by software architects who know how to leverage generative AI and LLMs, and just as importantly, know how NOT to use generative AI.
At the OOP conference, Avraham Poupko gave a talk about how he uses artificial intelligence in his daily work as an architect.
LLM stands for Large Language Model. Poupko argued that how humans and machines differ is a fundamental question, and that understanding this difference is critical to understanding LLMs.
Humans do not really have a language model; they have a world model, Poupko said. Humans have an understanding of the world that consists of knowing how objects in the world behave and how they interact with each other. That world model is the result of many experiences and interactions, Poupko explained:
When we use spoken and written language to communicate about the world, that is only one representation of the world model and a very limited representation at that.
LLMs, on the other hand, have only a word model, a language model, Poupko noted. An LLM knows only how words relate to each other. While that model often gives an astonishing illusion of understanding and comprehension, it is not real understanding; it is just a sequence of words, he said.
While it is true that a great deal of world knowledge can be captured in texts and in words, other parts of our world knowledge and understanding are based on experience and cannot be properly captured in words, Poupko said. This is particularly true in highly contextual situations, where the person present is aware of the context but not all of that context is verbal, he added.
Architects and large language models can work together to create better software architecture, Poupko said, which is somewhat similar to how humans and books can work together:
Humans will use books to learn and understand. A human will decide if the case at hand is similar to the case outlined in the book, and if so will apply the knowledge learned from the book in a context-appropriate way.
That is exactly what we do with an LLM, Poupko stated. We give it a prompt or a series of prompts and receive a response. The response is usually neither correct nor incorrect; rather, it is either useful or not useful (as the famous George Box quote goes, "All models are wrong, but some are useful"), he noted. When we say useful, we mean useful to humans. It is the human who decides whether the model is indeed useful and in what context, and then decides to apply it, he said.
Poupko mentioned that AI is most useful in tasks that involve written language. A case where he often uses LLMs is when he needs to read a requirements document and discover ambiguities, that is, cases where a single requirement can mean multiple things.
In the talk, he gave an example of an online system with the requirement:
The system should be able to handle a large number of users.
When asked to detect ambiguities, the LLM he was working with found two:
- The term "a large number of users" is not useful as it is not specific enough. How many users is a "large number"? 100? 1,000,000?
- The term "a large number of users" can mean either a large number of users registered in the system's database, or a large number of concurrent users. It might, of course, mean both.
Next, Poupko used the LLM to explore what information was needed to resolve these ambiguities.
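To make this workflow concrete, here is a minimal sketch of what such a prompt-and-review loop might look like in code. It assumes the OpenAI Python SDK and an illustrative model name; these details are not from the talk, and any chat-capable LLM would work similarly.

```python
# A minimal sketch of the two-step workflow described above, assuming the
# OpenAI Python SDK (any chat-capable LLM API would work similarly).
# The model name and prompt wording are illustrative, not from the talk.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

requirement = "The system should be able to handle a large number of users."

# Step 1: ask the LLM to surface ambiguities in the requirement.
detection = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": (
                "You review software requirements. List every way the "
                "given requirement could be read to mean different things."
            ),
        },
        {"role": "user", "content": requirement},
    ],
)
ambiguities = detection.choices[0].message.content
print(ambiguities)

# Step 2: ask what information would be needed to resolve the ambiguities.
resolution = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "For each of the following ambiguities found in a requirement, "
                "what information or stakeholder questions would resolve it?\n\n"
                + ambiguities
            ),
        },
    ],
)
print(resolution.choices[0].message.content)

# The output is a model, not an answer: the architect decides whether
# each suggestion is useful in context, and how to apply it.
```

Note that the code automates only the prompting; consistent with Poupko's point, the responses are raw material for the architect to judge, not decisions.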
AI does not do design work for me, Poupko said. The system knowledge, the domain knowledge, and the organizational knowledge needed to do effective architecture are such that AI can’t replace me, he concluded.