As AI assistants become increasingly prevalent in the software industry, one best practice I recommend is investing time and tooling in applying AI across the whole software development lifecycle (SDLC).
With that in mind, I have recently been exploring the use of LLMs to help with system design. Constructing a secure, scalable and modular architecture is one of the more creative and subjective aspects of software engineering. It is a skill engineers develop as they progress in their careers, with much of the art lying in knowing how to choose the right design tool from the many available.
This type of problem-solving lends itself well to the "double diamond", a phased model of creative work: first explore and refine the problem (Discover, Define), then explore and refine the solution (Develop, Deliver):
LLMs can be a helpful companion across each of the four phases. I find them particularly appropriate for sketching out ideas in the "Discover" and "Develop" phases, as these are exploratory and benefit from listing out possibilities, a strength of generative AI.
Of course, LLMs don't strictly "sketch", but instead generate sequences of tokens, so existing tools often focus on text-based planning and design. A good example of this is Claude Code's plan mode, which is a great tool for grounding code generation with bullet-pointed action plans.
I find such approaches pragmatic, but a little unsatisfying compared to visual schematics. So, in lieu of a whiteboard and pen, I have been exploring ways to use LLMs to generate architecture diagrams. Direct text-to-image generation models are terrible at this, but with the right setup LLMs can work well.
The best approaches I have used so far are:
- directly generating schematics as SVGs (which, being XML, are just text)
- generating schematics using an open standard such as Mermaid (see the snippet after this list)
- defining a simplified schema as an in-context learning problem, generating within that schema, and rendering out the result using custom software
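To make the Mermaid option concrete: since a diagram in this standard is just text, an LLM can emit something like the snippet below. The system shown is a hypothetical three-tier service, purely for illustration; any Mermaid renderer will turn it into a schematic.

```mermaid
%% A hypothetical three-tier service, in Mermaid flowchart syntax
flowchart LR
    client[Web client] --> gateway[API gateway]
    gateway --> orders[Order service]
    orders --> db[(Postgres)]
    orders --> billing[[Billing queue]]
```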
Each has its merits, but the third of these has proved the most effective in my experiments. I have ended up developing a growing suite of tools that give me an AI-assisted visual design canvas.
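To give a flavour of the third approach, here is a stripped-down sketch. The JSON schema and field names are illustrative rather than the ones in my actual tooling, and the `graphviz` Python package stands in for the custom rendering software:

```python
import json

from graphviz import Digraph  # pip install graphviz (also needs the Graphviz binaries)

# A deliberately tiny schema, shown to the model as an in-context example.
# Field names here are illustrative, not those of my real tooling.
EXAMPLE_DESIGN = """
{
  "nodes": [
    {"id": "web",    "label": "Web client",  "kind": "client"},
    {"id": "api",    "label": "API gateway", "kind": "service"},
    {"id": "orders", "label": "Order DB",    "kind": "datastore"}
  ],
  "edges": [
    {"from": "web", "to": "api",    "label": "HTTPS"},
    {"from": "api", "to": "orders", "label": "SQL"}
  ]
}
"""

# Rendering rules live in code, not in the prompt: the model decides what
# the boxes and arrows are, never how they are drawn.
KIND_SHAPES = {"client": "ellipse", "service": "box", "datastore": "cylinder"}

def render_design(design_json: str, out_name: str = "architecture") -> str:
    """Render a schema-conforming LLM response to an SVG schematic."""
    design = json.loads(design_json)
    graph = Digraph(graph_attr={"rankdir": "LR"})
    for node in design["nodes"]:
        graph.node(node["id"], node["label"],
                   shape=KIND_SHAPES.get(node.get("kind"), "box"))
    for edge in design["edges"]:
        graph.edge(edge["from"], edge["to"], label=edge.get("label", ""))
    return graph.render(out_name, format="svg")  # writes architecture.svg

# In practice the JSON would come back from the model; here we reuse the example.
print(render_design(EXAMPLE_DESIGN))
```

The appeal of this split is that the in-context learning problem stays small: the model only has to produce a handful of fields, while layout, styling and iconography are handled deterministically by the renderer.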
Despite some successful experiments, this remains a gap in AI tooling, and an important one. I hope we will continue to see investment in tools that aid human output across the SDLC, not just code generation. Software engineering has always been about more than writing code: system design, communication, trade-off analysis, understanding what to build in the first place. Our tooling should reflect that.
