Beyond the prompt

For tools built on systems with vast capabilities, the interaction model of AI is remarkably narrow. It expects users to already know what’s possible, how to ask for it, and how to guide a model through ambiguity. That places a high cognitive load on beginners and creates an inefficient loop for experienced users. Worse, it discourages exploration.

If AI is magic, it's assumed you're a magician.

Start with structure

To learn a tool’s language and how to frame prompts, users need good examples. Offer a selection of relevant starting points or visual templates based on context. Visual tiles, autocomplete patterns, and task-based categories all help users understand the boundaries and capabilities of the tool. Structure accelerates creative input and improves outcomes.
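
As a rough illustration, starting points can be modelled as a small set of task-based templates filtered by what the user is doing. The categories, fields, and pickStarters helper below are hypothetical, a sketch of the pattern rather than any particular product’s API:

```typescript
// Hypothetical sketch: task-based starting points surfaced by context.
type StarterTemplate = {
  category: "summarise" | "draft" | "refactor" | "brainstorm";
  label: string;  // shown on the visual tile
  prompt: string; // pre-filled, editable prompt text
};

const starters: StarterTemplate[] = [
  { category: "summarise", label: "Summarise this page", prompt: "Summarise the selected text in three bullet points." },
  { category: "draft",     label: "Draft an outline",    prompt: "Draft an outline for a post about..." },
  { category: "refactor",  label: "Simplify this code",  prompt: "Rewrite the selected function for readability." },
];

// Surface only the templates relevant to the user's current task,
// so the tiles teach what the tool can do right here, right now.
function pickStarters(activeTask: StarterTemplate["category"]): StarterTemplate[] {
  return starters.filter((s) => s.category === activeTask);
}
```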

Reduce the friction required to get started without limiting flexibility.

Keep the context

Integration is improving, but AI tools are often still siloed from where the real work happens. Text generation lives in a detached chat. Code suggestions live in sidebars. Image tools are often standalone applications that require upload and download steps.

AI is more helpful when embedded directly in the user’s workflow. A writing assistant inside the document. A design assistant operating within the layout tool, informed by the active selection. A code refiner working inline with, or adjacent to, the snippet being modified.

Embedding also enables contextual awareness. When the system can “see” the user’s current task, recent changes, or relevant data, its suggestions can be more specific, targeted, and useful.
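
As a sketch of that idea, the request sent to the model can carry workflow context alongside the prompt itself. All of the field names below are illustrative assumptions, not a real API:

```typescript
// Hypothetical shape: the assistant sees the task, not just the prompt.
type EditorContext = {
  activeSelection: string; // what the user currently has selected
  recentChanges: string[]; // e.g. the last few edits
  documentType: string;    // "design-file", "markdown", "typescript", ...
};

type AssistantRequest = {
  prompt: string;
  context: EditorContext; // lets suggestions target the task at hand
};

// Bundle the user's ask with what the system can "see" of their work.
function buildRequest(prompt: string, ctx: EditorContext): AssistantRequest {
  return { prompt, context: ctx };
}
```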

Give feedback

Many AI tools offer little to no visibility into what the system is doing. A spinner might indicate that the model is “thinking,” but it provides no insight into what information it’s processing, what context it’s drawing from, or how confident it is in its output.

Designing with transparency in mind improves trust and usability. Visual cues, such as highlighting the text being referenced, showing model memory, or labelling outputs with confidence levels, give users clearer feedback and reduce reliance on guesswork.

These small affordances help users build a mental model of the system’s behaviour, making it easier to predict and guide its output.
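
One way to make such cues concrete is to return outputs with provenance and confidence metadata attached, so the interface has something to render beyond bare text. The shape below is a hypothetical sketch:

```typescript
// Hypothetical annotation: what the model drew on and how sure it is.
type SourceSpan = { start: number; end: number }; // text the model referenced

type AnnotatedOutput = {
  text: string;
  confidence: "low" | "medium" | "high"; // shown as a label on the output
  referencedSpans: SourceSpan[];         // highlighted in the source document
  contextUsed: string[];                 // e.g. ["current selection", "file history"]
};

// The UI can highlight referencedSpans and badge the output with its
// confidence level instead of presenting the result as unqualified fact.
function renderBadge(output: AnnotatedOutput): string {
  return `${output.text} [confidence: ${output.confidence}]`;
}
```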

Our interfaces are losing the rich interactions and playfulness that once inspired exploration. AI doesn't mean users should stop receiving feedback on the actions they take, and that feedback doesn't have to arrive as text alone: colour, motion, sound, cues, transitions, and inline hints all still have a place.

Adapt to users

Progressive disclosure has always been a dream for complex tools: interfaces that gradually adapt over time, not just in capability but in how they interact with individual users. Yet most tools today treat every session as a fresh start, even though user preferences, tone, and working patterns often remain consistent.

Interfaces should retain and reflect individual user behaviours. If a user consistently prefers short summaries, formal tone, or visual outputs, those preferences should inform defaults and surface relevant options earlier. Feedback loops such as quick rephrasing tools or “suggested next steps” based on prior edits can also help the tool become a more responsive collaborator.
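
A minimal sketch of that loop, assuming a simple preference store and an illustrative heuristic for inferring preferences from a user's edits:

```typescript
// Hypothetical preference store; field names and the inferFromEdit
// heuristic are illustrative assumptions, not a real system.
type UserPreferences = {
  summaryLength: "short" | "long";
  tone: "formal" | "casual";
  preferredFormat: "text" | "visual";
};

// Start from neutral defaults, then update as behaviour accumulates.
let prefs: UserPreferences = {
  summaryLength: "long",
  tone: "formal",
  preferredFormat: "text",
};

// If the user repeatedly cuts outputs down, lean towards short summaries.
function inferFromEdit(original: string, edited: string): void {
  if (edited.length < original.length * 0.5) {
    prefs = { ...prefs, summaryLength: "short" };
  }
}

// Defaults for the next request reflect what the user has shown they want.
function defaultsForNextRequest(): UserPreferences {
  return prefs;
}
```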

This shifts the dynamic from one-size-fits-all to one that’s informed by your habits and evolves with you.