
6 New Ways AI Is Transforming Design & UX

LAST UPDATED: 23 September 2024

It’s no secret that AI is advancing rapidly. Back in the good ol’ days I ran a “web design agency”, and a typical UX process went something like this:

  1. User research
  2. Information architecture (sitemap)
  3. Wireframing 
  4. Prototyping
  5. Visual design
  6. Usability testing

With the new capabilities of generative AI, robots are coming for these design and UX tasks—and in some surprising ways.

We demonstrated some of these in our Digitalks webinar presentation, “Advanced Use Cases of Gen AI to Transform CX”, which you can watch below:


The new (synthetic) user research

Some fascinating research from the past few weeks has demonstrated the effectiveness of using Large Language Models (like ChatGPT) to create “synthetic users” for market and user research.

You can now run large-scale research studies in a fraction of the time, at a fraction of the cost, and a raft of new startups are offering services in this space.
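
As a rough sketch of how this works under the hood (not any particular vendor’s product), here’s how you might prompt an LLM to role-play research participants. This assumes the OpenAI Python SDK; the personas and interview question are invented for the example.

```python
# Minimal sketch of "synthetic user" research: give an LLM a persona,
# then ask it the same questions you'd ask a real participant.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical personas, invented for this example
personas = [
    "a 34-year-old nurse who shops online weekly, mostly on her phone",
    "a 67-year-old retiree who finds most websites confusing",
]

question = "What would stop you from completing checkout on a grocery website?"

for persona in personas:
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model works
        messages=[
            {"role": "system",
             "content": f"You are {persona}. Answer interview questions "
                        "in character, honestly and concretely."},
            {"role": "user", "content": question},
        ],
    )
    print(persona, "->", response.choices[0].message.content)
```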

The new (generated) sitemap

Most website projects are a redesign of an existing site. That’s why Relume (an Australian AI app for website design) has just launched a new feature that lets you import from a URL.

Simply provide a URL, and the magic AI robots will generate a sitemap in Relume based on the existing site. From there you can use AI to add to or edit the sitemap before generating wireframes.
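
Relume hasn’t published how its importer works, but the underlying idea is simple enough to sketch: crawl the existing site’s internal links and turn them into a tree. Here’s a toy, one-level version in Python using requests and BeautifulSoup, purely as an illustration:

```python
# Toy sketch of deriving a sitemap from an existing site: fetch a page,
# collect its internal links, and print them as a flat list of paths.
# An illustration of the idea only, not how Relume actually does it.
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin, urlparse

def one_level_sitemap(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    site = urlparse(url).netloc
    pages = set()
    for a in soup.find_all("a", href=True):
        full = urljoin(url, a["href"])
        parsed = urlparse(full)
        if parsed.netloc == site:  # keep internal links only
            pages.add(parsed.path or "/")
    return sorted(pages)

for path in one_level_sitemap("https://example.com"):
    print(path)
```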

The new (AI robot-assisted) wireframes

Every designer has experimented with image-generation tools like Midjourney and Stable Diffusion. Until now, these have been great at creating visual assets for a concept design, but pretty useless for designing a wireframe.

New text-to-image models such as Flux and Leonardo.AI’s Phoenix model are similar to Midjourney, but with two incredible upgrades: 1) amazing “prompt adherence”, and 2) a new benchmark in generating legible text within images.

You can now use text-to-image to generate complex layouts that include text, such as web pages and mobile apps. For example, this image of a wireframe was generated from nothing but a text prompt in Flux:
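
If you’d rather script this than click around a web UI, Flux is also available through hosted APIs. Here’s a minimal sketch using the Replicate Python client; the prompt is invented, and the model slug is Replicate’s hosted Flux Schnell at the time of writing:

```python
# Minimal sketch: generating a wireframe image from a text prompt with Flux.
# Assumes the Replicate Python client and a REPLICATE_API_TOKEN in the
# environment. The prompt below is made up for this example.
import replicate

prompt = (
    "Clean black-and-white wireframe of a SaaS homepage: "
    "navigation bar, hero with headline and call-to-action button, "
    "three feature cards, footer. Flat, annotated, UX sketch style."
)

output = replicate.run(
    "black-forest-labs/flux-schnell",
    input={"prompt": prompt},
)
print(output)  # URL(s) of the generated image(s)
```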

The new (AI-coded) prototypes: Artifacts

About a month ago, Anthropic’s Claude (a competitor to ChatGPT) released Artifacts, which will write the code for a prototype and then preview the working prototype in a right-hand panel. It’s incredible.

I uploaded the wireframe to Claude and prompted it to “Build this into a working functional prototype with interactive buttons”, and about 15 seconds later it had written the code for a clickable wireframe and previewed it on the page. Once built, Artifacts can be shared with others, who can then iterate on them.
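
The same trick works outside the chat UI: the Anthropic API accepts images, so the wireframe-to-prototype step can be scripted. A sketch, with the model name and file path as placeholders:

```python
# Sketch of the same workflow via the Anthropic API: send a wireframe
# image plus a prompt, get back prototype code. The model name and the
# file path are placeholders for this example.
import base64
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("wireframe.png", "rb") as f:
    image_data = base64.standard_b64encode(f.read()).decode("utf-8")

message = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": [
            {"type": "image",
             "source": {"type": "base64",
                        "media_type": "image/png",
                        "data": image_data}},
            {"type": "text",
             "text": "Build this into a working functional prototype "
                     "with interactive buttons, as a single HTML file."},
        ],
    }],
)
print(message.content[0].text)  # the generated prototype code
```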

The new (generated) visual design

Returning to Flux: using a variation of the same prompt as the wireframe, I then generated a homepage design concept from a simple text prompt.

Again, thanks to the new text-generation and prompt-adherence capabilities, it’s possible to create (nearly) realistic mockups of site designs. By altering the text prompt, you alter the design. Or, with some image apps (like Leonardo.AI), you can select an area and modify it.

The new (AI machine vision) usability testing

Another capability of Large Language Models like Claude, Gemini and ChatGPT that is improving all the time is machine vision: the ability to upload an image and have the LLM interpret it.

This can be used by our robotic UX designer to check a wireframe or a design against best-practice usability or accessibility guidelines. If you want to be really lazy, it can even tell you what those guidelines are.

I asked Claude “What are the Australian Government guidelines for accessibility of webpages?” and it happily answered with what seemed to be a good response. I then uploaded the concept design into Claude and asked it to check the design against the guidelines. It (unsurprisingly) found a lot of issues with my AI-generated design!
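
Machine vision aside, some guideline checks are plain arithmetic. WCAG’s colour-contrast rule, for example, can be computed directly from the spec’s formula. A small sketch (the sample colours are made up):

```python
# Sketch: checking WCAG 2.x colour contrast programmatically. The formula
# is from the WCAG spec; the sample colours are invented for the example.
def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)),
                    reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # grey text on white
print(f"{ratio:.2f}:1 -> "
      f"{'passes' if ratio >= 4.5 else 'fails'} WCAG AA body text")
```

Mid-grey text on white, as it happens, just misses the 4.5:1 bar for body text.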

Summary

This post is deliberately tongue-in-cheek (robots can’t do all of this on their own), and the current state of each of these capabilities could optimistically be called ‘bleeding edge’. In reality, they are all a little flaky and far from perfect. However, they are a) improving quickly, and b) only useful in the hands of a skilled designer.

All of the major no-code tools are rapidly advancing their use of AI, but this is generally not the space that professional designers play in. There will always be room for great design, by great designers, and soon this will be with the help of AI.