A new visual language for an AI-driven identity

Client
Design & AI Symposium 2024
Deliverables
Data-driven branding
Interactive tool

The challenge

As AI becomes ever more ubiquitous, it is increasingly difficult to steer clear of visual clichés when discussing these tools. When CLEVER°FRANKE was asked to design the visual identity of the upcoming Design & AI Symposium hosted by Eindhoven University of Technology, we wanted to find a unique way to represent AI in our identity design. Our aim was to develop a new visual language for AI, using real data to reveal and explain the complexity of modern AI systems.

Value delivered

We created an AI-driven generative identity that not only provides the symposium with unified visuals, but also captivates and informs its diverse audience. Based on a dataset of academic papers on design and AI, the visual language we developed transforms the complexity of AI tools like large language models into a compelling visual narrative that elevates the entire conference experience.

The Design & AI Symposium is a yearly event organized by the departments of industrial design at Eindhoven University of Technology and Delft University of Technology. At the symposium, leading academics, designers and engineers meet to share and explore new perspectives on the latest developments in AI in their respective fields.

CLEVER°FRANKE was asked to design a visual identity that reflected the symposium’s unique perspective. By developing custom software and tools to visualize the underlying complexity of large language models, we created a generative identity that presents AI in a new visual way.

The key aspects of this project are that we:

  • used vector embeddings as raw material,
  • created a generative identity that can be applied across multiple media, and
  • reimagined what an identity could consist of, adding a data-driven layer to give it more meaning.

A key element that makes modern AI systems so capable is the “embedding”. Embeddings are numerical representations of digital content: objects like text, images, or audio are translated into mathematical form according to certain characteristics they have, or categories they belong to. A large language model (LLM) can then determine how similar two pieces of content are by calculating how close or far apart their embeddings are positioned in the model’s “embedding space”.
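
To make “close or far apart” concrete: below is a minimal Python sketch (with made-up vectors, not the symposium dataset or our production code) of how the similarity of two embeddings is commonly measured, as the cosine of the angle between the vectors.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity of two embedding vectors: ~1.0 means very similar, ~0.0 means unrelated."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(42)
emb_design = rng.normal(size=1536)                       # a made-up 1536-dimensional embedding
emb_ai = emb_design + rng.normal(scale=0.1, size=1536)   # a nearby point in embedding space
emb_unrelated = rng.normal(size=1536)                    # a point far away from both

print(cosine_similarity(emb_design, emb_ai))         # close to 1.0: semantically similar
print(cosine_similarity(emb_design, emb_unrelated))  # close to 0.0: unrelated content
```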

Visualizing the embedding space provides us with a glimpse of how AI models process the data they work with; it reveals an in-between state of how prompted input is transformed into generated output. We asked: what would the embedding space of the Design & AI Symposium look like? By embedding relevant scientific publications, we created a dataset on which the visual identity could be built.

The organizers of the symposium provided us with a collection of documents and papers about design and AI. We split these documents into fragments and embedded them, creating 5500+ embeddings with 1536 dimensions each.

To create an embedding, we had a neural network analyze 2906 fragments of text from academic papers on design and AI and score each fragment on certain properties, which are called features. Depending on the model that is used, an embedding can consist of hundreds or thousands of different features.
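
As an illustration only (the file name, chunk size and text-embedding-3-small model below are assumptions, not necessarily what we used, though that model does produce 1536-dimensional vectors), fragmenting and embedding a paper could look like this:

```python
from openai import OpenAI  # assumption: an OpenAI-style embedding API

client = OpenAI()

def chunk(text: str, size: int = 800) -> list[str]:
    """Naive fixed-length splitting of a paper into fragments to embed."""
    return [text[i:i + size] for i in range(0, len(text), size)]

def embed_fragments(fragments: list[str]) -> list[list[float]]:
    # text-embedding-3-small returns 1536-dimensional vectors, matching the
    # dimensionality mentioned above; each number is the score for one feature.
    response = client.embeddings.create(model="text-embedding-3-small", input=fragments)
    return [item.embedding for item in response.data]

paper_text = open("design_and_ai_paper.txt").read()   # hypothetical input file
vectors = embed_fragments(chunk(paper_text))
print(len(vectors), "fragments x", len(vectors[0]), "dimensions")
```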

In essence, embeddings are vectors: if we think of each feature as a dimension, the score of that feature allows us to position the embedding within that dimension. Putting all these dimensions together, we constructed the embedding space. Embeddings that are semantically similar have similar scores across all features, and are positioned closely together in the embedding space.

For humans, it’s impossible to picture a space made up of hundreds of dimensions. Using a mathematical method called dimensionality reduction, we projected this high-dimensional space onto three dimensions, making it possible to visualize the embedding space in a way that we can understand.
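
As a sketch of this step (using UMAP from the umap-learn package as one possible technique, and random stand-in data rather than the real embeddings), the projection to three dimensions can be done like this:

```python
import numpy as np
import umap  # umap-learn: one of several possible dimensionality-reduction methods

# One row per text fragment, one column per feature (random stand-in for the real matrix).
embeddings = np.random.default_rng(0).normal(size=(5500, 1536))

# Project the high-dimensional embedding space onto three dimensions.
# Cosine distance mirrors how similarity is measured in the original space.
reducer = umap.UMAP(n_components=3, metric="cosine", random_state=0)
coords_3d = reducer.fit_transform(embeddings)   # shape: (5500, 3)
```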

Once we had this three-dimensional space, we could visually map relationships and similarities between the embeddings. We envisioned what this space would look like if we were to inspect it with a microscope. The overlapping circles and varying colors depict clusters of closely related research, offering a unique perspective on the interconnectedness of academic knowledge. This visualization helps in understanding complex data relationships and reveals hidden patterns, creating a new visual language for AI.
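
How the clusters are determined is an implementation detail; one straightforward option (shown here as an assumption, not necessarily what drives our visuals) is to cluster the reduced coordinates with k-means and let the cluster label pick the color of each circle:

```python
from sklearn.cluster import KMeans

# coords_3d: the (n, 3) array produced by the dimensionality-reduction step above.
kmeans = KMeans(n_clusters=12, n_init=10, random_state=0)
labels = kmeans.fit_predict(coords_3d)

# Hypothetical palette: each cluster of closely related research gets its own color.
palette = ["#1f77b4", "#ff7f0e", "#2ca02c", "#d62728", "#9467bd", "#8c564b",
           "#e377c2", "#7f7f7f", "#bcbd22", "#17becf", "#aec7e8", "#ffbb78"]
colors = [palette[label % len(palette)] for label in labels]
```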

A visual representation of the embedding space, with embeddings scattered in clusters; each circle represents a single embedding

Colorful and complex, embodying the high-dimensional nature of the embedding space

Highlighting relevant words and textual elements, to illustrate that the identity is driven by actual data

We reinforced the high-dimensional feeling of the visuals by creating grids consisting of many different perspectives.

Based on the desired dimensions of the visual, a grid is generated in which each cell is a distinct view into the embedding space. The grid offers many different perspectives into the embedding space at once, emphasizing its high-dimensional nature.
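
As a rough illustration of the grid idea, continuing from the sketches above (matplotlib stands in for the custom software we built; the cell count and camera angles are arbitrary), each cell below renders the same point cloud from a different viewpoint:

```python
import matplotlib.pyplot as plt

def render_grid(coords_3d, colors, rows=3, cols=4):
    """Render a grid in which each cell is a distinct view into the same embedding space."""
    fig, axes = plt.subplots(rows, cols, figsize=(cols * 3, rows * 3),
                             subplot_kw={"projection": "3d"})
    for i, ax in enumerate(axes.flat):
        # Vary the camera angle per cell so every view offers a different perspective.
        ax.view_init(elev=10 + 15 * (i // cols), azim=30 * i)
        ax.scatter(coords_3d[:, 0], coords_3d[:, 1], coords_3d[:, 2], c=colors, s=4)
        ax.set_axis_off()
    fig.savefig("grid.png", dpi=200)

render_grid(coords_3d, colors)
```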

Each visualization contains a main cell, in which certain embeddings are annotated with their position in the embedding space.

A title for the visual is generated based on the original text content of one of the visualized embeddings. The embedding on which the title of the poster is based contains a richer annotation, including the title of the publication and where it was published.

We built a custom visualization tool to quickly generate new expressions of the visual identity. Using the dataset of embeddings, the tool can generate full layouts, single cells or grids in multiple color schemes, with or without annotations. This way, the organizers of the symposium are not limited to a handful of poster designs, but rather can generate their own visuals for whatever format they need.
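
The tool itself is not public, so the following is only a hypothetical sketch of what its input might look like: a small configuration object describing the desired format, layout, color scheme and annotations, which the generator turns into a finished visual.

```python
from dataclasses import dataclass

@dataclass
class VisualConfig:
    # All fields are hypothetical; they mirror the options described above.
    width_px: int
    height_px: int
    layout: str = "grid"           # "grid", "single_cell" or "full_layout"
    color_scheme: str = "warm"
    annotations: bool = True
    seed: int | None = None        # fixing a seed makes a generated visual reproducible

poster = VisualConfig(width_px=3508, height_px=4961, color_scheme="cool", seed=7)
banner = VisualConfig(width_px=1920, height_px=600, layout="single_cell", annotations=False)
```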

Because the visual identity is based on a generative system, the dataset can be visualized in endless variations, enabling the creation of many unique expressions.

With this AI-driven generative identity, we explored the role that AI can play in a design process to create an identity. By leveraging AI data and vector embeddings as creative materials, we were able to craft a distinctive and informative identity for the symposium. Through our work we aim to demystify AI for our audience, and illustrate its potential both as a tool and as inspiration.

