A new visual language for an AI-driven identity

Client
Eindhoven University of Technology
Deliverables
Data-driven branding
Interactive tool

The challenge

As AI grows more and more popular, it is increasingly challenging to steer clear of visual clichés when discussing these tools. When CLEVER°FRANKE was asked to design the visual identity of the upcoming Design & AI Symposium hosted by Eindhoven University of Technology, we wanted to find a unique way to represent AI in our identity design. Our aim was to develop a new visual language for AI, using real data to reveal and explain the complexity of modern AI systems.

Value delivered

We created an AI-driven generative identity that not only provides the symposium with unified visuals, but also captivates and informs its diverse audience. Based on a dataset of academic papers on design and AI, the visual language we developed transforms the complexity of AI tools like large language models into a compelling visual narrative that elevates the entire conference experience.


Background

The Design & AI Symposium is a yearly event organized by the departments of industrial design at Eindhoven University of Technology and Delft University of Technology. At the symposium, leading academics, designers and engineers meet to share and explore new perspectives on the latest developments in AI in their respective fields.

CLEVER°FRANKE was asked to design a visual identity that reflected the symposium’s unique perspective. By developing custom software and tools to visualize the underlying complexity of large language models, we created a generative identity that presents AI in a new visual way.

Our process

The key aspects of this project are that we:

  • used vector embeddings as raw material,
  • created a generative identity that can be applied to multiple mediums, and
  • reimagined what an identity could consist of, adding a data-driven layer to give it more meaning.

Data analysis

Text embeddings

A key element that makes modern AI systems so capable is the “embedding”. Embeddings are numerical representations of digital content: objects like text, images, or audio are translated into mathematical forms according to certain characteristics they have, or categories they belong to. A large language model (LLM) can then determine how similar two pieces of content are by calculating how close or far apart their embeddings are positioned in the model’s “embedding space”.
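As a toy illustration of this distance idea, the similarity of two vectors is often measured with cosine similarity. The three-feature vectors below are made up for the example (real embeddings have hundreds or thousands of features):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors (1 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-feature embeddings for three text fragments:
design  = [0.9, 0.1, 0.3]
ux      = [0.8, 0.2, 0.4]
cooking = [0.1, 0.9, 0.0]

print(cosine_similarity(design, ux))       # close to 1: related topics
print(cosine_similarity(design, cooking))  # much lower: unrelated topics
```

Fragments about related topics score similarly across features, so their vectors point in similar directions and end up close together in the embedding space.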

Visualizing the embedding space provides us with a glimpse of how AI models process the data they work with; it reveals an in-between state of how prompted input is transformed into generated output. We asked: what would the embedding space of the Design & AI Symposium look like? By embedding relevant scientific publications, we created a dataset on which the visual identity could be built.

Design & AI embeddings

The organizers of the symposium provided us with a collection of documents and papers about design and AI. We split up and embedded these documents, creating 5500+ embeddings with 1536 dimensions each.

To create the embeddings, we had a neural network analyze 2906 fragments of text from academic papers on design and AI and score each one on certain properties, which are called features. Depending on the model used, an embedding can consist of hundreds or thousands of different features.

In essence, embeddings are vectors: if we think of each feature as a dimension, the score of that feature allows us to position the embedding within that dimension. Putting all these dimensions together, we constructed the embedding space. Embeddings that are semantically similar have similar scores across all features, and are positioned closely together in the embedding space.

For humans, it’s impossible to think of a space made of hundreds of dimensions. Using a mathematical method called dimensionality reduction, we projected this high-dimensional space onto three dimensions, making it possible to visualize the embedding space in a way that we can understand.

Visual style

Once we had this three-dimensional space, we could visually map relationships and similarities between the embeddings. We envisioned what this space would look like if we were to inspect it with a microscope. The overlapping circles and varying colors depict clusters of closely related research, offering a unique perspective on the interconnectedness of academic knowledge. This visualization helps in understanding complex data relationships and reveals hidden patterns, creating a new visual language for AI.

Embedding space

The embedding space visualized as scattered clusters, with each circle representing a single embedding

Bold visuals

Colorful and complex, embodying the high-dimensional nature of the embedding space

Detailed callouts

Highlighting relevant words and textual elements, to illustrate that the identity is driven by actual data

The grid

We reinforced the high-dimensional feeling of the visuals by creating grids consisting of many different perspectives.

Grid layout

Based on the desired dimensions of the visual, a grid is generated where each cell is a distinct view into the embedding space. The grid offers many different perspectives into the embedding space at once, emphasizing their high-dimensional nature.

Annotations

Each visualization contains a main cell, in which certain embeddings are annotated with their position in the embedding space.

Generated title

A title for the visual is generated based on the original text content of one of the visualized embeddings. The embedding on which the title of the poster is based contains a richer annotation, including the title of the publication and where it was published.

Generative tool

We built a custom visualization tool to quickly generate new expressions of the visual identity. Using the dataset of embeddings, the tool can generate full layouts, single cells or grids in multiple color schemes, with or without annotations. This way, the organizers of the symposium are not limited to a handful of poster designs, but rather can generate their own visuals for whatever format they need.


Technical explanation

Input data

Academic papers on design and AI

Our dataset consists of 40 academic publications on design and AI. The papers were redacted to omit information such as titles, authors, and references, leaving only the content of the publications.

Text extraction (AI)
  • Optical character recognition (OCR)
  • Tesseract (Python)

Using OCR, the text content of each publication was extracted as a single, continuous string, regardless of file format and layout.
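A rough sketch of this step, assuming the `pytesseract` wrapper around Tesseract (the actual extraction code is not public, and running OCR requires a local Tesseract install):

```python
def join_pages(pages):
    """Merge per-page OCR output into one continuous string."""
    return " ".join(page.strip() for page in pages)

def extract_text(image_paths):
    # Requires the `pytesseract` and `Pillow` packages plus a Tesseract install.
    import pytesseract
    from PIL import Image
    return join_pages(
        pytesseract.image_to_string(Image.open(path)) for path in image_paths
    )
```

Concatenating every page into one string discards layout, which is fine here: only the running text matters for the embeddings.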

Text splitting

Langchain (Python)

The textual content of each publication was split into smaller chunks with a fixed size of 200 characters each, resulting in a total of 2906 chunks of text.


The fixed size of 200 characters created sufficient variety in content between the chunks, while ensuring that each chunk contained enough text to be semantically relevant.
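The project used Langchain for this step; functionally, fixed-size splitting without overlap comes down to something like this minimal stand-in:

```python
def split_into_chunks(text: str, size: int = 200) -> list[str]:
    """Split a text into consecutive chunks of at most `size` characters."""
    return [text[i:i + size] for i in range(0, len(text), size)]

paper_text = "Design and AI " * 50  # 700 characters of stand-in text
chunks = split_into_chunks(paper_text)
print(len(chunks), len(chunks[0]))  # 4 chunks; the first is 200 characters
```

Real splitters often add overlap between chunks or break on sentence boundaries; the source only specifies a fixed 200-character size, so the sketch does the same.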

Labeling (AI)

GPT-3.5 Turbo (OpenAI)

The large language model GPT-3.5 Turbo was used to process the textual content of each chunk and generate a new label that summarized the essence of its content.
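The labeling call might look roughly like the sketch below with OpenAI's current Python client. The prompt wording and both helper functions are assumptions for illustration, not the studio's actual code, and calling the API requires an OpenAI key:

```python
def build_label_prompt(chunk: str) -> str:
    # Assumed prompt wording; the actual instruction used is not public.
    return (
        "Summarize the essence of the following text fragment "
        "as a short label of a few words:\n\n" + chunk
    )

def label_chunk(chunk: str) -> str:
    # Requires the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_label_prompt(chunk)}],
    )
    return response.choices[0].message.content
```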

Embedding (AI)

text-embedding-3-small model (OpenAI)

Using OpenAI’s text-embedding-3-small model, an embedding was created for each chunk: a vector of 1536 features describing the semantic meaning of the text.
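A sketch of what this step might look like with OpenAI's current Python client; the exact code and the `check_dimensions` helper are assumptions, and the API call itself needs a key:

```python
EMBEDDING_DIM = 1536  # text-embedding-3-small produces 1536-dimensional vectors

def embed_chunks(chunks):
    # Requires the `openai` package and an OPENAI_API_KEY in the environment.
    from openai import OpenAI
    client = OpenAI()
    response = client.embeddings.create(
        model="text-embedding-3-small",
        input=chunks,
    )
    return [item.embedding for item in response.data]

def check_dimensions(vectors, dim=EMBEDDING_DIM):
    # Sanity check: every vector should have the expected number of features.
    return all(len(v) == dim for v in vectors)
```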

Dimensionality reduction (AI)
  • t-SNE
  • scikit-learn (Python)

t-SNE was used to project the 1536 features of each embedding down to three dimensions. By analyzing the full dataset of 2906 embeddings, t-SNE learned the optimal way of representing the embeddings in three-dimensional space while losing the least amount of information.
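The projection step can be reproduced with scikit-learn. The sketch below runs on random stand-in data (50 vectors instead of the real 2906) to keep it fast; the t-SNE parameters are assumptions, not the project's actual settings:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Stand-in data: 50 vectors with 1536 features each.
embeddings = rng.normal(size=(50, 1536))

# Project the high-dimensional vectors down to three dimensions.
points_3d = TSNE(n_components=3, perplexity=10, random_state=0).fit_transform(embeddings)
print(points_3d.shape)  # (50, 3)
```

Each row of `points_3d` is a position in the three-dimensional embedding space, ready to be drawn as a circle in the identity's visuals.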

Identity in use

Because the visual identity is based on a generative system, the dataset can be visualized in endless variations, enabling the creation of many unique expressions.


Results

With this AI-driven generative identity, we explored the role that AI can play in a design process to create an identity. By leveraging AI data and vector embeddings as creative materials, we were able to craft a distinctive and informative identity for the symposium. Through our work we aim to demystify AI for our audience, and illustrate its potential both as a tool and as inspiration.
