AI and ML will change the game for templates
Constraint-based templates can be expensive and hard to adapt.
Generative AI is cheaper, faster, and more intuitive.
This extrapolative approach to templates promises a way forward without the homogeneity!
Constraint-based templating #
In templates, constraints answer the question:
What must stay the same so that everything else can change?
Typically this templating happens when a person crafts a design. They lock things down and permit changes within safe limits. For example, taking an existing design and only allowing one area to be editable: no font changes, strict word limits, a predefined selection of photography, with the layout only adapting slightly. It’s a great way to extend the reach of predetermined content, but it nearly guarantees same-ish, unengaging outcomes en masse.
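To make that concrete, here's a minimal sketch of what such a locked-down template definition might look like. The field names, image names, and validation logic are hypothetical, not taken from any particular templating tool; the point is how explicitly every rule has to be spelled out.

```python
# Hypothetical constraint-based template slot: everything is locked down
# except one editable area, and even that area has hard limits.
editable_slot = {
    "id": "headline",
    "max_words": 8,                 # hard word limit
    "font": "locked",               # no font changes allowed
    "allowed_images": [             # choose only from predefined photography
        "hero_photo_a.jpg",
        "hero_photo_b.jpg",
        "hero_photo_c.jpg",
    ],
    "layout_flex": 0.1,             # layout may only adapt slightly
}

def validate(submission: dict) -> list[str]:
    """Return the list of constraint violations for a proposed edit."""
    errors = []
    if len(submission.get("headline", "").split()) > editable_slot["max_words"]:
        errors.append("headline exceeds the word limit")
    if submission.get("image") not in editable_slot["allowed_images"]:
        errors.append("image is not in the predefined set")
    return errors
```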
Trained extrapolation (aka AI+ML) #
Trained generative models answer the question:
What invisible rules can we decipher from the past that may adapt well in the future?
Learning what to do next, rather than setting the rules out ahead of time, is a dramatically different approach, one that makes room for truly adaptive and dynamic outcomes.
Generative extrapolation works by first seeding a model with examples of successful content, then prompting it with constraints specific to a desired outcome.
That prompt can even be a visual of a good ol’ fashioned template, with some desired text too!
In theory, this technique is forward compatible with existing templates: train a model on the output you've been living with, and the rules can largely be learned and then followed.
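As a toy illustration of learning the rule instead of writing it, here's a sketch that fits a sizing rule from a handful of hypothetical past designs rather than hard-coding a size table. The data and numbers are invented for illustration, and a real system would train a far richer generative model; the principle is the same, though: the rule comes from the output, not from the template author.

```python
import numpy as np

# Hypothetical historical designs: how designers sized headlines as they got longer.
past_headline_lengths = np.array([12, 20, 35, 48, 60, 80])   # characters
past_headline_sizes   = np.array([64, 56, 44, 38, 32, 26])   # point sizes chosen by hand

# Learn a simple linear rule from past output rather than coding a lookup table.
slope, intercept = np.polyfit(past_headline_lengths, past_headline_sizes, 1)

def suggest_size(headline: str) -> float:
    """Extrapolate a font size for a new headline from learned behaviour."""
    return float(slope * len(headline) + intercept)

print(round(suggest_size("A brand new campaign headline"), 1))
```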
Why is this a game changer? #
It was Lawrence Lessig who said:
Creativity & innovation always builds on the past. The past always tries to control the creativity that builds upon it.
Lessig was of course talking about needed reforms in copyright law; however, his statement rings true when considering the breakthroughs we’re looking at today.
When models are informed by successful contributions of the past, the future generated outcomes are heavily influenced by that past, but not limited to it.
Design-wise this solves far more gotchas than any one template could reasonably account for.
For example:
- Text layout
- Image treatments
- Visual hierarchy
- Illustrative elements
are all entire graphic design disciplines, each with countless known rules that should be enforced.
Coding a template to perform accurate optical alignment is costly, time-consuming, and use-case specific.
Training a model to learn from already optically aligned content: far less so.
There are also tooling limitations of traditional templates to consider.
An example #
Take logo-balancing for example. You know those strips of logos you see on websites or posters to show sponsors or customers?
Setting min/max size boundaries for a placeholder in any templating tool cannot account for the visual weight of an unknown future graphic within it. Small marks with thick lines can appear heavier than large ones with thin lines. A well-trained model could intuitively place and scale several marks, so long as it’s seen enough examples where real artists have done the same.
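To show why this is hard to encode by hand, here's a rough sketch of the kind of visual-weight heuristic a template would need coded explicitly. This is not a trained model; the threshold, the damping exponent, and the function names are all assumptions for illustration. A model that has seen enough hand-balanced logo strips would pick up this behaviour (and much subtler behaviour) without anyone writing it down.

```python
import numpy as np
from PIL import Image

def ink_coverage(path: str) -> float:
    """Fraction of non-white pixels: a crude proxy for a logo's visual weight."""
    grey = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
    return float((grey < 0.9).mean())

def balanced_widths(logo_paths: list[str], base_width: int = 160) -> dict[str, int]:
    """Give heavier-looking marks less width and lighter ones more."""
    weights = {p: max(ink_coverage(p), 1e-3) for p in logo_paths}
    mean_weight = sum(weights.values()) / len(weights)
    # The square root dampens the correction so no single logo is scaled too aggressively.
    return {p: int(base_width * (mean_weight / w) ** 0.5) for p, w in weights.items()}
```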
Coming up #
In future posts I’m going to dive deeper into what I see as today’s front-running stack for content extrapolation, from LLMs to GANs, with use cases ranging from the simple, like font detection, to incredible image-aware extrapolation technologies like ControlNet, which deserves its own post. There’s so much to unpack, and it’s moving at break-neck speed.
So bookmark this page or subscribe to my feed.