
Hyper Light Breaker

We are now Arc Games

We're very excited to announce that we are now Arc Games! We're the same team of passionate gamers that brought you many beloved franchises including the Remnant series, Have a Nice Death, Star Trek Online and Neverwinter. And we can’t wait to bring you even more exciting titles like Gigantic: Rampage Edition, Hyper Light Breaker, and more soon-to-be announced games for 2025 and beyond! Follow us @ArcGames and check out the #iiishowcase April 10th, 10am PDT for a new Hyper Light Breaker trailer!

Follow us on Social Media:
Facebook.com/ArcGames

Youtube.com/@ArcGames

Hyper Light Breaker at Day of the Devs @ GDC

https://www.youtube.com/watch?v=izqBiM-qzJE

See you at the Midway
https://www.dayofthedevs.com/events/

Day of the Devs returns to its roots in San Francisco for the 2024 In-Person Celebration: San Francisco Edition!

All free, with no cost to either developers or attendees. We just ask that you PLEASE RSVP.

Hyper Light Breaker: Pangea

Creating A Multiplayer Rogue-lite With Endless Open Worlds

We've shared extensively about our tech art strategies and proc gen processes, both in a recent Heart to Heart with Len White and Christian Sparks and on our dev blog. We've also discussed our environment art work in progress in a different Heart to Heart with Will Tate, as well as on our blog.

That was all months ago; often, a few months can mean a lifetime in game development.

Original Vision

Years ago now, when Alx was ideating on the design pillars of the game, the question that came to mind was "what would you do in an open world you'd never see again if you die?"

With that in mind, the full version of this idea seemed insurmountable in the early days, so we aimed for a more reasonable approach as we built our systems. We created an adjacent version, something that captured parts of this design ideal: large, open biomes, segmented in a stage-by-stage format to make them more feasible for us to build.

Over the course of development, we found that, as we continued to build the technology needed for these smaller open biomes, we could actually leverage the tools to make the original vision a reality. Thus, we shifted away from the more limiting and (ironically) more complex stage-to-stage progression, and started on a "Pangaea Shift".

The Shift



Pangaea is used as a code name, as we were essentially merging all of our stages into one larger map to create an open world.

This shift meant that we would lose some time up front reconfiguring parts of the game to function in the new structure, but gain time on the back end and a much more exciting game format to dive into. We were excited and scared, all at once.

This shift yields us:

  • Highly differentiated points of interest on a global scale, resulting in entirely new biomes to explore instead of sub-variations of the same biomes
  • Reduced per-level workload for Houdini, focusing on simpler, bolder biome elements since the context of other biomes being present shifted the dynamics of play so significantly
  • The ability to generate dynamic, global components that affect the whole run / playthrough, rather than just stage or biome-specific elements, opening up tons of exciting mechanics
  • A truly open world, procedurally generated, with biomes juxtaposed seamlessly on the same map

An open world you’ll only see once

It’s a thought that leads to a lot of questions and exciting ideas. How much do I explore this world?
How much time do I invest, knowing I could die at any turn? What are the pressures driving me forward in this world? What’s new, exciting, different this time? What’s coming next?

These are all questions we ask and answer for development, and ones we are excited for you all to see the conclusions of for yourselves in Early Access and beyond!

Wrap Up

What do you think of our process shift? Share your feedback!

See you next time, Breakers!

-The Heart Machine Team

Hyper Light Breaker: Production Process

Check out our latest stream about our production process behind Hyper Light Breaker, with Senior Producer Lesley Mathieson.


Some takeaways from the December 9th Heart to Heart stream:

  • Our approach to production tools and process has always been “I want people to feel it's pretty to use, it's obvious, and they don't have to think about it too much.” - Lesley Mathieson.
  • “The least amount of friction is the most important thing when it comes to getting people to use tools consistently. Even if it's a janky system, what we really need is for people to consistently look at what's going on.” - Alx Preston
  • Some of the tools we use are:
        • Tom’s Planner, an online Gantt chart maker
        • HacknPlan, a game design project management tool
        • MantisHub, a bug and issue tracker

Hyper Light Breaker: Animation

Our wonderful Lead Animator, Chris Bullock, shares some in-depth info on our animation process and what’s involved in getting our Leaper to this point:



What is Rigging?

Rigging is the process by which we take a 3D model and give it the ability to deform over time.

Most often this is done by giving it a virtual skeleton (or armature, in sculpture terminology), and then attaching controls that allow the animators to move the skeleton, almost like strings on a marionette. But there are also other techniques that have been used in Film, TV, and Commercial work for years that are now starting to make their way to games. For example, with Blendshape deformations, the 3D model's deformation is sculpted manually and then blended between the base model and the Blendshape(s).
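
To make the blendshape idea concrete, here's a tiny, illustrative Python sketch (not our production tooling): the deformed mesh is just the base mesh plus each sculpted target's delta, scaled by a weight.

```python
import numpy as np

def apply_blendshapes(base_verts, blendshapes, weights):
    """Blend a base mesh toward one or more sculpted target shapes.

    base_verts:  (N, 3) array of the base mesh's vertex positions
    blendshapes: list of (N, 3) arrays, each a sculpted target shape
    weights:     list of floats in [0, 1], one per blendshape
    """
    result = base_verts.copy()
    for target, w in zip(blendshapes, weights):
        # Each blendshape contributes its delta from the base, scaled by its weight.
        result += w * (target - base_verts)
    return result

# Example: blend a single-vertex "mesh" halfway toward a target shape.
base = np.array([[0.0, 0.0, 0.0]])
smile = np.array([[0.0, 1.0, 0.0]])
print(apply_blendshapes(base, [smile], [0.5]))  # -> [[0.  0.5 0. ]]
```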

The Bone Rig / Skeleton Hierarchy


Figure 1: Here, we can see the character mesh’s points (vertices) in magenta, edges in dark blue, and then the polygons shaded in gray.

As we can see, here, a character’s mesh is made up of a series of points (a.k.a. vertices) as the fundamental building blocks (magenta dots in above image), along with edges that connect the points, which are then filled-in with polygons. By adding a hierarchy of special objects called “bones” (or “joints” as they are, technically, more accurately called in some software packages) that often roughly resemble an actual skeleton for the character or creature, we’re able to get the model to deform and animate without having to move every single point on the model by hand every frame of an animation. It’s easier to move a few dozen to a few hundred bones on the character than it is to animate tens-of-thousands to millions of points on the mesh.

In order to do this, we need to tell each point on the model which bones it should inherit its movement from, and how much influence each bone has on that vertex. There's a lot of math that goes into how these transformations are actually carried out, but fortunately, we have tools at our disposal so that we don't need to assign all of this data one point at a time, which actually makes this "skinning" process more of an artistic process than a technical one: defining which parts of the model move together, to give them a more solid feel, and which parts have a "softer"/"fleshier" feel to them.
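
For the curious, the underlying math is usually some form of linear blend skinning. Here's a simplified Python/NumPy sketch (illustrative only, not our actual rigging code): each vertex is transformed by every bone that influences it, and the results are blended by the painted weights.

```python
import numpy as np

def skin_vertices(rest_verts, bone_weights, bone_matrices):
    """Classic linear blend skinning.

    rest_verts:    (N, 3) vertex positions in the rest (bind) pose
    bone_weights:  (N, B) influence weights; each row sums to 1
    bone_matrices: (B, 4, 4) transforms mapping bind space to the current pose
    """
    # Homogeneous coordinates so the 4x4 matrices can translate as well as rotate.
    verts_h = np.hstack([rest_verts, np.ones((len(rest_verts), 1))])  # (N, 4)
    # Transform every vertex by every bone, then blend by the per-vertex weights.
    per_bone = np.einsum('bij,nj->nbi', bone_matrices, verts_h)       # (N, B, 4)
    skinned = np.einsum('nb,nbi->ni', bone_weights, per_bone)         # (N, 4)
    return skinned[:, :3]
```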


Figure 2: Here we can see our bone objects in yellow and magenta along with our base mesh.

The Animation Control Rig

To make animating the bones even simpler, rigging will often involve adding Animation Controls, which allow animators to manipulate a collection of bones together as a single "system", or to isolate the movement of a bone in a non-hierarchical manner, or in a different manner from the way the bone hierarchy was set up (more on this below). The time saved during animation, when multiplied across a team of animators and the total number of animations needed, well offsets the extra time it takes to set up this animation control rig.

Currently, this is most often done in software separate from the game engine, such as Blender, Autodesk Maya, Autodesk Motionbuilder, Autodesk 3D Studio Max, SideFX Houdini, etc. Game engines such as Unreal Engine 5 are starting to allow the Animation Control Rig to be created directly inside their own editors.


Figure 3: Here, we can see the animation controls (in blue, red, cyan, magenta, orange, and yellow) along with the base mesh and skeleton.

FK – Forward Kinematics – We use this term when talking about a collection of objects being manipulated in a direct parent-child hierarchical manner. Let's say we have two objects: Object A and Object B. Let's say that Object A is Object B's "parent", and Object B is Object A's "child". This means that whenever Object A is moved, Object B will move along with it, keeping the same offset from Object A as it had at the start. However, when Object B is moved, it has no effect on Object A's position in the scene. We can see this demonstrated in the video below with the red chain of bones. You can see how, as each object in the chain is selected and manipulated, it only affects the objects below it in the hierarchy. This is the way that the character's skeleton would animate if we had no Animation Control rig. So, moving the hips would mean that we would have to move the limbs the opposite amount if we wanted them to stay planted while the hips move. This is called counter-animating, and is something we will often go to great lengths to avoid doing.
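
Here's a toy Python sketch of FK on a simple 2D bone chain (purely illustrative): each bone's world transform is built by accumulating its parents' rotations and offsets, so moving a parent carries all of its children with it.

```python
import numpy as np

def fk_world_positions(local_offsets, local_angles):
    """Forward kinematics for a simple 2D bone chain.

    local_offsets: list of (x, y) offsets of each bone from its parent
    local_angles:  list of rotations (radians) of each bone relative to its parent
    Returns the world-space position of each joint in the chain.
    """
    positions = []
    world_angle = 0.0
    world_pos = np.zeros(2)
    for offset, angle in zip(local_offsets, local_angles):
        world_angle += angle  # child inherits the parent's rotation...
        rot = np.array([[np.cos(world_angle), -np.sin(world_angle)],
                        [np.sin(world_angle),  np.cos(world_angle)]])
        world_pos = world_pos + rot @ np.array(offset)  # ...and its position
        positions.append(world_pos.copy())
    return positions

# Rotating the first bone moves every bone below it in the hierarchy;
# rotating the last bone affects only its own endpoint.
print(fk_world_positions([(1, 0), (1, 0), (1, 0)], [np.pi / 2, 0.0, 0.0]))
```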

IK – Inverse Kinematics – With body parts such as the arms and the legs, we often find it easier to deal directly with the positioning of the ends of the chain of bones, and want any bones that are between the ends to automatically bend in order to achieve the positional goals of the end bones. This is done through a computation technique known as IK. This is demonstrated with the blue chain of bones in the video, below. Notice how we have two control objects that we manipulate directly, and they indirectly control the chain of bones.
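
And here's the classic analytic two-bone IK solve in 2D (again, just an illustrative sketch, not our rig code): given the two bone lengths and a target, the law of cosines gives the angles that let the middle joint bend so the tip reaches the goal.

```python
import math

def two_bone_ik(l1, l2, target_x, target_y):
    """Analytic two-bone IK in 2D (think upper and lower arm).

    l1, l2: bone lengths
    Returns (shoulder_angle, elbow_angle) in radians so the chain's tip
    reaches the target, clamping when the target is out of reach.
    """
    dist = math.hypot(target_x, target_y)
    dist = max(abs(l1 - l2), min(l1 + l2, dist))  # clamp to the reachable range
    # Law of cosines on the triangle formed by the two bones and the
    # line from the root to the target.
    cos_elbow = (l1**2 + l2**2 - dist**2) / (2 * l1 * l2)
    elbow = math.pi - math.acos(max(-1.0, min(1.0, cos_elbow)))
    cos_shoulder_offset = (l1**2 + dist**2 - l2**2) / (2 * l1 * dist)
    shoulder = math.atan2(target_y, target_x) - math.acos(max(-1.0, min(1.0, cos_shoulder_offset)))
    return shoulder, elbow

# Reaching for a point one unit out and one unit up with two unit-length bones.
print(two_bone_ik(1.0, 1.0, 1.0, 1.0))
```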



These are just the two most common of the myriad ways of controlling how something animates. The number of ways that objects can be controlled is nearly limitless, and new techniques are being discovered/invented all the time.

The Game Engine

Once we have our character modeled, rigged, and animated, we need to get all that data into the game engine, somehow. This is done through an export/import process. Typically, the character mesh and skeleton data is exported separately from the animation data. The character’s mesh data and the skeleton (including the skinning relationship) typically get exported together. In our animation source asset files, any animation data is stored on the Animation Controls. However, the only thing that the game engine cares about is the animation that’s applied to the skeleton hierarchy—it does not care about the Animation Control data at all. So, often what happens is that the animation gets “baked” to the skeleton as a part of the export process—i.e. it just sets a keyframe for every bone in the skeleton hierarchy for every frame in the animation timeline. In the game engine, the animation data is then re-applied to the skeleton, which, in turn gets applied to the mesh, which makes our character move, finally!
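
Conceptually, the "bake" step boils down to something like this Python sketch (the evaluate/set-keyframe callbacks are hypothetical stand-ins for DCC-specific API calls, not a real exporter):

```python
def bake_animation(skeleton_bones, frame_range, evaluate_pose, set_keyframe):
    """Conceptual sketch of baking control-rig animation onto a skeleton.

    evaluate_pose(bone, frame) -> transform   # what the control rig drives the bone to
    set_keyframe(bone, frame, transform)      # a raw key stored directly on the bone
    Both callbacks are hypothetical stand-ins for software-specific API calls.
    """
    for frame in frame_range:
        for bone in skeleton_bones:
            # The exported file only needs the skeleton's final pose each frame,
            # so we sample the rig's result and key it directly onto the bone.
            set_keyframe(bone, frame, evaluate_pose(bone, frame))
```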

We're also able to set up relationships between bones inside the game engine, which allow us to drive the motion of certain bones based on the movement of other bones. In Figure 2, for instance, the bones in magenta are controlled by the bones in yellow. The reason we do this in the game engine, instead of animating them and then exporting them, is to allow those bones, specifically, to react to the way the character behaves in-game, rather than adhering to prescriptive movement (a.k.a. "canned animation"), which helps make everything feel a lot more alive and reactive.

Wrap Up

This covers some of the basics of the rigging process, and why it’s so important in a modern game production pipeline. Without this process, there is no way we would see the quality of deformation and animation in the games we love.

Let us know…

What do you think of the animation / rigging process as a whole? Is it art? Is it math? Does it seem fascinating or boring?

Heart 2 Heart: Hyper Light Breaker Animation w/ Chris Bullock + Sean Ward



Some highlights from the August 12th Heart to Heart stream:
(thoughtfully compiled by Polare)

  • An early animatic of the HLB reveal trailer
  • Character animations for Hyper Light Breaker and Solar Ash
  • The sword (being used by the promotional Blu character) is a "base weapon"
  • Bosses will be big, but not Solar Ash big
  • Jar Jar Binks (from Star Wars) means a lot to Alx
  • We are using Autodesk Maya for animation software
  • In Hyper Light Breaker, the companion floats around with your health bar

Watch this space for a more in-depth blog post on animation, coming soon!

Meet Blu

The Northern Realms are brutal. They’re cold and harsh and demanding.

This is a species (nicknamed “Blu”) from the mountainous north, where the roughness of existence leaves them generally well-suited for action and adventure. It’s a natural step for them to become Breakers.



Species, you say?

…now would be a good time to mention that we will have a limited form of CHARACTER CREATION in Hyper Light Breaker!

WHY CHARACTER CREATION?



Originally, we were planning on creating discrete characters for Hyper Light Breaker. Each character had a backstory and a somewhat rigid, predetermined personality and play style.

As we continued development, however, our animation department advocated that it would both be far more efficient and lend more flexibility to the player experience to instead introduce character creation into the game. So we converted the character of "Blu" into a species.

As our Animation Lead, Chris Bullock, puts it, “we decided to have one character "archetype", with a single, larger set of animations that could use any combination of weapons, in order to separate the gameplay from the look a little more. The hope was that instead of doing 5 sets of animations, thus requiring one for each character class where each character had a smaller set of animations that they needed, we could reduce animation scope down significantly by sticking with the single archetype.”

ABOUT BLU



They’re sinewy and fierce, careful but playful and fun, and very clever.

Our Character Concept Artist, Isaak Ramos, worked closely with Alx to develop facial expressions, outfits, looks, and poses for this species.

Take a look at these original concepts from Alx:



EVO”BLU”TION - OUR CONCEPT ART PROCESS





Based on Alx’s initial concept art, Isaak makes adjustments and explorations, expanding on the original concepts. He shares a few key components and considerations:


  • EXPLORATION: Alx will let me know if he's happy with where his initial sketches are or if he's wanting to explore more directions. In the case of the Leaper, for instance, my main objectives were to flesh out the forms since Alx's concept was already on the money. With Blu, there was a solid foundation to jump from, but we still wanted to explore and solve some important elements to the design.
  • STYLE: With Blu, we wanted to solve the top by going with a biker jacket or trench coat. I tried some poncho-looking garb, something in the direction of a Sergio Leone character. Those kinds of wearables present their own technical obstacles, so we shifted to something more manageable. The short biker jacket came about as I shifted to thinking of a character that was more nimble and athletic. Something along the lines of Canti's jacket (FLCL) with a Han Solo mood.
  • POSES + REFERENCES: As production goes along, the poses become more standard as I get the rad sculpts in from John DeRiggi and Jack Covell (character artists) to draw over. As much as I like figuring out poses, it’s better to draw over the approved proportions for the playable characters to maintain continuity and speed. For NPCs and Humanoid Enemies, my pose reference generally comes from fashion models. For the sketches and gestures, sometimes I'll go in without a reference, or I'll browse my personal library of references that I've gathered over the years. There are so many pose resources out there now. Weapon references range from museum display images to blocking things out in 3d. Outfit references usually come from a 500 hour Pinterest deep dive, ha!
  • ITERATING: Alx will go over his initial design and lore thoughts with me, so I've always got a good direction to go on from there. The rest of a character's vibe will flesh out in my head as I gather references. Blu's vibe shifted as our design goals called for different references, for instance. My personal view of the character’s attitude shifted from swift and stoic to nimble and determined as we went along. I imagined a blend of Trinity (Matrix) and Driver (Drive) as I worked on the later concepts.
  • FEEDBACK: We've formed a great pipeline where I can get solid feedback from the character centric departments. Part of that process involves me checking in with John DeRiggi (Lead Character Artist), and he's been a rock for me as we check in daily. Alx and I have always overlapped a good amount with our tastes, so a momentum is always sustained. Feedback from design and animation is always crucial too... it's all a team effort. Every concept is the culmination of good ideas and notes from across the board!






Fascinated by the concept art part of our dev process? Check out our previous piece where John DeRiggi shares our character art process! Or stay tuned for more :)

LET US HEAR FROM YOU!



Are you excited about the shift to character creation?

What do you think of this species and all their varying looks?

Procedural Generation + HyperDec

HyperDec - Intro





Originally, before it was called HyperDec, the procedural “decking” system was built out to be able to evaluate the height of terrain at a given XY position & procedurally populate those spaces with props, using seed-informed deterministic random value selections for things like position, rotation, and scale, as well as parametric variation for things like space between props, maximum count, height and slope ranges, spawn area, etc.
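
To give a feel for how seed-informed placement works, here's a heavily simplified Python sketch (names and parameters are illustrative, not HyperDec's actual API): the same seed always reproduces the same positions, rotations, and scales.

```python
import random

def scatter_props(seed, count, area, scale_range, get_terrain_height):
    """Deterministically scatter props: the same seed always yields the same layout.

    area:               (min_x, min_y, max_x, max_y) bounds to scatter within
    scale_range:        (min_scale, max_scale)
    get_terrain_height: callable returning terrain Z at a given XY
    """
    rng = random.Random(seed)  # a private RNG so other systems can't disturb the sequence
    props = []
    for _ in range(count):
        x = rng.uniform(area[0], area[2])
        y = rng.uniform(area[1], area[3])
        props.append({
            "position": (x, y, get_terrain_height(x, y)),
            "yaw": rng.uniform(0.0, 360.0),
            "scale": rng.uniform(*scale_range),
        })
    return props

# Same seed, same layout -- which is what lets a world be rebuilt identically later.
layout_a = scatter_props(42, 3, (0, 0, 100, 100), (0.8, 1.2), lambda x, y: 0.0)
layout_b = scatter_props(42, 3, (0, 0, 100, 100), (0.8, 1.2), lambda x, y: 0.0)
assert layout_a == layout_b
```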

From there, we wanted to explore applying artistic intentionality with props/clusters of props, being able to define “child spawns” that would anchor themselves around spawned instances. Pieces had filters for what kinds of surfaces they could and couldn’t spawn on, as well as custom avoidance filters and world-aligned texture masks, so users could parameterize relational behaviors between types of props, all of which were piped into a global HISM array.





After proving out simply laying out these pieces & giving them relational considerations, we moved on to zone targeting. In addition to randomized terrain on each run (more on terrain from Len), we wanted to have distinctive zones with unique props in each. Thanks to some very clever programming from Peter Hastings, Senior Gameplay Engineer, we were able to very efficiently read zone data encoded into world-aligned textures, and filter placement accordingly.
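
A simplified sketch of the zone-filtering idea (illustrative Python, not the actual engine-side implementation): sample the world-aligned zone texture at a candidate's XY position and reject candidates whose zone isn't allowed for that prop type.

```python
import numpy as np

def zone_at(world_x, world_y, zone_texture, world_size):
    """Look up the zone ID encoded in a world-aligned texture at an XY position.

    zone_texture: 2D integer array; each texel stores a zone ID
    world_size:   (width, height) of the world the texture is stretched over
    """
    u = np.clip(world_x / world_size[0], 0.0, 1.0)
    v = np.clip(world_y / world_size[1], 0.0, 1.0)
    px = int(u * (zone_texture.shape[1] - 1))
    py = int(v * (zone_texture.shape[0] - 1))
    return zone_texture[py, px]

def filter_by_zone(candidates, allowed_zones, zone_texture, world_size):
    """Keep only candidate XY positions whose zone is allowed for this prop type."""
    return [p for p in candidates
            if zone_at(p[0], p[1], zone_texture, world_size) in allowed_zones]
```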





Artists and designers could create Data-Only-Blueprint assets that would contain primary and secondary assets to spawn, and their parameters for placement on the terrain. This workflow of randomized terrain with zone identifications became the foundation of our procedural decking paradigm.

Initially, this paradigm worked out well. But over time, we ran into issues when trying to implement it at scale.

A Setback



Our existing implementation started to run into issues as it continued to grow. Rather than only placing static props, we began using the system for placement of gameplay objects and applying more robust filtering for things like flatness detection, and our terrain evaluation was happening at runtime, per prop, with prop counts getting up into the 70K - 100K range, which meant that the startup time for each run took longer and longer.

We also ran into issues with balancing density & variation with replication for multiplayer; all of these tens of thousands of objects needed to consistently show up on every player’s instance. Having all procedural placement done on the server and then passing that enormous amount of data to players on begin play was unfeasible, and so instead we would only have the server spawn gameplay relevant pieces, and then each connected client would receive a seed number from the server to feed into the client-side placement of props. Utilizing the same seed across all clients meant that even though they were spawning objects locally, they would all spawn with the same transforms informed by the seed.
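
In sketch form (reusing the scatter_props example from above; the class names here are purely illustrative), the seed-replication idea looks like this:

```python
import random

# scatter_props is the seeded scatter sketch from the HyperDec intro above.

class RunServer:
    def __init__(self):
        self.run_seed = random.getrandbits(32)  # decided once, on the server only

    def payload_for_clients(self):
        # Only this one integer crosses the network, not tens of thousands of transforms.
        return {"run_seed": self.run_seed}

class RunClient:
    def on_begin_play(self, payload, get_terrain_height):
        # Every client feeds the same seed into the same deterministic scatter,
        # so everyone ends up with identical prop transforms without replicating them.
        self.props = scatter_props(
            payload["run_seed"], count=100_000,
            area=(0, 0, 4000, 4000), scale_range=(0.8, 1.2),
            get_terrain_height=get_terrain_height,
        )
```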

While we were able to achieve a satisfying amount of variation and distinction, it became clear that the increasing generation time wouldn’t be sustainable long-term.

Rethinking Our Design Paradigm



Tech Art & Engineering sat down and re-thought our design paradigm for procedurally generated content in the game, and wound up completely re-working our implementation from the ground up.

We were able to move away from a solely blueprint-driven pipeline for procedural decking, leveraging faster C++ execution, thanks to some awesome effort put in by Justin Beales, Senior Gameplay Engineer. We also moved the per-prop terrain evaluation from runtime to design time. This allowed us to pre-determine placement of objects and then feed very simple data into a runtime system that grabbed the objects and their intended locations and placed them accordingly. Each stage's variants would have coinciding data to reference, and using a DataTable to lay out objects & parameters, we could "pre-bake" candidate points for each object type in the editor, and then save that data for quick reference on begin play. So while there are a limited number of variants as a whole, the selection of candidate points from the list could be randomized with a seed, meaning that the same variant could have unique prop/gameplay layouts every time.
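
A rough sketch of the two halves of that workflow (illustrative Python; the file format and names are made up for the example): bake candidate points per variant at design time, then do a cheap seeded selection from them at runtime.

```python
import json
import random

def bake_candidate_points(variant_name, object_types, find_points):
    """Design-time step: pre-compute valid placement points per object type.

    find_points(object_type) -> list of (x, y, z) locations that pass that
    object's filters (slope, flatness, zone, etc.) for this terrain variant.
    """
    baked = {obj: find_points(obj) for obj in object_types}
    with open(f"{variant_name}_candidates.json", "w") as f:
        json.dump(baked, f)

def place_from_baked(variant_name, seed, counts):
    """Runtime step: cheaply pick a seeded subset of the pre-baked points."""
    with open(f"{variant_name}_candidates.json") as f:
        baked = json.load(f)
    rng = random.Random(seed)
    placements = {}
    for obj, count in counts.items():
        pool = baked.get(obj, [])
        # The same variant can still produce a unique layout every run,
        # because the subset of candidate points is chosen by the run seed.
        placements[obj] = rng.sample(pool, min(count, len(pool)))
    return placements
```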



Now that we had generation in a better spot, we set out to expand on the artistic intentionality of the pieces being spawned. It became clear over time that the use of anchor-clustering & avoidance distances would not be enough to make these levels look less like math in action and more like art. This idea and conversation led to the creation of HyperFabs, which are spawned just like regular props via HyperDec, but have some more advanced logic & artistic implications.

HyperFabs



HyperFabs take the concept of Prefabs (prop or object arrangements saved as a new object for re-use) and add some additional utility & proceduralism to them.

The overall idea is that artists can lay out arrangements/mesh assemblies that are intended to represent a small part of what would normally be a hand-decorated level. They can then use a custom script we've built to store those meshes in a generated Blueprint asset, which can then be placed on the terrain. The center point of the actor will align to the terrain, but then, based on exposed rules that artists can tweak and assign to components/groups of components using Tags, the individual pieces in the HyperFab will also conform to the terrain surrounding the actor's center point in the world. It takes our original idea of relational spawning, but allows artists to lay out these relations through traditional level design tools instead of strictly through DataTable parameters.
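
Here's a heavily simplified sketch of the terrain-conforming behavior (illustrative Python; the "conform" tag stands in for the real per-component rules artists assign):

```python
def conform_hyperfab(components, actor_origin, get_terrain_height):
    """Snap a HyperFab-style assembly onto terrain.

    components:   list of dicts with a local 'offset' (x, y, z) and a set of 'tags'
    actor_origin: (x, y) world position the whole assembly is anchored at
    The 'conform' tag is an illustrative stand-in for the per-component rules
    artists would assign in the real tool.
    """
    base_z = get_terrain_height(*actor_origin)
    placed = []
    for comp in components:
        wx = actor_origin[0] + comp["offset"][0]
        wy = actor_origin[1] + comp["offset"][1]
        if "conform" in comp["tags"]:
            # Tagged pieces hug the terrain under their own footprint...
            wz = get_terrain_height(wx, wy) + comp["offset"][2]
        else:
            # ...while untagged pieces keep their offset from the anchor's height.
            wz = base_z + comp["offset"][2]
        placed.append((wx, wy, wz))
    return placed
```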


A boulder assembly turned into a HyperFab, made by Will in Enviro

It doesn't have to be just for small arrangements, though; entire city blocks have been baked into a HyperFab, which conforms to varying terrain as expected.


A city block assembly turned into a HyperFab, made by Wolf in Enviro

The script for baking HyperFabs from mesh assemblies is smart enough to know when to use static mesh components versus mesh instancing, and it also has a utility to merge stacked/co-dependent objects into new static mesh assets, which helps with performance & automation.

Other cool bits



Shoreline Generation



A neato bit of tech I worked on before we used terrain variants was shoreline generation. Since terrain was being generated using a voxel system, each playthrough generated terrain that was completely random. (But also much harder to control or make look as nice as our new approach!) This meant that we couldn't pre-determine shoreline placement, whether through splines, decals, or shader stuff.

After a bit of research, I learned about Jump Flooding, an algorithm that can generate a distance texture from bits of data in a texture in UV space. In the case of shorelines, I captured an intersection range of the terrain and used that as a base mask. That mask was then jump-flooded to give us a gradient, which could be fed into the UV channel of a basic waves-mask texture running perpendicular to the direction of the wave lines. Using some additional time math and noise modulation, waves could pan along that distance gradient, with shape and body breakup, controls for distance-from-shore, wave count, and initial capture depth.
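
For anyone curious what jump flooding actually does, here's a small CPU-side Python/NumPy sketch of the algorithm (the real thing runs in shaders, and border handling is simplified here): starting from a seed mask, each pass lets every pixel adopt a closer seed from a neighbor "step" pixels away, halving the step each time.

```python
import numpy as np

def jump_flood_distance(mask):
    """Approximate distance (in pixels) from every pixel to the nearest seed pixel.

    mask: 2D boolean array where True marks seed pixels (e.g. the shoreline capture).
    Borders are handled by wrapping offsets around, which is a simplification for this demo.
    """
    h, w = mask.shape
    FAR = 10 ** 6  # placeholder "no seed known yet" coordinate
    nearest = np.full((h, w, 2), FAR, dtype=np.int64)
    ys, xs = np.nonzero(mask)
    nearest[ys, xs, 0] = ys
    nearest[ys, xs, 1] = xs

    yy, xx = np.mgrid[0:h, 0:w]
    step = max(h, w) // 2
    while step >= 1:
        for dy in (-step, 0, step):
            for dx in (-step, 0, step):
                # Look at the neighbour 'step' pixels away; adopt its seed if it's closer.
                shifted = np.roll(np.roll(nearest, dy, axis=0), dx, axis=1)
                cur = (nearest[..., 0] - yy) ** 2 + (nearest[..., 1] - xx) ** 2
                cand = (shifted[..., 0] - yy) ** 2 + (shifted[..., 1] - xx) ** 2
                closer = cand < cur
                nearest[closer] = shifted[closer]
        step //= 2

    return np.sqrt((nearest[..., 0] - yy) ** 2 + (nearest[..., 1] - xx) ** 2)
```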



Flatness Detection



Another challenge we ran into for procedural placement was flatness-range detection; some objects had platform-like bases that needed an acceptable range of flatness so that they weren't perched awkwardly on the side of a cliff or floating on little bumps in the terrain. The first iteration of flatness detection used traces from randomly selected points in a grid formation, comparing each sample's offset against the average height, with a configurable failure tolerance and grid resolution, before determining if a point was flat enough.
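
A simplified sketch of that first-pass check (illustrative Python, not the actual trace-based implementation): sample heights in a jittered grid over the object's footprint and count how many samples stray too far from the average.

```python
import random

def is_flat_enough(center, footprint, get_terrain_height,
                   grid_resolution=4, max_height_spread=25.0, max_failures=2, seed=0):
    """First-pass flatness test: sample a grid of points around a candidate location
    and check how far each sample strays from the average height.

    footprint: half-size of the square area the object needs (world units)
    """
    rng = random.Random(seed)
    cx, cy = center
    samples = []
    for ix in range(grid_resolution):
        for iy in range(grid_resolution):
            # Jitter within each grid cell so we don't always test the same spots.
            x = cx - footprint + (ix + rng.random()) * (2 * footprint / grid_resolution)
            y = cy - footprint + (iy + rng.random()) * (2 * footprint / grid_resolution)
            samples.append(get_terrain_height(x, y))
    avg = sum(samples) / len(samples)
    failures = sum(1 for h in samples if abs(h - avg) > max_height_spread)
    return failures <= max_failures
```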



While this approach did find flat areas, it was costly & prone to prolonged searching, resulting in a block in the generation process while platforms found their place. After we moved candidate point determination to design time, we reworked the search function to use the terrain point data in a similar grid-check fashion, using grid space partitioning to speed up the referencing of bulk points, which led to this fun little clip of the proof-of-concept, showing an object finding approximate nearest neighbors with no collision/overlap checks, just location data.
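
The grid space partition itself is a simple idea; here's an illustrative Python sketch: bucket points into coarse cells so a query only scans the handful of cells its search radius overlaps, instead of every point on the terrain.

```python
from collections import defaultdict

class PointGrid:
    """Bucket points into square cells so a query only scans nearby cells."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _cell(self, x, y):
        return int(x // self.cell_size), int(y // self.cell_size)

    def insert(self, point):
        self.cells[self._cell(point[0], point[1])].append(point)

    def nearby(self, x, y, radius):
        """All stored points within 'radius' of (x, y), checking only overlapping cells."""
        r_cells = int(radius // self.cell_size) + 1
        cx, cy = self._cell(x, y)
        found = []
        for ix in range(cx - r_cells, cx + r_cells + 1):
            for iy in range(cy - r_cells, cy + r_cells + 1):
                for p in self.cells.get((ix, iy), []):
                    if (p[0] - x) ** 2 + (p[1] - y) ** 2 <= radius ** 2:
                        found.append(p)
        return found
```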



While this did divert the computational cost of determining flatness over distance from runtime to design time, it was still very slow and presented a blocker to design & environment when pre-baking asset candidate points. After a bit of research, jump-flooding came to the rescue again.

The workflow for flatness-range detection works in a series of steps. First, you get a normal map of the terrain you're evaluating and mask it by slope, with anything below a configurable slope value counted as flat, and anything above it as too steep.


White areas are flat, black areas are too steep or below shoreline height

We then invert this output to provide a sort of "cavity mask" of areas that were flat enough for placement. But we also needed to know how far a point was from the nearest non-flat area, so that we didn't pick a point that was flat enough at that exact spot but not flat across the full size/footprint of the object we were searching for. To solve for this, we jump-flood that slope/cavity mask, and then transpose the 0-1 values represented in the output texture's UV space into their world-space equivalent, based on the size of the terrain. This gave us a distance mask that we could then threshold, returning us to the yes-or-no mask configuration that could be read at each point evaluation.
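
Putting those steps together, here's a CPU-side Python/NumPy sketch of the pipeline (the production version runs as shader passes; this reuses the jump_flood_distance sketch from the shoreline section):

```python
import numpy as np

def flatness_mask(heightmap, texel_size, max_slope, min_flat_radius):
    """CPU sketch of the pipeline: slope mask -> jump flood -> distance threshold.

    heightmap:       2D array of terrain heights
    texel_size:      world units per texel
    max_slope:       steepest height gradient still considered 'flat'
    min_flat_radius: how far (world units) a point must be from any steep area
    Reuses jump_flood_distance from the shoreline sketch above.
    """
    # Slope from the height gradient; anything steeper than max_slope is 'not flat'.
    gy, gx = np.gradient(heightmap, texel_size)
    steep = np.hypot(gx, gy) > max_slope
    # Distance (in texels) from every texel to the nearest steep texel...
    dist = jump_flood_distance(steep)
    # ...thresholded back into a yes/no mask: flat here AND flat across the footprint.
    return dist * texel_size >= min_flat_radius
```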




Because all of these steps are running with shader calculations instead of collision queries or trace loops, the time to find flat-range points for assets decreased so much that the generation time is nearly indistinguishable when baking points with and without flatness checks. Yay shaders! Here are some fun gifs of the values for distance & slope being changed when creating a flatness mask.



Breaker Terrain Generation Basics



The Hyperdec terrain process generates the foundational landscapes upon which all other art and gameplay assets are placed. The ideal for a rogue-like would be that every run of every stage is completely unique in decking AND landscape. However, pretty early on we ruled out completely procedural landscape generation simply because of the R&D time it would have entailed. We also had the notion that our gameplay would require some very tight control over the kinds of hills, valleys, waterways, and other landscape features that emerged. In a fully procedural landscape system, running in seconds as a player waited, we might get landscapes that just plain broke gameplay at times; this was unacceptable. So we went with semi-procedural.

Our approach is to generate a large, though not infinite, collection of terrain meshes off-line that, when combined with our highly randomized Hyperdecking system, can give the impression of limitless, fresh gameplay spaces every time you play. Initially we explored voxel-based terrain, since it was an artist-friendly way to quickly build interesting terrain shapes. This was eventually abandoned as the run-time implications of voxels were an unknown and we didn’t have the R&D time available to ensure their success.

Work continued with algorithmic island generation spearheaded by Peter Hastings. Many of the features present in this early work exist in our current terrain as well.


Procedural Island Generation, Peter Hastings

At some point it was clear that iteration and experimentation would put serious strain on the purely algorithmic approach. This led to adopting the procedural tool Houdini as the master terrain generator. This was especially useful since we could directly translate all the algorithmic work into Houdini and then further refine the topology in later parts of the Houdini network. First, algorithms were directly re-written in Python and then later in Houdini's native VEX language for speed. Further, Houdini is effective at generating lots of procedural variations once a network is producing solid results on a smaller scale. Our goal is to have at least 100 variations of each stage to draw from during play, and using Houdini allows a single network to drive all variations.


A bird’s eye view of a Houdini sub-network generating a component of the terrain


One of the current terrain variants for one stage, without any hyperdecking

For many of our stages, each terrain is effectively an island composed of sub-islands, each of which is assigned a "Zone". A Zone is basically like a biome in that it is intended to have a look and feel clearly distinct from other zones. They are intended to look good but also help the player navigate and get their bearings as they move around the play space. In order to provide these features in every terrain variant, a combination of noise fields and specific scattering of necessary topological features occurs in the Houdini network. Each stage has a different take on this basic formula, and R&D is ongoing on how to get more compelling, interesting caves, hills, nooks and crannies without creating game-breaking problems (like inescapable pits, for example).


Visualizing a walk through the Houdini processing chain that converts a circle into terrain.

The animated image above shows one processing chain that starts with a basic circle geometry delineating the overall footprint of the island and then, via a chain of surface operators, eventually ends up as playable terrain. Many of the operations involve random noise that contributes to the differences between variations. Both Houdini height fields (2D volumes) and mesh operators are employed at different points to achieve different results. The initial circle is distorted, then fractured, to yield the basis of a controllable number of separate sub-islands. Signed distance fields are calculated from the water's edge (z=0) to produce the underwater-to-beach transition slopes. More specific mesa-type shapes are scatter-projected into the height field to yield controllable topology that plays well compared to purely noise-generated displacements. In the final section, geometry is projected at the boundary area into the height field as a mask, distorted via noise fields, and displaced to create the stage's outer perimeter. The full chain of operations can generate a large number of unique terrains that all exist within constraints set out by game design.

Another feature that exploits the fact that our terrains are not pure height fields is cave tunnels and caverns. These are generated as distorted tube-like meshes that are then subtracted from a volume representation of the above mesh. We are excited to push cave-tech (tm) in the future to generate some interesting areas for discovery for the player.

Unfortunately, to produce production-quality terrains, the resolution of the resulting mesh also needs to increase, which is starting to slow Houdini down compared to the early days when everything processed so briskly. These are relatively large meshes which get converted back and forth between mesh, height field, and voxel representations to get the job done. As production moves forward and we start generating all the variants needed for gameplay, the plan is to offload processing to a nightly job on a build machine so no one has to sit at their screen for hours watching the wheel spin.

Articles & Sources:



Jump Flood Process in UE4:
https://www.froyok.fr/blog/2018-11-realtime-distance-field-textures-in-unreal-engine-4/

Flatness Detection Abstract:
https://gamedev.stackexchange.com/questions/125902/is-there-an-existing-algorithm-to-find-suitable-locations-to-place-a-town-on-a-h

Grid Space Partition Process:
https://gameprogrammingpatterns.com/spatial-partition.html#:~:text=Our grid example partitioned space,contains many objects%2C it's subdivided.

Wrap Up



As you can see, our team has spent considerable effort executing on thoughtful procedural generation in order to make the flow of game levels feel coherent and intentional.

Want more stuff about procedural generation? Len also did this talk on tech art in Solar Ash!

Let Us Hear From You!



What do you think of what you’ve seen (and heard) so far?

Are you a tech artist or aspiring to be one? How would you have tackled these issues?

Meet Melee Wretch & Leaper

Meet Melee Wretch





Character Art by John DeRiggi

Wretches are monstrous mutated soldiers.



Original Concept Art by Alx Preston

Meet Leaper





Character Art by Jack Covell

Leapers are rare prototype soldiers who have undergone body modification experiments.



(top) Original Concept Art by Alx Preston; (bottom) Final Concept Art by Isaak Ramos

Our Character Art Process + Inspirations



John DeRiggi, Lead Character Artist, shares a bit about the character art process:

Heart Machine has a history of creating vibrant, colorful worlds that often deviate from current games. True to this goal, Hyper Light Breaker characters are inspired by traditional cel animation, like the work of Miyazaki and Studio Ghibli, combined with a watercolor painting approach. Hopefully you can see this in the concepts and 3D models of the Melee Wretch and Leaper enemies.

A key ingredient here is the character’s material and its reaction to light. Games can sometimes use materials included with a game engine but often a custom material is needed to achieve the game’s artistic vision. Since graphics programmers and technical artists create the code behind materials, a custom material from scratch requires their time.

Because we are still a smaller studio, our technical resources are often constrained, and we could not devote this larger chunk of custom material time to Breaker. We are therefore using a new material from the Unreal Engine Marketplace, called Stylized Rendering System. This gives us the base for our cel-shaded look in various light and shadow conditions.

Our character art team can then customize this material and create our cel-shaded, watercolor look with a combination of hand-painted and procedural textures in Substance 3D Painter. This tool allows us to paint like traditional artists in 3 dimensions on a sculpture, but do so digitally, in line with our intended style goals for Breaker. When these textures combine with the cel-shaded material properties, we are able to achieve a really fun result!

What’s up Next?



On the rigging / animation side, we'll soon be sharing what Chris Bullock, Lead Animator, and his team worked on based on these characters, and all the decisions and trade-offs that had to be considered. Stay tuned!

Heart to Heart w/ Will Tate: Environment Art for Hyper Light Breaker

Alx sat down with Will Tate, Lead Environment Artist on Hyper Light Breaker, to talk about game art, careers in environment art, and more!




  • Environments will have day/night lighting cycles
  • Hyper Light Breaker became an idea before HL was done
  • Winter area was confirmed (winter areas are also Alx’s favorite kind of area)


Plus some previews of the hub, and stages 1 and 2 of the world: