Why and how to optimize 3D data?
3 December 2021
The creation of 3D applications requires the use and processing of data, some of which our customers may already have in various formats.
This raises the following question: can the customer data be used directly, does it require a processing phase, or is it unusable as such?
To answer these questions, the following two criteria must be considered:
- Is the application a real-time application?
- What is the target system (a PC, a virtual or augmented reality headset, a tablet/smartphone, the web)?
The answer to these two questions will define the shape of the data in the application: its size/weight, its polygon count, its format, etc.
Nowadays, the models provided by most customers come from “surface” modeling software. Surface modeling is mainly aimed at the design stages of a product, where the emphasis is placed on aesthetics and style rather than on technical aspects. It lends itself well to visualization and presentation, as the objects appear more realistic.
But these data cannot be used to fulfill most customer requests because they lack optimization. Therefore, the 3D graphic designer team (re)models this 3D data to optimize it.
Here is a quick explanation of why it is important to optimize the data, and how to do it.
Why optimize 3D CAD data?
The 3D models provided are rather “dense”, i.e. they contain several million polygons (mostly triangles, but sometimes more complex shapes), mainly because of their precise measurements and their aesthetic and technical details.
However, these polygons can weigh down or even hinder the final experience (web configurator, 3D/VR/AR experience, interactive showroom, etc.) if they are not optimized. Indeed, the more polygons there are, the heavier the lighting and computational load, and the slower the simulation.
The goals of optimization are twofold:
- To reduce the amount of computational load required,
- To improve the visual aspect of the final experience.
So how does the 3D graphic designer team at Light And Shadows optimize 3D data?
First and foremost, the 3D graphic designer team will apply what is called topology optimization or retopology. This process aims to significantly reduce the weight of a 3D model by shrinking the number of polygons.
At the beginning, we have a so-called high poly model: a high-definition model containing an extremely large number of polygons, up to several tens of millions. Retopology then removes the unnecessary geometry so that only the essential shapes remain, without altering the properties and characteristics (overall shape, surfaces, etc.) or the quality of the 3D model. The result of this first optimization step is a so-called low poly model.
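In practice, retopology at this level is artistic, manual work. Still, the underlying idea of reducing polygon count while keeping the essential shape can be sketched with a classic automatic technique, vertex clustering: snap vertices to a coarse grid and drop the triangles that collapse. This is only an illustration of the principle, not the team's actual workflow.

```python
# Sketch of polygon reduction by vertex clustering (an illustration only;
# production retopology is done by hand by 3D artists). Vertices are
# snapped to a grid of size `cell`; triangles whose corners fall into the
# same cell degenerate and are dropped, shrinking the polygon count.

def decimate(vertices, triangles, cell=1.0):
    """Collapse vertices into grid cells and drop degenerate triangles.

    vertices:  list of (x, y, z) tuples
    triangles: list of (i, j, k) vertex-index tuples
    Returns (new_vertices, new_triangles).
    """
    cell_of = {}        # grid cell -> new vertex index
    remap = []          # old vertex index -> new vertex index
    new_vertices = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in cell_of:
            cell_of[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(cell_of[key])
    new_triangles = []
    for a, b, c in triangles:
        a, b, c = remap[a], remap[b], remap[c]
        if a != b and b != c and a != c:   # keep only non-degenerate faces
            new_triangles.append((a, b, c))
    return new_vertices, new_triangles
```

A larger `cell` merges more vertices and removes more triangles, trading detail for weight, which is exactly the trade-off retopology manages by hand.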
In this example, the high poly model of the bumper is made up of nearly 92,000 polygons. This is a model of little use for our purposes. Moreover, if we look at it more closely, we notice heterogeneous patterns with entangled edges (pictures below).
The 3D graphic designer team at L&S will therefore re-do the mesh and re-model the 3D data so that the polygons are as homogeneous as possible and easier to use. Here, we managed to reduce the number of polygons to 4,000 (a reduction of 95.65%!).
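The figure quoted above can be checked directly from the two polygon counts:

```python
# Arithmetic check of the reduction quoted above: 92,000 -> 4,000 polygons.
high_poly = 92_000
low_poly = 4_000
reduction = (high_poly - low_poly) / high_poly * 100
print(f"{reduction:.2f}%")  # prints "95.65%"
```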
UV and texturing preparation
After the retopology, the texturing of the 3D model must be prepared. One of the key conditions is being able to “flatten” the model: all of its polygons must fit in one plane/image, in order to create the “skin” which, once textured, will perfectly wrap the model.
To this end, it is a common practice to use UV or UV mapping.
It is a process that applies a 2D texture (an image) onto a 3D model by unfolding it, in a way similar to the cutting patterns used in sewing. “UV” refers to the U and V axes that define the 2D image plane. In other words, the UVs link a surface mesh to the way an image texture is applied to that surface. This unfolding step is impossible with the models provided as input, because of the number of polygons and the complexity of their layout. Hence the need to reduce the number of polygons.
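The simplest form of this idea can be sketched in code: a planar projection that assigns each vertex a (u, v) coordinate in [0, 1]. Real unwrapping cuts seams and flattens the surface with minimal distortion, which is far more involved; this sketch only illustrates what UV coordinates are.

```python
# Sketch: planar UV projection onto the XY plane, normalized to [0, 1].
# Only a toy illustration of UV coordinates; it works acceptably for
# roughly flat surfaces and distorts badly elsewhere.

def planar_uvs(vertices):
    """Map each (x, y, z) vertex to a (u, v) pair in [0, 1] x [0, 1]
    by dropping z and rescaling the XY bounding box."""
    xs = [v[0] for v in vertices]
    ys = [v[1] for v in vertices]
    min_x, min_y = min(xs), min(ys)
    span_x = (max(xs) - min_x) or 1.0   # avoid division by zero
    span_y = (max(ys) - min_y) or 1.0
    return [((x - min_x) / span_x, (y - min_y) / span_y)
            for x, y, _ in vertices]
```

Each (u, v) pair tells the renderer which pixel of the texture image to sample for that vertex, which is exactly the link between mesh and image described above.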
Moreover, the best way to match the textured skin with the 3D data is to use quads, i.e. four-sided polygons. This makes it easier to match each quad to a piece of the image.
In the images below, we can see that the technique used for UV unfolding on the high poly data gives a completely stretched and warped result. On the other hand, using the same technique on the low poly data, the result is proper (homogeneous and without distortion).
To check the unfolding process, and to highlight here the impact of a good or bad unfolding on our 3D models, we use a checker (or checkerboard) texture to observe possible deformations.
A good unfolding will give a checkerboard with clean lines and homogeneous proportions. This is the situation here for the unfolding of the optimized low poly model. On the contrary, a bad unfolding will give a deformed checkerboard, with broken lines and heterogeneous proportions, as for the unfolding of the non-optimized high poly model.
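The checker texture itself is trivial to produce, which is why it is such a common diagnostic. A minimal sketch:

```python
# Sketch: generate the checker pattern used to inspect an unwrap. Mapped
# onto a model, a good unwrap shows squares of even size with straight
# edges; a bad unwrap stretches and breaks them.

def checkerboard(width, height, square=8):
    """Return a 2D grid of 0/1 values forming a checker pattern,
    with squares `square` pixels wide."""
    return [[(x // square + y // square) % 2 for x in range(width)]
            for y in range(height)]
```

Because every square has the same size in the texture, any variation in square size or edge straightness on the rendered model comes from the unwrap, not from the image.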
So, these optimization steps require some work:
- A first pass on the input data to check it and make sure it is usable (it is sometimes necessary to reduce polygons beforehand, to rebuild the structure of the model, etc.),
- The work of the graphic designers in (re)modeling and texturing.
The data is then ready to be used in the targeted application!
Eager to find more information about 3D tech?
Light & Shadows believes in the potential of 3D technologies. We strive to better meet the needs of each of our clients, in order to provide them with a memorable virtual experience.
You can find all the information about the integration of 3D technologies in the marketing process by downloading our white paper (English version).
Feel free to contact us for more specific information about our projects!
What is the metaverse?
If there is a word that is currently in the headlines alongside NFTs, it is definitely the metaverse.
Semantically, the word metaverse combines the Greek prefix “meta”, meaning “beyond”, with “verse”, a contraction of “universe”.
Thus, the metaverse would mean “beyond our universe”: a universe transcending our reality, a digital universe. The term was first used in the novel “Snow Crash” by Neal Stephenson (1992).
It is a difficult concept to grasp, even to explain, because the metaverse is still in its early stages of conception and remains abstract for many. Furthermore, there are still many gray areas due to the multitude of divergent opinions.
Indeed, in its original vision, the metaverse painted a rather dreamlike picture of the next social networks and the internet in general: a place where people could meet and do all sorts of activities, as in a persistent world. For some enthusiasts, this concept resembles the science fiction scenario of Ready Player One (2011).
Moreover, many consider “Second Life” one of the forerunners of the metaverse. This game, released in 2003, still allows its users to embody virtual characters in a world created by the residents themselves. So far nothing very complicated, you may say, but the metaverse as we perceive it today would be much more complex: it could mix virtual reality (VR) and augmented reality (AR), and would be accessible at any time, with a notion of a market where purchases would go through the blockchain, a decentralized network of trust that allows the transfer of digital values such as money and data.