Let’s take a look at some concepts that will help you gain a better understanding of layout control in Blender 2.8. In this video, we’ll cover:
Every single window type in detail
Creating and deleting custom tabs
Creating custom themes and loading from the theme preset library
Saving your custom workspace as the new default
Splitting and joining windows
Changing window types
This is one of several upcoming Blender tutorial videos, so stay tuned for more! If you’re not already, consider subscribing to be notified when new videos are posted. If you’re just getting started in Blender, check out my introduction video. Should you have any questions, feel free to drop a comment below, or ask using my contact card at the bottom of the home page.
The transition from Blender 2.79 to 2.8 has completely changed the way users take control of their layouts. To be honest, the default layout tabs have covered my needs about 95 percent of the time. Every so often I need to pull up a new window, like a timeline that wasn’t there before, but that’s about the extent of my layout modifications. Good luck, have fun, and keep creating!
I will say, this project was born out of a strange, confusing, and quite frankly stressful time. My job description was very much up in the air, and there was even uncertainty around this entire project. Unfortunately, it never saw the light of day, but I searched hard drives all over the office and finally found it! I just wanted to post it because I still really love the idea, and I will probably revisit the concept on a special page here on mattjones.tech.
Assignment
The ask for this project was to create a web page that really engaged visitors and explained the concept of baptism, along with various beliefs about baptism that were applicable to the church. Admittedly, it was a largely vague assignment with plenty of room for interpretation. So instead of creating a typical page design with images, I decided to take it up a notch.
Concept
The existing site was built on WordPress. The idea was to simulate a water splash in 3D, “freeze” the water in time, create some text blocks in and around the different areas of the splash, and animate a camera to stop at each of the text blocks. Once the splash and camera animation were in place, the objects, animations, and materials would be imported into a ThreeJS scene, tweaked until the render looked similar to what I had in Blender, and the camera would be animated along its path based on scroll direction. Once that was complete, just paste that code block onto a WordPress page and you’re done!
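If you’re curious what that handoff might look like in practice, glTF is the route I’d take from Blender to ThreeJS. Here’s a rough sketch of the export side using Blender 2.8’s bundled glTF exporter; the filename and option values are just illustrative, not pulled from the actual project:

```python
# Export the splash scene from Blender as a glTF binary for ThreeJS.
# A minimal sketch; filepath and options are placeholders.
import bpy

bpy.ops.export_scene.gltf(
    filepath="//baptism_splash.glb",  # "//" means relative to the .blend file
    export_format='GLB',              # single binary file, easy to load on the web
    export_animations=True,           # include the camera path animation
    export_apply=True,                # apply modifiers so the frozen splash exports as-is
)
```

On the ThreeJS side, GLTFLoader would pull that .glb into the scene, and that’s where the material tweaks and the scroll-driven camera would happen.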
As you can see from the video above, I was able to simulate the water, block out some dummy text, and get the camera animations in place for a proof of concept. The next step would have been getting approval from the client, but unfortunately, the project was dropped before it could progress. Still, I’d love to take this project all the way to completion just to see if it’s even possible. In all honesty, this is a workflow I’ve never executed in a real-world scenario, but conceptually, I can imagine all the technical bits coming together fairly seamlessly.
Thoughts on Design
Looking back on this rushed design, if I were to reattempt this project, I would probably design a completely vertical experience. This would more closely mimic the actual movement of being baptized, and give opportunities to stop at each stage of submersion and rise for explanatory blocks of text. A vertical design would also have a great opportunity to shine on mobile screens.
Emphasis on Flexibility and Ease of Use
The only thing I can anticipate rethinking would be the implementation of the text blocks themselves. “Real” HTML text blocks (“real” as in “easily editable”) would be extremely beneficial here, since they’d meet the almost guaranteed need to modify the text later. And when you’re rendering a 3D world live in the viewport anyway, it makes sense to implement live-rendered, easily editable text rather than Blender text objects converted to mesh for rendering in the page.
Well… I guess it’s time. I’ve been working on this animation for longer than I care to admit, but I’m definitely ready to release this thing into the wild, take my lessons and move on. And when I say ‘lessons’… I mean LOTS of lessons. And I’m so glad I tackled this project the way that I did. I had some triumphs and some failures, and best of all I learned more about 3D animation during this project than I have in a very long time. Lots of familiar concepts like cell fractures, rigid body physics, and particle sims, as well as TONS of new stuff like character animation, rigging, interactive cloth simulation, clothing stitching, procedural shaders, and loads more. So here it is in all its glory, the intro animation for “Never Forgotten”:
In preparation for my first ever animated short film, I’m gonna need some characters. Which means I’m gonna need to create some. Which means I need to get better at sculpting. That means sculpting bootcamp.
This is just a personal sculpting bootcamp challenge to help me improve my sculpting and pay more attention to human anatomy. I’m not sure how long this challenge will go for, but I’ll just keep at it until I feel like I can sculpt out a character without any real heartache. My goal is to just be able to take a reference image(s) and reproduce something similar that I’m happy with.
I think it’s just a matter of developing a level of skill where I can 1) recognize how a human body and face are supposed to look, so that I can 2) know why the sculpt doesn’t look right, and 3) know how to fix it. That’s essentially my goal with this self-assigned sculpting bootcamp.
So here’s the very first attempt at a thing with no reference. Tomorrow I will have a reference. I swear getting started was the hardest part for me. But hey, here’s to learning something new.
A few weeks back, I tried my hand at creating desert dunes out of a plane. As part of that same project, this week I’m using the Cell Fracture addon to destroy some statues. I started off with a couple of characters created in MakeHuman. A couple people on blender.chat mentioned that MakeHuman hadn’t been updated in a while, or wasn’t being actively developed. Regardless, the version I used made it super easy to generate a couple of characters to use for statue destruction.
Exporting to Blender
Once you’ve got everything how you want it with your character, it’s time to export. Going from MakeHuman to Blender used to be difficult and require a special plugin and weird file extensions. Now, you can just kick out a simple .DAE file and drop it straight into Blender.
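And if you ever want to script that step instead of clicking through File > Import, it’s a one-liner in Blender’s Python console (the path here is just a placeholder):

```python
# Import a MakeHuman-exported Collada (.dae) file into the current scene.
import bpy

bpy.ops.wm.collada_import(filepath="/path/to/character.dae")  # placeholder path
```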
Using the Cell Fracture Addon
Once in Blender, I posed the characters exactly how I wanted them, then applied the armature. In edit mode, I separated the parts of the mesh I wanted to fracture. I didn’t want the whole thing, just bits like the hand and shoulders. Next, I fractured using a small number of pieces, around 50. I had to mess with the level of subdivisions and number of pieces to avoid weirdness in complex areas like hands and fingers.
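For anyone who’d rather drive that from a script, here’s roughly what the fracture step looks like through the Python API. This is a sketch, not the exact code from my file; I’m assuming the addon that ships with Blender 2.8, and the parameter names are worth double-checking against your build:

```python
# Fracture the active (separated) mesh with the Cell Fracture addon.
import bpy

# The addon ships with Blender 2.8 but is disabled by default.
bpy.ops.preferences.addon_enable(module="object_fracture_cell")

# With the hand/shoulder mesh selected and active:
bpy.ops.object.add_fracture_cell_objects(
    source_limit=50,  # cap on the number of shards, same ballpark I used
    margin=0.001,     # tiny gap between shards so they separate cleanly
)
```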
Animating vs Simulating
Typically, I’d simulate the pieces after they’re generated, but for this effect I wanted a surreal, hyper slow-mo look, so I just hand-animated the pieces I wanted to break away.
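The hand animation itself is nothing fancy: a key on each shard at its resting spot, then a key on a slow drift. If you wanted to rough that in with a script instead of keying every shard by hand, it’d look something like this (the “Fracture” collection name is made up for the example):

```python
# Key each shard frozen in place, then drift it out over a long frame
# range for that hyper slow-mo feel. Collection name is hypothetical.
import bpy
from mathutils import Vector

for i, shard in enumerate(bpy.data.collections["Fracture"].objects):
    shard.keyframe_insert(data_path="location", frame=1)    # frozen at the start
    shard.location += Vector((0.0, 0.0, 0.02 * (i + 1)))    # tiny drift per shard
    shard.keyframe_insert(data_path="location", frame=250)  # slow-mo endpoint
```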
Shoutout to @loranozor for requesting this walkthrough! I don’t do a Blender smoke simulation every day, but one of the biggest takeaways from learning my way through this project was the difference between the resolution divisions of the smoke domain and the resolution divisions under the “high resolution” checkbox.
Smoke Domain Resolution
Basically, as I understand it, the resolution of the smoke domain defines how many voxels are used in the simulation. The higher the voxel count, the more accurate the main body of smoke. Use the domain resolution to shape the main look of your smoke sim. If I’m not mistaken, the little cube in the corner of your smoke domain helps you visualize the size of a single voxel, so you can get a rough idea of your simulation scale before you even bake it.
“High Resolution” Divisions
Once you’ve got the main shape and behavior of your simulation looking the way you want, it’s time to enable the “high resolution” checkbox. This is essentially like applying a subsurf modifier to your smoke: it keeps its general shape and behavior, but the high resolution divisions add that extra little bit of “wispiness” for added realism and resolution.
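Both settings live on the smoke domain and are scriptable if you want to batch-test values. Here’s a rough sketch for 2.8’s (pre-Mantaflow) smoke sim; the object name is a placeholder and the property names are worth verifying in your build:

```python
# Dial in the two resolution settings on the smoke domain.
import bpy

domain = bpy.data.objects["Smoke Domain"]  # placeholder object name
settings = domain.modifiers["Smoke"].domain_settings

settings.resolution_max = 64         # base voxel divisions: shapes the main body of smoke
settings.use_high_resolution = True  # the "high resolution" checkbox
settings.amplify = 2                 # extra divisions layered on top for that wispiness
```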
If you’re interested in learning more about Blender smoke simulation, check out Mantaflow. It’s a great branch of Blender pushing the boundaries of smoke and fluid sims!
My name is Matt and I’ve been using Blender for over 10 years. Today I came to understand the difference between Subsurf and Multires, and I’d like to share that information with you now.
Subsurf?
Subdivision Surface is a modifier that adds virtual geometry to your mesh, giving it a smoother appearance. The extra geometry isn’t real until you apply the modifier, and it’s added evenly across the entire mesh.
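For reference, adding one from Python is a couple of lines (a minimal sketch, same thing as Ctrl+1/2/3 in the viewport):

```python
# Add a Subdivision Surface modifier to the active object.
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="Subdivision", type='SUBSURF')
mod.levels = 2         # virtual subdivisions shown in the viewport
mod.render_levels = 3  # usually a step higher at render time
```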
Multires?
The multiresolution modifier adds editable virtual geometry to your mesh. The extra geometry is editable in sculpt mode, allowing you to add finer detail to parts of your mesh, leaving other parts untouched. You can step up and down the different levels of resolution, retaining selective detail.
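And the Multires equivalent, again as a minimal sketch: add the modifier, then subdivide it a couple of times to create the levels you can sculpt on:

```python
# Add a Multiresolution modifier and build up sculptable levels.
import bpy

obj = bpy.context.active_object
mod = obj.modifiers.new(name="Multires", type='MULTIRES')

# Each call adds one subdivision level:
bpy.ops.object.multires_subdivide(modifier=mod.name)
bpy.ops.object.multires_subdivide(modifier=mod.name)

mod.sculpt_levels = 2  # level active while sculpting; step down to see the base shape
```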
Best Use Case? Which One Do I Pick?
Most of the time, I use Subsurf. It’s just a general, quick way to add extra geometry and smooth out your model. Multires is best (and almost exclusively used) for sculpting. Once you get that extra detail in there, you can use the high-poly Multires model to bake out a normal map to toss into your material. TL;DR:
Subsurf: general smoothing.
Multires: specific to sculpting high details and baking later.