Review: Creating Simulations for Autonomous Vehicles

Freek Hoekstra has joined the Applied Intuition team and talked to us about how the company procedurally creates virtual test environments for autonomous vehicles.

In case you missed it: our previous interview with Freek.

A New Career Turn

80.lv: Freek, it’s been a while since we last talked. What have you been working on? You’ve moved from one company to a very interesting new place and have been working on some exciting things.

Yeah, I did! I joined Applied Intuition, which has been phenomenal (we’re hiring!).

We work on simulation for autonomous vehicles, creating virtual test environments where our customers can test their autonomous vehicle algorithms safely and effectively against consistent scenarios before they go on the road.

I’m very passionate about this field. I think autonomous driving will help society in many ways and I’m super excited to be a part of this change!

Virtual Testing

80.lv: Why use Simulation? Can’t you just test vehicles in the real world? 

Great question - there are a couple of good reasons for this. First of all, when testing in the real world, it’s typically very hard to guarantee that you’ve fixed a bug: reproducing the exact same scenario is extremely challenging, as the real world does not repeat itself.

In the virtual world, we can run the same test over and over again until it works and also generate thousands of similar cases to be absolutely sure the issue is fixed. 

Also, every time there’s a change, we can run automated tests to make sure there are no regressions, and we can do that without deploying the tech to the vehicle, which takes a lot of time. Furthermore, we can run all those tests simultaneously in the cloud. It’s very fast compared to testing with a handful of vehicles that need to be operated by safety drivers who, in turn, need to be briefed about the changes and can’t run 24/7. It all comes down to speeding up the development cycle for autonomous vehicle engineers.

Ideally, real-world testing should be targeted specifically at known problem areas or areas where virtual scenario coverage is low.

Work Tasks

80.lv: Can you talk a bit about the role you currently have? What are your main tasks today at Applied Intuition?

My main task is to create the content that fills the worlds for our sensor simulation product. Our simulation suite contains many sub-products, all focused on testing, validating, and developing different aspects of autonomous driving. I have been working on tools for automatic, photorealistic, procedural environment generation.

These environments vary from city to suburban, rural to even offroad, forests or deserts. The variety is enormous, and creating a solution that supports all of it is extremely challenging, but I wouldn’t have it any other way.

It’s my job to make those environments, but also to create consistent materials within them, so that sensors deliver accurate returns - not just cameras, but also Lidars, Radars, and more. This last bit is very important: representing sensors in a physically accurate way has become one of our key competitive advantages, and it is honestly hard to distinguish the simulated returns from reality, even for us.

Another aspect is the weather. Having materials that accurately react to rain, forming puddles that affect Lidar returns, and really providing a platform that mimics the real world in all conditions is crucial for our clients.

Our pipeline is proprietary, so we can’t say too much, but roughly it works by taking in lanes and creating a unified road surface out of them. Then, we add sidewalks, planters, etc. next to the road, and buildings, trees, parking lots, etc. in the negative space. After that, we finally apply our shaders.
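
As a rough, purely illustrative sketch of that staged flow (none of the names or data structures below come from Applied Intuition’s actual tooling), it might look something like this in Python:

```python
# A minimal, hypothetical sketch of the staged flow described above:
# lanes -> unified road surface -> roadside furniture -> negative space -> shaders.

def unify_lanes(scene):
    # Pretend each lane is a polyline; the "surface" here is just their union.
    scene["road_surface"] = [pt for lane in scene["lanes"] for pt in lane]
    return scene

def add_roadside(scene):
    # Sidewalks, planters, etc. placed directly next to the road surface.
    scene["roadside"] = ["sidewalk", "planter"]
    return scene

def fill_negative_space(scene):
    # Buildings, trees, parking lots go in the space left over by the road.
    scene["negative_space"] = ["building", "tree", "parking_lot"]
    return scene

def apply_shaders(scene):
    # Last step: assign materials that stay consistent across camera/lidar/radar.
    scene["shaded"] = True
    return scene

def build_world(lanes):
    scene = {"lanes": lanes}
    for stage in (unify_lanes, add_roadside, fill_negative_space, apply_shaders):
        scene = stage(scene)
    return scene

if __name__ == "__main__":
    print(build_world([[(0, 0), (100, 0)], [(0, 3.5), (100, 3.5)]]))
```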

A quick look at our pipeline, input to output (note that this is all automated and nothing was hand-placed):


Collaboration with Toyota

80.lv: Tell us a little bit about this fantastic new project with Toyota. What were the main goals here?

We had a really great collaboration with the User Experience team at the fabulous Toyota Research Institute - Advanced Development, Inc. (TRI-AD UX) and worked together on a demo for the Tokyo Motor Show to demonstrate what an autonomous vehicle experience could feel like in 2030.

We accepted the challenge to produce a procedural pipeline that allowed us to create many iterations of the requested environment to support an interactive experience in a simulated driving scenario. This complemented the amazing work done by the growing team at Toyota Research Institute - Advanced Development (TRI-AD), who worked on the vehicle and self-driving aspects of Arene, their UX Design, Research, and Simulation platform.

Over the course of the project, we made lots of changes to sightlines and the vehicle path to minimize nausea, which was a concern as we actually used an actuated platform with a VR headset.

Overall, we’re very happy with the quality of the final product given the short timeframe, the high frame-rate requirements, and the many revisions, and we want to congratulate Lexus again on a very well-received product.

It’s also worth noting that it's not just a demo, it's actually an active research vessel, allowing TRI-AD UX to test different UX layouts and designs and maximize comfort and usability in their future vehicles.

If you want to read more about it, we have a write-up here.

Tools Used in the Pipeline

80.lv: What were the solutions you’ve used here and why did you decide to use procedural generation tools to create the training space?

We use Houdini a lot, but also other packages where needed. All the procedural tools are custom, although occasionally we borrow from the excellent work of Paul Ambrosiussen and Luiz Kruel at SideFX Labs.

Proceduralism was a must: to generate worlds of this magnitude at the required speed, it’s the only choice. We get requests to build worlds the size of GTA’s, and we don’t have years to make them - when we get a map, we strive to have a version within days to a week, depending on the complexity.

Furthermore, we need to create multiple variations of the same scene, and producing that by hand would be so cost-prohibitive that it would be impossible.

We also use photogrammetry, Maya, Blender, the Quixel suite, Substance, and more - whatever ends up being the right tool for the job.

Terrain Generation

80.lv: Can you tell us a little bit about the way you’ve approached the asset production for these amazing pieces of the landscape? How did Houdini and its non-destructive workflow help you create better and more interesting spaces?

We tried a few things. Originally, we got OSM data and DEM data for the heightmap and recreated the area that was shown in the promo video (see below), but it was ultimately decided that this was too intense, and a more relaxing experience was desired.

So instead, we made a manual path curve that could be adjusted to what the project needed and created a terrain system that would automatically adjust the landscape to fit the road, producing a steep rock face where the road cut into the mountain. It could also automatically add camber, tilting the road towards the inside of the turn, with controls to customize the strength of these effects.
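
As a toy illustration of the camber idea (completely hypothetical - the real terrain system works on Houdini geometry, not point lists, and the sign convention is arbitrary here), a per-point tilt toward the inside of a turn could be derived from the path curve roughly like this:

```python
import math

def signed_turn_angle(p_prev, p, p_next):
    # Signed angle between incoming and outgoing segments (left turn > 0).
    ax, ay = p[0] - p_prev[0], p[1] - p_prev[1]
    bx, by = p_next[0] - p[0], p_next[1] - p[1]
    return math.atan2(ax * by - ay * bx, ax * bx + ay * by)

def camber_along_path(path, strength=0.5, max_tilt_deg=8.0):
    """Return a per-point roll angle (degrees) tilting toward the inside of the turn."""
    tilts = [0.0]
    for i in range(1, len(path) - 1):
        angle = signed_turn_angle(path[i - 1], path[i], path[i + 1])
        tilt = max(-max_tilt_deg, min(max_tilt_deg, math.degrees(angle) * strength))
        tilts.append(tilt)
    tilts.append(0.0)
    return tilts

# Example: a gentle left-hand bend.
path = [(0, 0), (10, 0), (20, 2), (30, 6), (40, 12)]
print(camber_along_path(path))
```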

A big thing Houdini helped us with was the rock face. Normal terrain tools don’t tend to deal well with vertical cliffs and overhangs, and we did not have the time to manually place a bunch of rocks every time we moved the road.

Another huge time-saver was all the plant instances and the road UVs. It was all automatic - re-adjust the path of the road, and two minutes later everything is in the engine and you’re testing the new track.

Import into Unity

80.lv: How did you assemble it all and visualize the final project in Unity? How does it work? What are the interesting things you’ve learned when adapting all these huge pieces of terrain to Unity?

We knew that we would always stay on the road, so we created a heavy optimization pass that would aggressively clip anything outside of close-up views, based on distance and occlusion. Poly-reduction was also used a lot, mainly based on the distance from the track.
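
A minimal sketch of what distance-based reduction can look like (the thresholds and names here are made up, not production values):

```python
def reduction_ratio(distance_to_track, near=20.0, far=300.0, min_keep=0.05):
    """Fraction of polygons to keep for an asset, based on distance to the drive path.

    Assets within `near` metres keep full detail; beyond `far` metres they keep
    only `min_keep` of their polygons (or could be culled entirely).
    """
    if distance_to_track <= near:
        return 1.0
    if distance_to_track >= far:
        return min_keep
    t = (distance_to_track - near) / (far - near)
    return 1.0 + t * (min_keep - 1.0)   # linear falloff from 1.0 to min_keep

for d in (5, 50, 150, 400):
    print(d, round(reduction_ratio(d), 3))
```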

Everything was imported into Unity either as an .fbx file written to the Unity project folder or, for the instances, as a .bgeo file, which let us control their density and randomize their positions.
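
As a hypothetical illustration of the density-and-jitter idea (the actual .bgeo handoff and the Unity-side instancing are not shown), the scattering logic is roughly:

```python
import random

def scatter_instances(points, density=0.6, jitter=0.5, seed=42):
    """Keep a `density` fraction of candidate points and jitter the survivors.

    `points` could come from a scatter exported alongside the geometry;
    here they are plain (x, y, z) tuples for illustration.
    """
    rng = random.Random(seed)
    placed = []
    for x, y, z in points:
        if rng.random() > density:
            continue  # thin the instances to the requested density
        placed.append((x + rng.uniform(-jitter, jitter),
                       y,
                       z + rng.uniform(-jitter, jitter)))
    return placed

candidates = [(i * 2.0, 0.0, j * 2.0) for i in range(5) for j in range(5)]
print(len(scatter_instances(candidates, density=0.5)))
```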

The total terrain is actually only ~150K polygons and heavily optimized, so we didn’t need to get into streaming. It all fit very comfortably on the GPU and ran at over 90 FPS.

Map Size

80.lv: If you’re building a space for AI and car training, how big can you actually get? How do you approach the generation of urban spaces or bigger pieces of space for training? Is it all procedural?

We can synthesize road layouts, use HD maps (lane-based map formats), or convert OpenStreetMap data as required.
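
To give a sense of what "lane-based" means in practice, here is a toy sketch that turns a lane centerline into left/right boundaries - real HD-map formats carry far richer data (connectivity, lane types, elevation), all of which this ignores:

```python
import math

def offset_polyline(centerline, offset):
    """Offset a 2D polyline sideways by `offset` metres (positive = left of travel).

    A crude per-segment normal offset, purely for illustration.
    """
    out = []
    for i in range(len(centerline) - 1):
        (x0, y0), (x1, y1) = centerline[i], centerline[i + 1]
        dx, dy = x1 - x0, y1 - y0
        length = math.hypot(dx, dy) or 1.0
        nx, ny = -dy / length, dx / length   # left-hand normal
        out.append((x0 + nx * offset, y0 + ny * offset))
    out.append((x1 + nx * offset, y1 + ny * offset))
    return out

def lane_boundaries(centerline, lane_width=3.5):
    return (offset_polyline(centerline, +lane_width / 2),
            offset_polyline(centerline, -lane_width / 2))

left, right = lane_boundaries([(0, 0), (50, 0), (100, 10)])
print(left[0], right[0])
```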

The size of the map depends, but we’ve done some seriously big maps of well over 50 km², similar to an open-world game. At that size, with very dense urban environments, you can start to hit performance issues, whereas with rural environments you can scale up considerably more.

Fidelity vs. Randomization

80.lv: While you’re building a virtual space for AI training, how important is it to have it all precisely realistic? And what are the challenges of building this type of content for AI testing in general?

Many people grapple with the question of fidelity, and "how good is good enough" is still an active area of research and debate. Some immediately jump to physically accurate path tracing and perfect photorealism, while others don’t focus on fidelity at all and rely solely on domain randomization.

We’ve found that the answer can vary for different use cases and sensors and therefore have developed a variable fidelity approach. The domain gap between real and synthetic data can vary hugely depending on the focus of the AI task. 

Overall, I’d say fidelity is important, but variety is key. Many papers suggest that mixing synthetic data with real data can improve real-world performance, which lets us fight the biases we see in real datasets. For example, we rarely see a vehicle upside down, so there may not be enough examples in a real dataset, but we can generate that data easily. Really interesting research by Joshua Tobin and his co-authors shows that, in fact, the best strategy may be to randomize everything that you want the ML algorithm to ignore.
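
A minimal sketch of that domain-randomization idea, assuming made-up parameter names and ranges: everything the model should ignore gets randomized, while rare label cases (such as an upside-down vehicle) can be sampled deliberately:

```python
import random

def random_scene(rng):
    # Nuisance parameters the detector should learn to ignore.
    return {
        "sun_elevation_deg": rng.uniform(0, 90),
        "sun_azimuth_deg":   rng.uniform(0, 360),
        "ground_texture_id": rng.randrange(200),
        "camera_height_m":   rng.uniform(1.2, 2.0),
        # The quantity we care about stays in the annotation; rare cases
        # (an upside-down vehicle) are deliberately over-sampled here.
        "vehicle_roll_deg":  rng.choice([0.0, 0.0, 0.0, 180.0]),
    }

rng = random.Random(0)
dataset = [random_scene(rng) for _ in range(5)]
for sample in dataset:
    print(sample)
```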

For us, the other key has been testing and validation. Just like people validate PBR materials, you need to validate that your content works correctly with all the sensors that you model.
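
In the same spirit as PBR albedo checks, a validation pass might flag implausible material values before they reach the sensor models - a toy example with made-up numbers:

```python
# Flag materials whose base reflectance falls outside a plausible physical range.
PLAUSIBLE_ALBEDO = (0.02, 0.95)   # rough illustrative bounds, not a standard

materials = {
    "asphalt_wet": 0.05,
    "lane_paint":  0.80,
    "suspicious":  0.99,   # brighter than fresh snow - probably an authoring error
}

lo, hi = PLAUSIBLE_ALBEDO
for name, albedo in materials.items():
    status = "ok" if lo <= albedo <= hi else "FAIL"
    print(f"{name:12s} albedo={albedo:.2f} -> {status}")
```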

As you can tell, we put a lot of emphasis on making things look believable. But at the same time, we target large worlds and solid performance, even with multiple sensors simultaneously.

Afterword (And Job Opportunities)

As mentioned before, we are hiring - especially skilled engineers and tech artists! Please take a look at our webpage to see if we have something for you, or, if you think you have a specific skill we could benefit from, make sure to hit us up!

Besides, feel free to ask questions - I’d be happy to answer them in any way I can. Overall, I think my work on autonomous vehicle applications has been an amazing opportunity to use my CG powers for good, and I couldn’t be happier.

Freek Hoekstra, Procedural Artist

Interview conducted by Kirill Tokarev

