It's time to summarize the implementation of my Particle Level Set fluid simulator and plan for future development.
It started off by combining Stam's stable fluids solver with Foster and Fedkiw's particle-enhanced level set approach. Then I immediately jumped to some recent developments, such as vortex particles, the MacCormack method, and divergence-free velocity extrapolation. Some of these extensions were successful, and some were not. Throughout the whole process, the foundational methods for the level set have been the fast marching method (FMM) for reinitialization and the semi-Lagrangian method for advection. Helped by the particles, this is what Enright et al. (2004) advocated and what a series of other authors used (Losasso's octree method, Guendelman's coupling to thin shells, and Losasso's multiple level sets and melting solids into liquids). The octree is great, but its implementation is too complex. Without it, however, results look very bad unless a high-resolution grid is used. Yet the original paper by Enright et al. (2002) produced highly realistic results. How can that be?
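As an aside, the semi-Lagrangian advection underpinning all of the above is worth a quick sketch, since its simplicity is exactly what makes it attractive. This is my own illustrative 2D version (collocated velocities, first-order Euler backtrace, row-major storage), not the solver's actual code:

    #include <vector>

    // Bilinear sample of a 2D field stored row-major on an nx-by-ny grid;
    // coordinates are in cell units and clamped to the domain.
    double bilerp(const std::vector<double>& f, int nx, int ny, double x, double y)
    {
        if (x < 0) x = 0;  if (x > nx - 1.001) x = nx - 1.001;
        if (y < 0) y = 0;  if (y > ny - 1.001) y = ny - 1.001;
        int i = (int)x, j = (int)y;
        double fx = x - i, fy = y - j;
        return (1-fx)*(1-fy)*f[j*nx+i]     + fx*(1-fy)*f[j*nx+i+1]
             + (1-fx)*fy    *f[(j+1)*nx+i] + fx*fy    *f[(j+1)*nx+i+1];
    }

    // Semi-Lagrangian advection of phi: trace each grid point backward
    // through the velocity field (u, v) for one step, then sample old phi.
    void semiLagrangian(const std::vector<double>& phiOld, std::vector<double>& phiNew,
                        const std::vector<double>& u, const std::vector<double>& v,
                        int nx, int ny, double dt, double h)
    {
        for (int j = 0; j < ny; ++j)
            for (int i = 0; i < nx; ++i) {
                double x = i - dt*u[j*nx+i]/h;   // backtraced position, cell units
                double y = j - dt*v[j*nx+i]/h;
                phiNew[j*nx+i] = bilerp(phiOld, nx, ny, x, y);
            }
    }

It is unconditionally stable but very diffusive, which is part of why the particle correction, and the higher-order schemes below, matter so much.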
In recent months, I have been modifying the code to replace the FMM with solving the reinitialization equation using 5th-order WENO and 3rd-order Runge-Kutta. Furthermore, level set advection is now also done with 5th-order WENO and 3rd-order Runge-Kutta, and the particles are advected with 3rd-order Runge-Kutta. Essentially, I have turned the code into Enright's 2002 incarnation. The disadvantage is the performance hit, but there is an advantage too: the code is now readily parallelizable for multi-threading and multi-processors.
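For the record, here is a minimal sketch of those two building blocks, assuming a uniform grid with phi stored as a flat array; the names weno5 and tvdRK3 and the std::function plumbing are my own illustrative choices, not the solver's actual code. weno5() takes five consecutive one-sided divided differences of phi and blends the three candidate stencils with the standard Jiang-Shu smoothness weights; tvdRK3() is the Shu-Osher stepper, which serves both level set advection and the reinitialization equation d(phi)/d(tau) + S(phi0)(|grad phi| - 1) = 0.

    #include <functional>
    #include <vector>

    // WENO5 derivative approximation from five consecutive one-sided
    // divided differences v1..v5 of phi (Jiang-Shu weights).
    double weno5(double v1, double v2, double v3, double v4, double v5)
    {
        auto sq = [](double x) { return x * x; };
        // Smoothness indicators: large on stencils that straddle a kink.
        double s1 = 13.0/12.0*sq(v1 - 2*v2 + v3) + 0.25*sq(v1 - 4*v2 + 3*v3);
        double s2 = 13.0/12.0*sq(v2 - 2*v3 + v4) + 0.25*sq(v2 - v4);
        double s3 = 13.0/12.0*sq(v3 - 2*v4 + v5) + 0.25*sq(3*v3 - 4*v4 + v5);
        const double eps = 1e-6;  // fixed regularizer; a common simplification
        double a1 = 0.1/sq(s1 + eps), a2 = 0.6/sq(s2 + eps), a3 = 0.3/sq(s3 + eps);
        // Third-order candidate approximations of the derivative.
        double p1 =  v1/3.0 - 7.0*v2/6.0 + 11.0*v3/6.0;
        double p2 = -v2/6.0 + 5.0*v3/6.0 + v4/3.0;
        double p3 =  v3/3.0 + 5.0*v4/6.0 - v5/6.0;
        return (a1*p1 + a2*p2 + a3*p3) / (a1 + a2 + a3);
    }

    using Field = std::vector<double>;

    // Shu-Osher TVD RK3: advances phi by dt under a spatial operator L(phi),
    // e.g. L = -u.grad(phi) for advection, or L = -S(phi0)*(|grad phi| - 1)
    // for reinitialization, with gradients built from weno5() above.
    void tvdRK3(Field& phi, double dt, const std::function<Field(const Field&)>& L)
    {
        const Field phi0 = phi;
        const size_t n = phi.size();
        Field k = L(phi0);
        Field phi1(n), phi2(n);
        for (size_t i = 0; i < n; ++i)  // stage 1
            phi1[i] = phi0[i] + dt*k[i];
        k = L(phi1);
        for (size_t i = 0; i < n; ++i)  // stage 2
            phi2[i] = 0.75*phi0[i] + 0.25*(phi1[i] + dt*k[i]);
        k = L(phi2);
        for (size_t i = 0; i < n; ++i)  // stage 3
            phi[i] = phi0[i]/3.0 + (2.0/3.0)*(phi2[i] + dt*k[i]);
    }

For the upwind-biased derivative phi_x^- at cell i, the five arguments are the backward differences (phi[i-2]-phi[i-3])/h through (phi[i+2]-phi[i+1])/h; feeding in the mirrored stencil gives phi_x^+.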
Here is the present status, although it's six years behind the front of current research. You may notice all the mass loss when liquid sheets thin out; this is the best I can do at this grid resolution.
What's next? Clearly the octree is an ideal candidate, as it can resolve those thin sheets the underlying grid is missing, but it requires some major data structure changes in the code. In addition, for any decent production simulation, even an octree couldn't handle all the details on a single processor. So that leaves the only option: parallel execution. This should be the way my solver goes, and fortunately my previous code changes make the task relatively easy. For example, ILM couldn't finish the shots for "Poseidon" until Fedkiw's group at Stanford made their PhysBAM software capable of running on multiple processors; they used 8 to 16 processors. Frank Losasso ran the hero simulation on 40 processors for the maelstrom shot in "Pirates of the Caribbean: At World's End".
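To give a flavor of why my recent changes make this tractable, here is a hedged sketch (my own illustration, not PhysBAM or my solver's actual interface) of how a single RK3 stage from the code above multi-threads with one OpenMP directive, since every cell writes only its own slot from read-only inputs:

    #include <vector>

    // One RK3 stage, out = a*phi0 + b*(phiIn + dt*k), multi-threaded.
    // No cell reads another cell's output, so no locking is needed.
    // Compile with -fopenmp; the pragma degrades gracefully without it.
    void rkStage(const std::vector<double>& phi0, const std::vector<double>& phiIn,
                 const std::vector<double>& k, std::vector<double>& out,
                 double a, double b, double dt)
    {
        #pragma omp parallel for schedule(static)
        for (long i = 0; i < (long)phi0.size(); ++i)
            out[i] = a*phi0[i] + b*(phiIn[i] + dt*k[i]);
    }

Calling it with (a, b) = (0, 1), (3/4, 1/4), and (1/3, 2/3) reproduces the three stages of tvdRK3 above. Going from threads to multiple processors is a bigger step (domain decomposition and ghost-band exchange), and the pressure solve needs a parallel solver of its own.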
Moreover, only when a simulation is run at a sufficient resolution are the removed particles ejected in areas that match where, in reality, the water's surface tension breaks, so they can be used to represent spray and bubbles. See the image below for reference.
ILM used a new fluid dynamics engine developed in cooperation with Stanford University to create the ocean, including waves, turbulence, and bubbles.