1. In an empty 3D space, generate millions of Platonic solids with random positions, orientations, colors, and other properties: some as big as boulders, others as small as motes of dust.
2. Take pictures of the resulting scene from many different angles.
3. See which of those pictures most closely matches the actual incoming video frame from reality.
4. Randomly change the orientation, size, and every other property of the particles that did not help the rendering approach the video frame.
5. Take another picture from the same angle, feed it back into step 4, and repeat until you...
6. ...train an AI model to intelligently move and change the "sand" (the cloud of tiny solids) until it matches the ground-truth video frame exactly.
7. Try to predict the next frame of the video by shifting the sand from its last and previous positions into its most likely continuation, possibly with a transformer architecture and terabytes upon terabytes of RAM.
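Steps 1 through 5 amount to a random-search fitting loop: render the particle cloud, compare it to the target frame, perturb some particles, and keep the change only if the match improves. Below is a minimal toy sketch of that loop, with assumptions made for brevity: a 2D grayscale "frame" stands in for a rendered camera view, soft disks stand in for the Platonic solids, and a simple horizontal gradient stands in for the ground-truth video frame. None of these specifics come from the note itself.

```python
import numpy as np

rng = np.random.default_rng(0)
H = W = 32
# Hypothetical stand-in for the ground-truth video frame: a horizontal gradient.
target = np.tile(np.linspace(0.0, 1.0, W), (H, 1))

N = 64  # particle count; the note imagines millions, a toy uses dozens
# Each particle: x, y, radius, intensity (a crude stand-in for full pose/color).
particles = rng.random((N, 4))
particles[:, 2] = particles[:, 2] * 0.2 + 0.05  # radii in [0.05, 0.25]

ys, xs = np.mgrid[0:H, 0:W]
ys, xs = ys / H, xs / W

def render(p):
    """Step 2: 'take a picture' by painting each particle as a disk."""
    img = np.zeros((H, W))
    for x, y, r, c in p:
        img[(xs - x) ** 2 + (ys - y) ** 2 < r ** 2] = c
    return img

def loss(p):
    """Step 3: how far the picture is from the incoming frame (MSE)."""
    return float(np.mean((render(p) - target) ** 2))

initial = best = loss(particles)
for step in range(2000):
    # Step 4: randomly perturb a few particles' properties...
    trial = particles.copy()
    idx = rng.integers(0, N, size=4)
    trial[idx] += rng.normal(0.0, 0.05, size=(4, 4))
    trial[:, 2] = np.clip(trial[:, 2], 0.01, 0.5)
    trial[:, 3] = np.clip(trial[:, 3], 0.0, 1.0)
    # Step 5: ...re-render, and keep the change only if the match improved.
    l = loss(trial)
    if l < best:
        particles, best = trial, l

print(f"MSE: {initial:.4f} -> {best:.4f}")
```

Because a perturbation is accepted only when the error drops, the loop can never get worse than its random start; step 6's trained model would replace these blind perturbations with learned, directed ones.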
This builds a simulation of observable reality with arbitrary, viewpoint-based resolution. Old draft: http://zeroprecedent.com/Platonic_glitter.pdf (photogrammetry/LiDAR are not strictly necessary)
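For step 7, the simplest possible "shift the sand into its most likely continuation" is constant-velocity extrapolation from the last two particle states; the note's actual proposal is a transformer over particle states, for which this is only the trivial baseline. The snapshot arrays below are invented for illustration.

```python
import numpy as np

# Two hypothetical past snapshots of particle positions (one row per particle).
prev = np.array([[0.1, 0.2, 0.3],
                 [0.5, 0.5, 0.5]])
last = np.array([[0.2, 0.2, 0.3],
                 [0.5, 0.6, 0.5]])

# Constant-velocity extrapolation: shift each particle by its last displacement.
pred = last + (last - prev)
```

A learned predictor would earn its keep exactly where this baseline fails: occlusions, accelerations, and objects appearing or vanishing between frames.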