The goal of this joint laboratory, named Cineviz, is to offer the film industry a set of new previsualisation tools that (i) ease the creation of cinematographic sequences in virtual environments before the shooting (previsualisation stage, or previs), and (ii) prepare the technical implementation of these sequences during the shooting (technical visualisation stage, or techvis). These tools stand in stark contrast with existing ones, which are essentially based on generic 3D modelers: complex to use, not easily accessible to film professionals, and not adapted to the specific needs of the movie industry.
This labcom aims at two major shifts in the field: (i) merging the shooting and lighting stages for the rapid and creative exploration of cinematographic sequences in 3D environments, and (ii) merging the previs and techvis stages for a better preparation of the shooting stage. The expected impacts are a reduction of production costs through better preparation, and support for creativity through a rapid exploration of possibilities.
The Cinecitta project is a research project funded by the ANR (French National Research Agency) dedicated to previsualisation. Its focus was to design smart tools that assist filmmakers in the design of cinematographic sequences. The project explored and evaluated a new workflow mixing user interaction and automated computation for interactive virtual cinematography, with the aim of better supporting user creativity. In particular, we proposed a novel workflow in which artificial intelligence techniques generate a large range of viewpoint suggestions, which users can explore as starting points for creating shots and performing cuts. Typically, users reframe the selected viewpoints to their needs, shoot the sequence, and request further suggestions for the next shots. One way of interacting with such a system is through motion-tracked cameras: devices whose position and orientation are tracked in a real environment and mapped onto a virtual camera in a virtual environment. Enabling a proper mix between the hints provided by an automated system and the interactive possibilities offered by a motion-tracked camera represents an important scientific challenge and potentially leads to a strong industrial impact.
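The real-to-virtual mapping behind a motion-tracked camera can be sketched as a rigid transform with a scale factor, so that small physical movements drive a camera in a much larger virtual set. The following minimal sketch is illustrative only; the function name, the uniform scale, and the world offset are assumptions, not part of the Cinecitta system.

```python
import numpy as np

def map_tracked_pose_to_virtual(real_position, real_rotation,
                                scale=1.0, world_offset=None):
    """Map a tracked camera pose (position + 3x3 rotation matrix) from
    real-world coordinates to a virtual-camera pose.

    Hypothetical parameters:
      scale        -- uniform factor so a small tracking volume can cover
                      a large virtual set
      world_offset -- translation placing the tracking-volume origin
                      somewhere in the virtual environment
    """
    if world_offset is None:
        world_offset = np.zeros(3)
    # Scale and translate the position; orientation is transferred as-is.
    virtual_position = scale * np.asarray(real_position, dtype=float) \
                       + np.asarray(world_offset, dtype=float)
    virtual_rotation = np.asarray(real_rotation, dtype=float)
    return virtual_position, virtual_rotation
```

With `scale=2.0` and `world_offset=(0, 0, 1)`, a tracked position of `(1, 2, 3)` would place the virtual camera at `(2, 4, 7)`, keeping the operator's orientation unchanged.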