'Current', 2019, Moscow, Russia

A speculation on the future of volumetric broadcasting cinema, participatory archiving, and their impact on urbanism.

Current is a speculation on the future of broadcasting cinema. It emerges from the intersection of contemporary trends in livestreaming culture, volumetric cinema, AI deepfakes, and personalized narratives. The film Current is an experiential example of what this cinema might look and feel like within a few years, based on the convergence of these trends. Artificial intelligence increasingly molds the clay of the cinematic image, optimizing its vocabulary to project information in a more dynamic space, embedding data in visuals, and directing a new way of seeing: from planar to global, flat to volumetric, personal to planetary.

In the contemporary contest over algorithmically recommended content, the screen time of scrolling between livestreams has become a new form of cinema. ‘Current’ experimented with various AI image-processing technologies and volumetric environment-reconstruction techniques to depict a future where every past account has been archived into an endless stream. History, from the Greek ‘historia’ by way of Latin, means the art of narrating past accounts as stories. What will become of our urban environment if every single event is archived in real time with such accuracy that there is no room for his-story? This implies an economy of values with potential in multiple streams beyond social media, as the content deep-learns from itself.

Beginning in 2019, ‘Current’ is a continuous series of volumetric films encompassing front-page stories from around the world, broadcast in real time using livestream media technologies. Livestream is a new form of moving image in which content is generated and broadcast simultaneously. Its real-time quality gives rise to an attention economy that circulates values distinct from those of traditional moving-image media such as movies and television. First, it encompasses extraordinary moments alongside an infinite feed of the mundane, suggesting a sense of ‘truth’ to its audience. Second, instead of sitting through a standardised running time, the mundane quality of livestream gives its audience the freedom to step in and out of the stream at any moment. Third, it facilitates a participatory authorship, in which interactions between audience and streamer collaboratively direct, narrate, and curate the experience.

Volumetric cinema is the perception of information in three-dimensional space. Instead of compressing our 3D world onto 2D planes, technologies such as point clouds and 3D reconstruction record and project data in 360 degrees, minimising the reduction of the image data's complexity. It is a form of cinema that is immersive as well as expansive; there is no negative space in any scene and no behind the camera - an expanded cinema. When coupled with livestream, it has the potential to preserve every detail of every past occurrence at full scale, directing a new way of perception: from planar to global, flat to volumetric.

Along these lines, ‘Current’ seeks to configure a new aesthetic vocabulary of cinematology, expanding the spectrum of aesthetic semblance and intelligence and questioning truth and identity in contemporary urban phenomena. ‘Current’ experimented with a range of digital technologies readily available to any individual (e.g. machine learning, environment reconstruction, low-end 3D scanners). Alongside the volumetric film, the project developed a production pipeline using distributed technologies, which provides a means for individuals to reconstruct, navigate, and understand event landscapes that are often hidden from us.
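
As a rough illustration of the planar-to-volumetric step described above, the sketch below back-projects a single livestream frame into a point cloud. It assumes the Open3D library and two hypothetical input files (an RGB frame grabbed from a stream and a depth map estimated for it by a monocular-depth network); neither the filenames nor the library choice come from the project itself.

    import open3d as o3d

    # Hypothetical inputs: a livestream frame and its ML-estimated depth map.
    color = o3d.io.read_image("frame_000123.png")
    depth = o3d.io.read_image("frame_000123_depth.png")

    # Fuse colour and depth into a single RGBD image.
    rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
        color, depth, convert_rgb_to_intensity=False)

    # Back-project every pixel into 3D space using a default camera model,
    # turning the flat frame into a navigable point cloud.
    intrinsic = o3d.camera.PinholeCameraIntrinsic(
        o3d.camera.PinholeCameraIntrinsicParameters.PrimeSenseDefault)
    cloud = o3d.geometry.PointCloud.create_from_rgbd_image(rgbd, intrinsic)

    # Inspect the volumetric result: there is no "behind the camera" here,
    # only points that can be orbited from any direction.
    o3d.visualization.draw_geometries([cloud])

Fusing many such frames, each taken from a different vantage point, is what lets the archive grow from a flat feed into a full-scale environment.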

http://www.current.cam

http://current.cam/film

https://hubs.mozilla.com/TbD6TUn/current-cam

Poster



Details

Team members : Provides Ng, Eli Joteva, Ya Nzi, Artem Konevskikh

Supervisor : Prof. Benjamin Bratton

Institution : Strelka Institute of Media, Architecture and Design

Funding agencies : Strelka Institute of Media, Architecture and Design

Descriptions

Technical Concept : The proposed pipeline is a feedback loop of human-machine interactions. A livestream includes image and metadata, which can be extracted for environment and event reconstruction. Machine learning can estimate what lies behind a foregrounded object, which pairs well with photogrammetry frameworks that calculate geometry from vantage points. ‘Current’ also experimented with AI image processing such as autoencoders, which can help fill in missing information on texture maps based on archived data, while object detection can help generate scene descriptions. The output volumetric data is then fed into personalisation algorithms that label, rank, and deliver recommended content by collaborative filtering. Finally, the output is pulled on demand into displays, volumetric navigation engines such as VR devices, accessed over a network.
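
A minimal sketch of the collaborative-filtering stage at the end of this pipeline is given below, assuming nothing beyond numpy; the ratings matrix, stream labels, and viewer index are hypothetical placeholders, not data from the project.

    import numpy as np

    # Rows are viewers, columns are archived volumetric streams;
    # entries are watch-time scores (0 = unseen).
    ratings = np.array([
        [5.0, 3.0, 0.0, 1.0],
        [4.0, 0.0, 0.0, 1.0],
        [1.0, 1.0, 0.0, 5.0],
        [0.0, 1.0, 5.0, 4.0],
    ])
    streams = ["protest", "harbour", "market", "festival"]

    def recommend(user, ratings, k=2):
        """Rank unseen streams for one viewer by similarity to other viewers."""
        # Cosine similarity between this viewer and every other viewer.
        norms = np.linalg.norm(ratings, axis=1) * np.linalg.norm(ratings[user])
        sims = ratings @ ratings[user] / np.where(norms == 0, 1, norms)
        sims[user] = 0.0  # do not compare the viewer with themselves
        # Predicted score: similarity-weighted average of others' scores.
        scores = sims @ ratings / (sims.sum() + 1e-9)
        scores[ratings[user] > 0] = -np.inf  # hide already-watched streams
        return np.argsort(scores)[::-1][:k]

    for idx in recommend(user=1, ratings=ratings):
        print("recommended stream:", streams[idx])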

Visual Concept : The outsourcing of imagination to AI can most readily be observed in the cultural phenomena of deepfakes and deep dreams. The project experimented with Generative Adversarial Networks (GANs) and autoencoders to simulate visuals that are uncanny to the mind. These neural networks allow the compositing of multiple visual data inputs, generating infinitely long single takes that redefine the cinematic cut. While humans make curatorial decisions between multiple sources of data, the machine estimates between images and fills in the voids - a human-machine interaction, an outsourcing of imagination.
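
The sketch below illustrates the latent-space blending behind such infinite single takes, assuming PyTorch. The tiny autoencoder is untrained and the two frames are random tensors standing in for source shots; in practice the network would be trained on archived footage, so that walking between two latent codes yields a continuous morph rather than a hard edit.

    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Compress a 3x64x64 frame to a small latent code and back.
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU())
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid())

        def forward(self, x):
            return self.decoder(self.encoder(x))

    model = TinyAutoencoder().eval()
    frame_a = torch.rand(1, 3, 64, 64)  # stand-ins for two source shots
    frame_b = torch.rand(1, 3, 64, 64)

    with torch.no_grad():
        z_a, z_b = model.encoder(frame_a), model.encoder(frame_b)
        # Walking between the two codes yields in-between frames: the "cut"
        # becomes a continuous morph rather than a hard edit.
        for t in torch.linspace(0, 1, steps=5):
            blended = model.decoder((1 - t) * z_a + t * z_b)
            print("frame at t=%.2f:" % t.item(), tuple(blended.shape))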
