Marginalia's new blog
Marginalia Project's blog now has a new address: http://www.marginaliaproject.com/
Besides changing our address, we have redesigned the whole site [which is now powered by WordPress] and added some categories that provide further information on Marginalia's past, current and future projects.
The archives contained here have been fully transferred to our new domain and can be accessed directly there. Therefore, from now on we won't be updating this blog anymore.
Please update your bookmarks.
http://www.marginaliaproject.com/
marginalia at siana brazil 2009
From June 30th until July 12th, Marginalia Project participates in Siana Brazil 2009 [International Week for Digital and Alternative Arts], the Brazilian version of the renowned French event in its first edition abroad, held in the city of Belo Horizonte. In the collective exhibition at Palácio das Artes, Marginalia presents the installation Anamorfose I, a second installation version of the Chronotopic Anamorphosis project, developed last year by the group. In this exhibition, the installation has a door built into it, in reference to the most popular part of the demo video released last year on Vimeo.
We're getting to know great people and great works. Thanks to the organization and artistic coordination of the festival!
chronotopic anamorphosis at guignard art school
In early June, Marginalia Project participated in the collective exhibition of the Amostra de Vídeo Guignard, a week dedicated to video projects at the Guignard Art School of the State University of Minas Gerais, Brazil. The week featured all kinds of interesting video projects, from installations and performances to experimental and documentary videos, and was organized by students of the institution.
At the school's gallery, Marginalia presented a simple installation-like version of its popular software Chronotopic Anamorphosis, made about a year ago, in 2008, which soon became a hit on Vimeo through its demo video. For this exhibition, the software was ported to the openFrameworks libraries, gaining faster video processing through the advantages of the C++ language. With a webcam-enabled computer and a projector, the public could interact with the system in real time and distort themselves in the projected image.
Thanks to Nairza for the invitation, and to everyone else at the school who worked hard to make it happen.
marginalia beyond the andes: Interactivos? Lima '09
Interactivos? is a project that has been developed for some years now by Medialab-Prado as a temporary and itinerant lab for the development of creative projects with a technological base. it had already been held in cities like Madrid, Mexico City and New York and, last month, it came to South America for the first time, coordinated by Marcos García and Laura Fernández, from Medialab-Prado; produced by Mónica Cachafeiro, also from Medialab; and with teachers Diego Diaz and Clara Boj [Spain], Julian Oliver [New Zealand] and Kiko Mayorga [Peru].
at the beginning of the year, there was an international open call for projects to be developed during the two weeks of the lab. eight proposals were selected and, next, there was an open call for collaborators interested in working on one or more of the selected projects. I enrolled as a collaborator on the project Frame Seductions, by artist Pierre Proske [Australia] and, traveling to Lima, I joined the project's collaborator team, which also included Federico Andrade [Argentina - Modular collective] and Billie Pate [USA].
in Frame Seductions, we developed an interactive audiovisual piece in which portions of a panoramic video were shown according to the relative position of the visitor in front of the screen. the visitor's movements in the space thus changed the section of the image that was seen, making it possible to see beyond the framing of the image - at least virtually, because, even though there was image beyond the edges of the screen, it wasn't infinite and had its own framing. curiously, even though Pierre didn't know it in advance when he conceived the project, the relation between the spectator and the image is precisely the one sought - without success - by the character from Les Carabiniers, Godard's film, at his first cinema screening [see the previous post marginalia 0.2 - cinematographic appropriations].
from the start, my interest in Pierre's project came from the similarity between its conceptual proposal and our own at marginalia project, as well as from a personal interest I have in new media works that are closely related to, and reflect on, the "old media". in the case of Frame Seductions, we worked with framing, one of the most fundamental elements of the visual arts and, particularly, of the technical image and cinema. it is precisely through the act of framing that the artist establishes the two most important dimensions of cinematographic language: the field and the out-of-field; the visible and the invisible. and, in the case of cinema, the out-of-field also has a 'destabilizing' role in relation to the filmed scene, being one of the most important elements in the ellipsis of film grammar and in the creation of suspense - as in the classic Rear Window, by Alfred Hitchcock.
in the conception and execution of Frame Seductions, we tried to work on those aspects conceptually through a play of tension between the field and the out-of-field, operated at the moment of interaction. in our discussions, we understood that the mobility of the framing does not imply the extinction of those tensions, but rather their increase. it was in this sense that we worked on the images and on the responses of the system to the visitor's participation.
my practical role in the project's development was related to video production and processing, studying different techniques for the creation of panoramic videos, as well as different compression settings that would allow rapid processing by the system. the programming was done by Pierre, using the openFrameworks libraries and algorithms for face detection and tracking. over the course of the process, I could also work on the conception of, and the system for, a modularity of the video image, splitting it into sections that could be swapped while the spectator wasn't looking at that part. this way, the out-of-field image became less stable, creating some interesting surprises in the course of the interaction [more info about the project at the workshop's wiki].
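to give a rough idea of the mechanism, the panning principle could be sketched in a few lines of Processing, with mouseX standing in for the tracked position of the visitor's face [the actual piece was programmed by Pierre in openFrameworks with face tracking, so this is only an illustrative sketch, and the image file name is a placeholder]:

// minimal sketch of the Frame Seductions panning idea [illustration only]:
// a wide panoramic image is shown through a narrower window whose position
// follows the viewer; here mouseX stands in for face tracking.

PImage panorama;

void setup() {
  size(640, 360);
  // placeholder: any image considerably wider and at least as tall as the window
  panorama = loadImage("panorama.jpg");
}

void draw() {
  background(0);
  // map the viewer's horizontal position to an offset inside the panorama
  float offset = map(mouseX, 0, width, 0, panorama.width - width);
  // crop the visible section of the panorama and show it
  PImage section = panorama.get(int(offset), 0, width, height);
  image(section, 0, 0);
}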
at the end of the workshop, I also had the opportunity to briefly collaborate with the Masa project, by Andrea Sosa [Argentina] and Rolando Sánchez [Peru], offering some consultancy on compression, resolution and other technical video matters.
as a parallel activity of the workshop, I also took part in Lima's Pecha Kucha Night, together with the other collaborators. Pecha Kucha is a kind of project presentation evening in which each presenter must use 20 slides programmed to last exactly 20 seconds each, totaling a duration of 6min 40sec. trying to work directly with the imposed limitations, my presentation was titled "0.05 fps" and I showed some images of my video "Paik for Kids" while talking a little about my motivations for working with audiovisual and new media. the response from everyone there was really good, and my presentation ended up on the Pecha Kucha Daily site, of the international Pecha Kucha community.
participating in the workshop in Lima was an incredible learning, collaboration and research experience. it really created a perfect environment for collective experimenting, and I hope to have the opportunity to participate in other editions of the event. it was also great to meet the other participants - people with interests so close to ours, from other parts of the world, especially Latin America. many of the people there heard me say a few times how we latin americans [and particularly Brazilians!] know so little of what happens around us and, because of that, miss great opportunities to work together with our continental neighbors. I hope that the contacts established with people from Argentina, Chile, Peru, Colombia and Mexico can become partnerships and collective projects in the near future.
diachronic blending - experiment
diachronic blending from Marginalia Project on Vimeo.
we have some great news to announce. but before we do that, we would like to introduce the first experiments with diachronic blending - developed in Processing - by pedro veneroso. this particular video is composed of four different manipulations of the same video sequence and represents the first test of the software.
the diachronic blending effect consists of merging frames along a time progression, using a set of different merging techniques to blend an array of frames individually each time the screen is rendered. the outcome is a fluid and distorted succession of images that vaguely documents the way in which movement evolves, while transfiguring both actions and forms into mutable and organic shapes.
furthermore, for this particular video a 'blur' class was created to apply a randomly generated blur to each frame of the video, in an attempt to develop organic-ish forms based on the movements of rigid structures.
so far the code for the software that will control this effect is still under development; as soon as we have it done we will release both the source code and the software.
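in the meantime, the general idea - blending a buffer of recent frames over each other with decreasing opacity, so that movement leaves fluid traces - could be sketched roughly as follows [this is only an illustration of the principle, not the software under development; the buffer size and opacity values are arbitrary, and a live camera stands in for the video footage]:

// rough illustration of the diachronic blending idea [not the actual software]:
// a ring buffer of recent frames is drawn back-to-front with increasing
// opacity, so movement leaves smeared, fluid traces on screen.

import processing.video.*;

int BUFFER_SIZE = 12;                  // arbitrary: how many past frames are kept
PImage[] buffer = new PImage[BUFFER_SIZE];
int head = 0;
Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    buffer[head] = cam.get();          // store a copy of the newest frame
    head = (head + 1) % BUFFER_SIZE;
  }
  background(0);
  // draw the stored frames from oldest to newest with increasing opacity
  for (int i = 0; i < BUFFER_SIZE; i++) {
    PImage frame = buffer[(head + i) % BUFFER_SIZE];
    if (frame != null) {
      tint(255, map(i, 0, BUFFER_SIZE - 1, 30, 255));
      image(frame, 0, 0);
    }
  }
  noTint();
}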
-
for these experiments we used raw footage from the feature film 'os residentes' [yet to be released], by tiago mata machado: actress simone sales brushing her hair violently.
marginalia goes to AEurásia
For the past few months the Marginalia Project team has been involved in the development of a series of communicating pieces of software [we call the whole system AETeletransporter] that makes the project AEurasia work.
AEurasia is a Belo Horizonte-based interactive video installation by Cláudio Santos, Alessandra Soares, André Amparo and Fernando Maculan, exhibited at Espaço 104 from December 12th to 20th, 2008. A brief description:
“The video installation AEurasia presents a trip around the world inside Belo Horizonte, Brazil. Specific locations were selected in the city based on the correspondence between their names and diverse chosen locations around the world. The diagram resulting from the connection of the points representing each elected spot is a map tracing the world inside a city. Four main concepts guide the spectator's experience: difference, distance, time and mobility. A stimulus to get to know better the city we inhabit.”
Seven videos are presented to the spectators, sketching objective and subjective relationships between corresponding places, one of which is a distant city while the other is a location in Belo Horizonte. An eighth projection, summarizing the whole installation, responds to the actions of the spectator, capturing his image in real time and placing him in the video he watches by cropping only his body from the surroundings [this procedure is mediated by a piece of software we call AESoulCaptor, which is basically responsible for doing a real-time chroma key]. By doing so, the spectator is merged into a distant location as if he had visited it: while he can enjoy interacting in real time with all the locations, a postcard is produced as a souvenir of his 'traveling' experience inside AEurasia, this imaginary continent.
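The core of a real-time chroma key like the one AESoulCaptor performs can be sketched in a few lines. The sketch below is written in Processing for brevity, while AESoulCaptor itself runs on openFrameworks/C++; the key color, threshold and background file name are arbitrary assumptions, not the installation's actual values:

// minimal chroma-key sketch [illustration only - AESoulCaptor was written in
// openFrameworks/C++]: camera pixels close to the key color become transparent,
// so the spectator's body is composited over an image of a distant location.

import processing.video.*;

Capture cam;
PImage backdrop;                          // the distant location
color keyColor = color(0, 255, 0);        // arbitrary green key
float threshold = 100;                    // arbitrary color-distance threshold

void setup() {
  size(640, 480);
  backdrop = loadImage("distant_city.jpg");   // placeholder file name
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) cam.read();
  image(backdrop, 0, 0, width, height);
  cam.loadPixels();
  PImage keyed = createImage(width, height, ARGB);
  keyed.loadPixels();
  for (int i = 0; i < cam.pixels.length && i < keyed.pixels.length; i++) {
    color c = cam.pixels[i];
    // distance of this pixel from the key color
    float d = dist(red(c), green(c), blue(c),
                   red(keyColor), green(keyColor), blue(keyColor));
    // keep the pixel opaque only if it is far from the key color
    keyed.pixels[i] = (d > threshold) ? c : color(0, 0, 0, 0);
  }
  keyed.updatePixels();
  image(keyed, 0, 0);
}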
The spectator can then retrieve his souvenir when leaving the installation, using a terminal running the software we call AECheck-in, which presents every image produced that day so that the spectator can choose his favorite photo, which can then be sent to himself or others by e-mail and uploaded to AEurasia's Flickr.
Among the many pieces of software developed by Marginalia for this project, two main applications are of greater importance: AESoulCaptor and AECheck-in, developed, respectively, in C++ using the open source library openFrameworks, and in Processing 1.0.1.
This project represents Marginalia Project's first attempt at working with the openFrameworks library for C++ and with AppleScript, with outstanding results so far; Processing was employed to develop a user-friendly interface, which is also the group's first attempt at this sort of programming practice [interface-based software]. These new fields of work are of major importance from now on, since Marginalia will be developing the projects to come [soon to be announced] mainly in openFrameworks and taking advantage of Apple features such as AppleScript, Quartz Composer and Automator.
AETeletransporter is a system working simultaneously on two connected terminals, composed of the following software: AESoulCaptor [openFrameworks], AECheck-in [Processing], AESender [AppleScript] and AEConnect [AppleScript]. In addition to developing this system, Marginalia Project worked on the making of the videos and on the documentation of the installation.
More information on Marginalia's new projects will be given in the following posts, which are intended to briefly cover the present and future projects of the group.
General view of the installation showing the chroma key region.
marginalia goes to são paulo
as a follow-up to the already announced first prize achieved by marginalia project at the Technological Connections Festival, held by the Sergio Motta Foundation, from São Paulo, the group went to São Paulo for a few days for the festival's awards ceremony, at the Latin America Memorial, on the 14th of august.
There, we had the opportunity to set up the winning prototype for presentation to the public. With a better structure than in previous tests, the presentation came out pretty well, and it was interesting to observe the positive response from the public.
The festival's organization also made available, at the ceremony, a preview of the festival's e-book, containing information about the projects and texts written by the festival's commission and jury. We highlight a text by José Cabral Filho, professor at the Architecture School of the Federal University of Minas Gerais, who wrote specifically about marginalia and whose text we hope to make available on this blog soon.
us at the exhibition of the awarded prototype marginalia 1.0 BETA
public exploring the installation.
we and the other people awarded at the festival.
marginalia project in first place
The awards ceremony will happen on the 14th of August at the Latin America Memorial, São Paulo.
marginalia project finalist at technology festival
The festival will award the best technological projects developed by Brazilian college students from areas such as Architecture, Art, Communication, Computer Science, Design and Engineering. The final result is to be announced by the 7th of July by the final jury: Giselle Beiguelman, Marion Strecker, and Ricardo Ribenboim.
More information about the festival [portuguese] here.
Chronotopic Anamorphosis - experiment
Chronotopic Anamorphosis from Marginalia Project on Vimeo.
marginalia project also includes some parallel and autonomous projects developed by the group members. This video demonstrates a piece of software developed by André Mintz as a programming exercise.
The objective was to transpose to the Processing programming environment [www.processing.org], for real-time exhibition, the effect created by Zbigniew Rybczynski in his 1988 movie "The Fourth Dimension". The video image is fragmented into horizontal lines and then lines from different frames [different times] are combined in the same displayed moment, creating a distortion effect from the movement of people and objects through time. It's what Brazilian researcher Arlindo Machado calls 'chronotopic anamorphosis'.
The program still has some memory issues, causing some bugs in the image, especially when the image display is combined with video recording. We're working on this issue, but, meanwhile, it's still fun.
We'll soon make available a link to the code. For now, it can be accessed directly from Processing's forum.
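As a rough illustration of the principle [a simplified sketch, not the code posted on the forum], a Processing sketch can keep a buffer of recent camera frames and, for each horizontal line of the output, copy that line from a progressively older frame; the buffer size below is arbitrary:

// rough sketch of the chronotopic anamorphosis principle [illustration only]:
// each horizontal line of the output is taken from a different moment in time,
// so the movement of people and objects gets smeared across the frame buffer.

import processing.video.*;

int BUFFER_SIZE = 120;                 // arbitrary: how far back in time we reach
PImage[] frames = new PImage[BUFFER_SIZE];
int head = 0;
Capture cam;

void setup() {
  size(640, 480);
  cam = new Capture(this, width, height);
  cam.start();
}

void draw() {
  if (cam.available()) {
    cam.read();
    frames[head] = cam.get();          // store a copy of the newest frame
    head = (head + 1) % BUFFER_SIZE;
  }
  loadPixels();
  for (int y = 0; y < height; y++) {
    // the lower the line on screen, the older the frame it is taken from
    int age = int(map(y, 0, height - 1, 0, BUFFER_SIZE - 1));
    PImage src = frames[(head - 1 - age + 2 * BUFFER_SIZE) % BUFFER_SIZE];
    if (src == null) continue;         // buffer not yet filled
    src.loadPixels();
    // copy one horizontal line from the chosen frame into the output image
    arrayCopy(src.pixels, y * width, pixels, y * width, width);
  }
  updatePixels();
}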
marginalia 1.0 BETA - first test
Marginalia 1.0 BETA from Marginalia Project on Vimeo.
The first effective version of Project Marginalia was completed with the test of the audiovisual installation Marginalia 1.0 BETA, on 5th July 2008, in a residence in the city of Córdoba, Argentina [where André Mintz is temporarily living]. The system worked well, as expected, in spite of a delayed response to the gestures made by the spectators, caused primarily by the system's unsatisfactory processing time and the limitations of Java code for real-time video response.
Developed by Project Marginalia's team in the Processing environment, the software GestureMapping [version 1.1.1] was used to create an installation in which the spectator is able to define the area in which a projection occurs by directing the light of a torch at specific areas of the screen. A camcorder captures this intervention and communicates with a computer that interprets the received data. The software then recognizes the illuminated area and uses this mapping to generate the final image, which is projected by a multimedia projector onto the screen.
A group of spectators - composed of other residents of the building - was surprised and satisfied with the achieved results, reacting to the installation in various ways. Most used the system to explore the projected image itself, using the torch light to reveal what was hidden in the darkness of the room they had entered, while others enjoyed the possibility of drawing diverse shapes and their names with the torch light.
For more information on this project, please check the previous posts tagged marginalia 1.0; to learn the history of its development, explore the whole blog.
marginalia 1.0 BETA - the footage which was used
Marginalia 1.0 BETA - video image used in the test from Marginalia Project on Vimeo.
In the first test of project Marginalia 1.0 BETA, which took place in Córdoba, Argentina, the spectators illuminated portions of a screen with a torch to reveal a hidden video [available for streaming above]. Despite the simplicity of the image, it was chosen for several reasons, presented further on in this post, as part of the process of creating the installation.
The main reason for using this video to test the project was the existence of a text in the image, which the authors judged one of the simplest and most effective ways of guaranteeing the spectators' interest in exploring the image. As predicted, once the spectators noticed the existence of the text, reading it became their main motivation for exploring the installation, turning it into a sort of mission within their experience of the work.
An interesting idea for the concept of the work was provided by the content of the text. The phrase that can be read on the wall, written in Spanish and of unknown authorship, states: “There are known and unknown things and in between there are doors”. Opening doors, the action this graffiti apparently stands for, is analogous to some of the ideas that nurture our project, asking the spectator to interact actively with the work in order to see the images on exhibition. If the spectator does not engage with what is expected of him, the images may not be seen and, therefore, will not be apprehended.
“68/08”, the date inscribed in the graffiti, indicates the forty years that have passed since the events of 1968 took place in several countries, and defines a horizon for interpreting the text linked to the ideals of that moment [the cultural contestation of the academic, religious, moral and political values of a generation], orienting the call for participation towards a wider plane of questioning beyond the purposes of the exhibition itself.
The choice of recording a fixed, static shot was also made for several reasons. It was necessary to guarantee that the spectator could orient himself visually, as the apprehension of the image would be restricted to small visible portions, and that he could organize the exploration of the image, seeking what remained unknown to him. A secondary reason for this choice was keeping the video file small in size, facilitating its processing by the software GestureMapping. We also chose this format in order to avoid much movement in the image, creating an atmosphere in which the spectator, while reading the text, could be surprised by a person walking in front of the wall, breaking with his initial idea that the image was completely static.
While testing the installation, however, the image presented some issues that will undergo further study and analysis for the making of a new version. The main problem perceived was the high contrast between the wall on the left [light] and the background on the right side of the image [dark], which caused the spectators to think there was no image on the darker side of the video, identifying the limit suggested by the wall as the limit of the video itself. The movement of people walking in front of the wall proved to be uninteresting due to speed issues, and in most cases was not perceived by the spectators.
From now on, it is necessary to explore better ways of dealing with the spectators' interest in the image and with the concepts it should present to the viewer, with particular attention to what we intend to present with this work beyond its interactive, formal and aesthetic proposals. Among the diverse aspects of the project, its conceptualization and physical arrangement are still to be explored in detail, constituting the main concerns of further stages of the project's development.
marginalia 1.0 BETA - GestureMapping software / source code
marginalia 1.0 BETA - GestureMapping software
The software GestureMapping [version 1.1.1], developed in the Processing environment by Project Marginalia's team, is responsible for interpreting data in order to create a projection in the space of the installation planned for the project.
The visual/experiential interface of the installation consists, on the surface, of a projection with which the spectator is able to interfere by using illumination devices; torches will be supplied in the installation for this purpose. The processing structure is composed of interconnected equipment responsible for capturing and exhibiting video in real time, making it possible for the spectator to interact with the system.
In order to capture video in real time, a camcorder that records a specific projection area is connected to a computer through a FireWire port; the computer interprets the received data using the software GestureMapping [version 1.1.1], which processes the images and merges the real-time video with a video loop; finally, a projector projects the result of the software's intervention, in real time, exactly over the same area captured by the camcorder.
In this process, the interaction between all the hardware elements is mediated by the software GestureMapping [version 1.1.1]; its functions are:
- to receive data from the camcorder;
- to interpret the captured image following predefined parameters;
- to merge the visual data received from the camcorder in real-time with frames of a video loop;
- to output a real-time video projection.
First, the software creates a set of values [an array] with as many elements as the total number of pixels of the captured video, indexing each element of the array to a specific pixel of the captured image. The software then interprets each frame of the video, updating the elements of the array based on the brightness of each pixel according to a predefined threshold value. The result of the comparison between the brightness of the pixel and the threshold value [greater, lesser or equal] determines which mathematical operation is used to modify the value of the corresponding element of the array. If the brightness of the pixel is greater than or equal to the threshold value [if the spectator illuminates this area], the value of the corresponding element of the array increases; if the brightness is lesser than the threshold value, the value of the corresponding element decreases [being multiplied by a factor lower than the one used in the first case].
This procedure is repeated for each pixel of the image. Once every pixel is analyzed, the values stored in the array are transposed to the pixels of the image that will be merged with the video loop, determining the brightness of each of these pixels on a scale that ranges from 0 [black] to 255 [white]. Once completed, the procedure starts again for the next frame of the captured video, keeping the values stored in the array during the analysis of one frame to be used in the operations on the next. By transposing the values stored in the array to the pixels of the image following these premises, a trail is produced that gradually fades, according to the incidence of light on specific portions of the image.
The image produced from the data stored in the array is then merged - using multiply - with the hidden video loop. This image serves as a mask that defines the visible and invisible areas of the video. Merging the video with the mask using the multiply method allows the black areas to remain black, making it impossible to see any layer that lies behind the mask, while the white areas define the opacity of the layer, making it possible to see the video in those areas.
Finally, the projected image, after being processed by the software, is updated, completing the cycle.
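In Processing terms, the core of this cycle could be sketched roughly as follows [a simplified illustration of the algorithm described above, not GestureMapping itself; the threshold, increment, decay factor and video file name are arbitrary assumptions]:

// simplified sketch of the GestureMapping principle [illustration only]:
// bright (torch-lit) camera pixels raise a per-pixel value, dark pixels let it
// decay; the resulting mask is multiplied with a hidden video loop, revealing
// the loop where light has been shone and leaving a slowly fading trail.

import processing.video.*;

Capture cam;
Movie loopVideo;                       // the hidden video loop
float[] mask;                          // one value per pixel, 0..255
float threshold = 200;                 // arbitrary brightness threshold
float rise = 20;                       // arbitrary increment when lit
float decay = 0.97;                    // arbitrary fade factor when not lit

void setup() {
  size(640, 480);
  mask = new float[width * height];
  cam = new Capture(this, width, height);
  cam.start();
  loopVideo = new Movie(this, "loop.mov");   // placeholder file name
  loopVideo.loop();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  if (cam.available()) cam.read();
  cam.loadPixels();
  loopVideo.loadPixels();
  loadPixels();
  for (int i = 0; i < mask.length; i++) {
    // raise the mask value where the camera sees light, let it fade elsewhere
    if (i < cam.pixels.length && brightness(cam.pixels[i]) >= threshold) {
      mask[i] = min(255, mask[i] + rise);
    } else {
      mask[i] *= decay;
    }
    // multiply the mask with the loop frame: black hides, white reveals
    if (i < loopVideo.pixels.length) {
      color c = loopVideo.pixels[i];
      float k = mask[i] / 255.0;
      pixels[i] = color(red(c) * k, green(c) * k, blue(c) * k);
    } else {
      pixels[i] = color(0);
    }
  }
  updatePixels();
}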
marginalia 0.3 - origins and later productions
orage dans une tasse de thé from Pedro Veneroso on Vimeo.
orage dans une tasse de thé was first conceived out of the desire to animate the homonymous photograph [which can be seen in the post marginalia 0.1 - photography experiments], creating a short film that explores the aesthetics of long and multiple exposure while dealing with the possibilities of associating photographs and drawings in a digital environment.
In 2006, a research into photography techniques began, guided by the use of alternative illumination techniques that allowed controlling the exposure of specific portions of the image [scene], as well as the use of the light source as an element of the composition. Space and time, and their manifestation in photography, became important factors brought into evidence by this technique, as the resulting photograph is intrinsically dependent on these concepts.
Later developments of these techniques include: the employment of similar techniques in different mediums [such as video]; the exploration of the aesthetics of long and multiple exposure across diverse media; and, recently, the marginalia project, which summarizes the authors' whole production on the subject so far, specifically regarding the techniques of long and multiple exposure, combining video, photography, conceptual development and programming to produce an interactive installation in constant development.