The So Far of ‘Shooting with the Interface’

In this post, Anand Kumar Jha, one of the researchers who received the Social Media Research grant for 2014, introduces his proposed work.


[Image: Anand Kumar Jha's research plan]

It was during a brief vacation in March that I started reading about the technological footprints of big data. Interfaces have collected data from users for years, for purely functional and legal reasons. I remember filling a long page of questions to get my first email account in 1999, in a dingy cybercafé in Paharganj. I have changed mail IDs since then, and forms have become shorter; they auto-populate relevant text and autocorrect any mistakes, textual or logical. Every interaction is a movement in a virtual time and space, a spectacle for an eye. Actions, behaviors and events in physical time and space create subjects for the camera. The camera exercises its discretion, locates a specific frame of interest and cuts it out of the flux of the event. It creates an image. The image is then re-contextualized to create a simulacrum [1], a simulacrum that stimulates actions, behaviors and events in real time and space.

I could see this being played out in the virtual space. Users access the web through the interface, generating an event captured as a log on the server. Millions of these events get parsed through filters to be analyzed on dashboards, another interface. Dashboards then optimize the user-facing interface to streamline and increase the number of events happening. Having worked in interfaces and big data for a while, I was interested to find out if there were more ways in which an interface mimics the camera.
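This feedback loop can be sketched in a few lines of code. The log format, item names and popularity rule below are purely illustrative assumptions, not any real system's schema: user actions become log lines, the log is aggregated into a "dashboard" view, and the aggregate is fed back to reorder the user-facing interface.

```python
# Hypothetical sketch: event log -> dashboard aggregation -> interface optimization.
from collections import Counter

raw_log = [
    "u1 click product_a",
    "u2 click product_b",
    "u3 click product_a",
    "u1 click product_c",
    "u4 click product_a",
]

def parse(log_lines):
    """Parse raw log lines into (user, action, item) events."""
    return [tuple(line.split()) for line in log_lines]

def dashboard_counts(events):
    """Aggregate events per item -- the 'dashboard' view of user behavior."""
    return Counter(item for _, _, item in events)

def optimize_interface(items, counts):
    """Reorder interface items by popularity, closing the feedback loop."""
    return sorted(items, key=lambda i: counts[i], reverse=True)

events = parse(raw_log)
counts = dashboard_counts(events)
layout = optimize_interface(["product_a", "product_b", "product_c"], counts)
print(layout)  # product_a surfaces first because it drew the most events
```

The point of the sketch is the circularity: the layout that users see on the next visit is itself a product of their previous events, which in turn shapes the next round of events.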

Therefore the proposal.

Research Questions (Re-Interpreted from the Proposal)

1. How does the interface, which by design carries the same technological bias as the camera, make an image? The relationship between the viewer, the viewed and the mediator.

The camera and the interface both exhibit a black-box character. People who use them know very little about how they work. That black box ceases to exist after a point in the interaction with a human. What remains is a very simplified, distorted surface personality of the product, which exhibits a willing, slave-like behavior, rarely giving any idea of the intelligence or intent it possesses. This research question looks at the relationship of the interface-camera with the user, bringing forth the question of agency and intent.

2. How does the interface, which by design carries the same technological bias as the camera, make an image? The mechanics of the black box.

Another area that draws a parallel with the camera is the act of snapping to a desired location in the frame: travelling a physical distance by shrinking the optical scope of the frame. This action is invisible in an interface. Since the viewer/end user is only a consumer of this image, the act of snapping to a particular piece of content is premeditated, often with the help of cookies that parse user preference and behavioral data, and often with collaborative filtering algorithms which take the user to a specific place in the larger grid that "should be" relevant to him/her. Opening up the camera and opening up the interface would reveal the components, their relationship with each other and the politics behind their being.
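A minimal sketch of that "snapping" follows: a co-occurrence-based collaborative filter that takes the user to content that "should be" relevant. The viewing histories and the similarity rule are illustrative assumptions, far simpler than anything a production recommender would use.

```python
# Hypothetical sketch of collaborative filtering by co-occurrence.
from collections import defaultdict

# Who viewed what (invented behavioral data, e.g. harvested via cookies).
history = {
    "u1": {"lens", "tripod"},
    "u2": {"lens", "film"},
    "u3": {"tripod", "film"},
    "u4": {"lens", "tripod", "bag"},
}

def recommend(user, history):
    """Score unseen items by how strongly they co-occur with the user's items."""
    seen = history[user]
    scores = defaultdict(int)
    for other, items in history.items():
        if other == user:
            continue
        overlap = len(seen & items)       # similarity to the other user
        for item in items - seen:         # items the target user has not seen
            scores[item] += overlap
    return max(scores, key=scores.get) if scores else None

print(recommend("u2", history))
```

Note where the agency sits: the user never asks for the recommendation; the grid position they are snapped to is computed entirely on the other side of the interface, from data they did not knowingly provide.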

What Happens When?

The engagement is divided into three stages: secondary research, primary research, and finally analysis and presentation. Secondary research will entail studying the available state of the art, specifically texts around the interface and image making, and the technological aspects of back-end frameworks in big data, predominantly in the e-commerce and social networking space (MapReduce, Hive, Hadoop, etc.). This activity is planned for a duration of two months from the date of commencement of the fellowship. Before the second stage begins, the scope of the study would have narrowed and the research questions been fine-tuned to gain the required depth in the enquiry. This stage, primary research, would involve open-ended discussions with data scientists, big data architects and designers around the areas identified at the end of secondary research. This activity, including recruiting the participants, making the interview protocol, data collection and high-level analysis, is scheduled to take two months. There would be a transitional overlap with the last activity, which involves compiling the findings and presenting them through presentations, reports and installations. This last activity would need two months and would close the engagement.


[Mindmap of the research plan; see the interactive version [2].]


[1] Baudrillard, Jean. 1995. Simulacra and Simulation. University of Michigan Press.

[2] The interactive version of the mindmap can be accessed from here:

Published on: June 27, 2014


  • Sandeep Mertia:

    Hi Anand,

    I’m intrigued by your project proposal. I don’t know much about the area you’re working in, so please pardon my ignorance. I would be thankful if you could explain in a bit more detail, or direct me to some text, on the similarities between camera and interface. Intuitively, both seem to have very different affordances and functionalities — for example, clicking a picture with a camera seems to me a very different "interaction", from both user and tech perspectives, from using a computer interface to tweet!

    Also, could you please add some more detail on what kind(s) of ‘interface’ you wish to focus on?

    Thanks. Btw, great mindmaps! :)

  • Anand Kumar Jha:
    Hi Sandeep,

    Thanks for your comment. I guess the origin of the questions could be traced to the part of the proposal which says that the camera and the interface have the same technological bias. One here is not discussing any specific interface, but rather the concept of an interface. The camera has been debated at length as being a vector with actors at both ends; however, it very often constitutes the blind spot for each of the actors. The one who is being clicked does not know what is being clicked. The camera thus becomes the knife by which we slice an event, and within the event we slice the frame, we being the image takers. The image thus becomes de-contextualized and ready to be integrated with any suitable rhetorical stream, distorting the reality that it was a part of and being constitutive of a reality that will get constructed for public consumption. This is the idea of image making with the camera as a vector.

    There is another part to the camera, which is that of being a progressive black box. From its origins, when the maker was most dominantly the user, clearly understanding how the camera works, to the current state, where the camera hardly replicates straight-line optics but delves into image reprocessing and pixel enhancement, the user has very little understanding of how the camera works, except being fooled into a simplistic mental model.

    The interface is similar to the camera in both regards. It is a vector. There are people who use the interface to get a specific task done, and in order to get the task done, they perform certain actions over it or provide certain information. The higher the usage of the interface, the more this information and these actions become indicators, or a sample, of the larger social set that interacts with the interface, creating a pool of data that is then analysed by the people on the other side of the vector to learn, or in other words to see, the pose of the user.

    With the advent of paradigms like MapReduce and platforms like Hadoop, big data has created a high degree of fidelity in this process of image making. This image/data is then used to create another reality or tweak an existing one, which can be seen on a day-to-day basis with e-commerce sites showing more products similar to what you have bought, airline booking sites showing you tickets based on your frequently flown routes and carriers, and so forth (also nesting in it the argument of camera- or internet-driven surveillance). This is the vector part of the interface. As for the black box: from the ENIAC era till the pre-GUI phase, the users were the makers, and one using an interface almost understood how the product worked. Post-GUI, the interface has moved towards being a black box which produces a simplified mental model for the end user. One using Google does not know the PageRank algorithm, and often also wonders what the company does to make money. So it is with Facebook and the like. Users are consumed by the presentation layer, which creates a very simplistic mental model, making the other actor of the vector invisible.

    Subsequent posts will bring a more detailed explanation of this phenomenon. Since most interfaces, from the tangible (metro access card machines) to the intangible (websites), are now in the business of parsing data for analysis, the form and the behavior of a specific interface carry little relevance in this argument.

