Epistemology should play an active role in the design of future AR systems and practices. Otherwise, AR's users may be exposed to the serious danger of being unable to tell reality and augmented reality apart.
Most modern augmented reality systems combine input from hardware components such as digital cameras, accelerometers, global positioning systems (GPS), gyroscopes, solid-state compasses, and wireless sensors with simultaneous localization and mapping (SLAM) software, which tracks the device's position and orientation so that virtual content can be anchored to the physical environment.
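To make the idea of sensor fusion concrete, the following is a minimal Python sketch, not drawn from any actual AR system, of a complementary filter that blends a gyroscope's angular rate with an accelerometer's gravity reading into a single pitch estimate; all function names and sample values are hypothetical.

```python
import math

def complementary_filter(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse gyroscope and accelerometer readings into one pitch estimate.

    The gyroscope integrates smoothly but drifts over time; the
    accelerometer is noisy but drift-free. Blending the two with a
    weight `alpha` keeps the gyro's short-term smoothness and the
    accelerometer's long-term stability.
    """
    gyro_pitch = pitch + gyro_rate * dt         # integrate angular velocity
    accel_pitch = math.atan2(accel_x, accel_z)  # gravity-based estimate (simplified 2D case)
    return alpha * gyro_pitch + (1 - alpha) * accel_pitch

# Hypothetical sensor samples arriving at 100 Hz
pitch = 0.0
for gyro_rate, ax, az in [(0.02, 0.1, 9.8), (0.015, 0.12, 9.79)]:
    pitch = complementary_filter(pitch, gyro_rate, ax, az, dt=0.01)
print(f"fused pitch estimate: {pitch:.4f} rad")
```

SLAM pipelines build on exactly this kind of fusion, at much larger scale, to keep virtual overlays registered with the physical scene as the device moves.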
The above examples make it clear that AR has the potential to permeate and enrich our everyday lives in a variety of ways. As AR technologies become less intrusive and more transparent, moving from handheld devices to AR glasses and finally to contact lenses, AR may not only penetrate every aspect of our lives but become a constant, additional layer on top of physical reality that users will be practically unable to disengage from. The short films Sight (https://vimeo.com/46304267) and Hyper-Reality (https://vimeo.com/166807261) offer good tasters of what the augmented future might soon look like.
Unlike other forms of extended cognitive systems, AR is specifically designed to generate and operate on the basis of unreal yet deceptively truth-like mimicries of the external world, in such a way that users will be unable to distinguish augmented images from actual images of the world.
AR therefore has the potential to both extend and distract our organismic epistemic
capacities.
AR developers would have to make sure that all augmentations bear features that allow them to clearly and immediately stand out from the physical elements of the world, without the need for unrealistically burdensome checks on the part of users. The design of future AR systems should not place unrealistic demands on users' cognitively integrated nature. Reality augmentations should automatically stand out as such, leaving minimal room for confusion or misinterpretation, as the sketch below illustrates.
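As a purely illustrative sketch of how such a design constraint might be enforced in software (the classes and functions below are hypothetical and correspond to no real AR SDK), a render pass could refuse to composite any augmentation that does not carry a distinguishing visual cue:

```python
from dataclasses import dataclass

@dataclass
class Augmentation:
    """A piece of virtual content to be composited over the camera feed."""
    mesh_id: str
    has_marker: bool = False  # distinguishing visual cue, e.g. an outline

def apply_marker(aug: Augmentation) -> Augmentation:
    # In a real system this might add a glow, outline shader, or badge.
    aug.has_marker = True
    return aug

def render_frame(augmentations: list[Augmentation]) -> list[str]:
    """Composite a frame, enforcing that every augmentation is marked.

    The marker is applied unconditionally, so no virtual element can
    reach the display looking indistinguishable from physical reality.
    """
    drawn = []
    for aug in augmentations:
        aug = apply_marker(aug)  # policy: marking is not optional
        drawn.append(f"draw {aug.mesh_id} (marked={aug.has_marker})")
    return drawn

print(render_frame([Augmentation("virtual_chair"), Augmentation("nav_arrow")]))
```

The point of the sketch is architectural: the marking policy lives inside the render path itself, so no application code can opt an augmentation out of it.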
If we can’t teach machines to internalise human values and make decisions based on them, we must accept – and ensure – that AI is of limited use to us.
The so-called “Value Alignment Problem” – how to get AI to respect and conform to human values – is arguably the most important, if vexing, problem faced by AI developers today.
Stuart Russell, a leading AI scientist at Berkeley, offers an intriguing solution: design AI so that it is uncertain about its goals, and let it fill in the gaps by observing human behaviour. By learning its values from humans, the AI's goals will be our goals.
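As a toy illustration of this idea, and not Russell's actual formalism (cooperative inverse reinforcement learning), here is a minimal Python sketch in which an agent begins uncertain over candidate human goals and updates its belief by observing human choices; the goals, options, and rationality model are all invented for the example:

```python
import math

# Candidate value functions the AI is uncertain over (hypothetical).
GOALS = {
    "prefers_coffee": {"coffee": 1.0, "tea": 0.0},
    "prefers_tea":    {"coffee": 0.0, "tea": 1.0},
}

def choice_likelihood(goal, chosen, options, beta=3.0):
    """P(human picks `chosen` | goal), assuming noisily rational choice."""
    utils = {o: GOALS[goal][o] for o in options}
    z = sum(math.exp(beta * u) for u in utils.values())
    return math.exp(beta * utils[chosen]) / z

def update_belief(belief, chosen, options):
    """Bayesian update of the AI's belief over goals from one observation."""
    posterior = {g: p * choice_likelihood(g, chosen, options)
                 for g, p in belief.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

# The AI starts maximally uncertain about the human's goal...
belief = {"prefers_coffee": 0.5, "prefers_tea": 0.5}
# ...and watches the human choose coffee over tea twice.
for _ in range(2):
    belief = update_belief(belief, chosen="coffee", options=["coffee", "tea"])
print(belief)  # belief now leans strongly toward "prefers_coffee"
```

The design choice mirrors Russell's core insight: because the agent never treats its current goal estimate as final, it retains an incentive to keep deferring to human behaviour rather than optimising a fixed, possibly mistaken objective.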
Stiegler discovered philosophy while in prison for robbery and was mentored by Derrida. His three-volume Technics and Time, evoking Heidegger's Being and Time, takes up the grammatological rather than the deconstructive path taken by Derrida in the 1970s. Stiegler's research on intergenerational care, pharmacology, and algorithmic governance continues with his colleagues at the IRI in Paris and around the world. I first met Bernard when he visited Madison in 2015, and I gave him a tour of DesignLab. At the suggestion of collaborator Ana Vujanovic, we reached out to him and had been collaborating on a lecture performance over the past year or so. I had tickets and a hotel reserved in Paris when COVID struck. Disappointed, we Zoomed and discussed how to proceed, including possible workshops still being pursued with IRI. He passed away last summer, of cancer. In this two-hour interview with Zero Books, Stiegler discusses Marx and Greenspan on the proletarianization of intellect achieved by IT, his rejection of defunding the police, and COVID and the positions taken on it by Žižek and Agamben. Throughout the interview, Bernard's patient passion and clarity of thought shine through. "Making a Mouk" is a short, accessible text: https://www.dropbox.com/…/Bernard_Stiegler_Making_a… https://www.youtube.com/watch?v=rd-9LPVilmM