“Using #augmentedreality (#AR) and #blockchain technologies, the game featured ‘Vatoms’, virtual objects that players can interact with.
Due to pandemic restrictions, the game, originally developed as an outdoor activity in other markets, was adapted to allow people to hunt for the packs in and around their own homes. Packs contained either PlayStation symbol chips or an instant-win prize, which would then be stored in a digital wallet.”
Imagine if it were not Doritos but bones to assemble a skeletal system (biology, nursing, medicine, etc.) or philosophers to collect (philosophy, political science, the social sciences), with rewards or badges on a leaderboard. #gaming and #gamification
Stiegler discovered philosophy while imprisoned for robbery and was mentored by Derrida. His three-volume Technics and Time, evoking Heidegger’s Being and Time, takes up the grammatological rather than the deconstructive path pursued by Derrida in the 1970s. Stiegler’s research on intergenerational care, pharmacology, and algorithmic governance continues with his colleagues at the IRI in Paris and around the world. I first met Bernard when he visited Madison in 2015, and I gave him a tour of DesignLab. At the suggestion of collaborator Ana Vujanovic, we reached out to him and had been collaborating on a lecture performance over the past year or so. I had tickets and a hotel reserved for Paris when COVID struck. Disappointed, we Zoomed to discuss how to proceed, including possible workshops still being pursued with the IRI. He passed away last summer of cancer. In this two-hour interview with Zero Books, Stiegler discusses Marx and Greenspan on the proletarianization of intellect achieved by IT, his rejection of defunding the police, and COVID and the positions taken on it by Zizek and Agamben. Throughout the interview, Bernard’s patient passion and clarity of thought shine through. “Making a Mouk” is a short, accessible text: https://www.dropbox.com/…/Bernard_Stiegler_Making_a… https://www.youtube.com/watch?v=rd-9LPVilmM
Last year, Australia’s Chief Scientist Alan Finkel suggested that Australia should become a nation of “human custodians”. This would mean being leaders in technological development, ethics, and human rights.
A recent report from the Australian Council of Learned Academies (ACOLA) brought together experts from scientific and technical fields as well as the humanities, arts and social sciences to examine key issues arising from artificial intelligence.
A similar vision drives Stanford University’s Institute for Human-Centered Artificial Intelligence. The institute brings together researchers from the humanities, education, law, medicine, business and STEM to study and develop “human-centred” AI technologies.
Meanwhile, across the Atlantic, the Future of Humanity Institute at the University of Oxford similarly investigates “big-picture questions” to ensure “a long and flourishing future for humanity”.
The IT sector is also wrestling with the ethical issues raised by rapid technological advancement. Microsoft’s Brad Smith and Harry Shum wrote in their 2018 book The Future Computed that one of their “most important conclusions” was that the humanities and social sciences have a crucial role to play in confronting the challenges raised by AI.
Without training in ethics, human rights and social justice, the people who develop the technologies that will shape our future could make poor decisions.