U.S. retail giant Walmart has applied to the U.S. Patent & Trademark Office (USPTO) to patent a blockchain system for deliveries, according to an official patent document released August 30.
Walmart has applied for a number of blockchain-related patents in the U.S. over the past year. According to Investopedia, the retailer is pursuing blockchain technology mainly to “help Walmart keep pace with its rivals,” such as Amazon.
Recently, Walmart applied for a patent on systems and methods for managing smart appliances via blockchain. The tech would allow users to customize levels of access and control for appliances such as portable computing devices.
In mid-July, the retail giant patented the technology for a blockchain-powered delivery management system that can keep delivered items safe until their purchasers are able to sign for and collect them.
After Kubernetes, Istio is the most popular cloud-native technology. It is a service mesh that securely connects the microservices of an application. Think of Istio as an internal and external load balancer combined with a policy-driven firewall and comprehensive metrics. Developers and operators love Istio for its non-intrusive deployment pattern: almost any Kubernetes service can be integrated with Istio seamlessly, without explicit code or configuration changes.
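To make the “policy-driven” part concrete, here is a toy sketch (in Python, not Istio's actual YAML configuration) of the kind of header-based routing rule a service mesh sidecar proxy can apply; the rule structure and field names are hypothetical:

```python
def route(request_headers, rules, default_version="v1"):
    """Pick a service version by matching request headers against routing
    rules, mimicking header-based routing in a service mesh."""
    for rule in rules:
        if request_headers.get(rule["header"]) == rule["equals"]:
            return rule["route_to"]
    return default_version

# Example policy: send internal testers to the canary release.
rules = [{"header": "end-user", "equals": "tester", "route_to": "v2"}]

print(route({"end-user": "tester"}, rules))  # → v2
print(route({"end-user": "alice"}, rules))   # → v1
```

The point is that the routing policy lives outside the application: the services themselves are unchanged, which is exactly the non-intrusive property described above.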
Google recently announced a managed Istio service on GCP. Apart from Google, IBM, Pivotal, Red Hat, Tigera and Weaveworks are the active contributors and supporters of the project.
Istio presents an excellent opportunity for ISVs to deliver custom solutions and tools to enterprises. This project is bound to become one of the core building blocks of cloud-native platforms. I expect every managed Kubernetes service to have a hosted Istio service.
Prometheus
Prometheus is a cloud-native monitoring tool for workloads deployed on Kubernetes. It plugs a critical gap that exists in the cloud-native world through comprehensive metrics and rich dashboards.
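Those metrics reach Prometheus in a simple text format that instrumented services expose over HTTP (conventionally at `/metrics`). The sketch below renders a counter in that exposition format; the metric name and labels are illustrative:

```python
def render_metrics(name, help_text, samples):
    """Render counter samples as Prometheus text exposition-format lines.
    `samples` is a list of (label_pairs, value) tuples."""
    lines = [f"# HELP {name} {help_text}", f"# TYPE {name} counter"]
    for labels, value in samples:
        label_str = ",".join(f'{k}="{v}"' for k, v in labels)
        lines.append(f"{name}{{{label_str}}} {value}")
    return "\n".join(lines)

out = render_metrics(
    "http_requests_total",
    "Total HTTP requests served.",
    [((("method", "GET"), ("code", "200")), 1027)],
)
print(out)
```

Prometheus periodically scrapes this endpoint, stores the samples as time series, and makes them queryable for dashboards and alerts.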
Helm
If Kubernetes is the new OS, Helm is the application installer. Designed on the lines of Debian packages and Red Hat Linux RPMs, Helm brings the ease and power of deploying cloud-native workloads with a single command.
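As a rough analogy (Helm actually uses Go templates, not Python), the core idea is that a chart is a set of manifest templates rendered against user-supplied values before install, so one command can configure and deploy a whole workload:

```python
from string import Template

# A manifest template, standing in for a file in a Helm chart.
manifest_template = Template(
    "apiVersion: apps/v1\n"
    "kind: Deployment\n"
    "metadata:\n"
    "  name: $name\n"
    "spec:\n"
    "  replicas: $replicas\n"
)

# The values.yaml equivalent: one place to configure the whole release.
values = {"name": "web", "replicas": 3}
print(manifest_template.substitute(values))
```

Swapping the values dictionary re-renders the same chart for a different environment, which is what makes Helm releases repeatable.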
Spinnaker
One of the promises of cloud-native technology is the rapid delivery of software. Spinnaker, an open source project initially built at Netflix, delivers on that promise. It is a release management tool that adds velocity to deploying cloud-native applications.
Kubeless
Event-driven computing is becoming an integral part of modern application architecture. Functions as a Service (FaaS) is one of the delivery models of serverless computing which complements containers through event-based invocation. Modern applications will have services packaged as containers and functions running within the same environment.
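A minimal sketch of the FaaS model can make event-based invocation concrete: functions are registered against event topics and run only when a matching event arrives. The topic name and registry API below are hypothetical, not Kubeless's actual interface:

```python
handlers = {}

def on_event(topic):
    """Decorator registering a function as the handler for an event topic."""
    def register(fn):
        handlers[topic] = fn
        return fn
    return register

@on_event("order.created")
def send_confirmation(event):
    # The function runs only when an "order.created" event is dispatched.
    return f"confirmation sent to {event['customer']}"

def dispatch(topic, event):
    """Invoke the handler bound to a topic, as a FaaS runtime would."""
    return handlers[topic](event)

print(dispatch("order.created", {"customer": "alice"}))
# → confirmation sent to alice
```

In a real FaaS platform the dispatcher is the runtime itself, and each function can be scaled, billed, and deployed independently of the containers running alongside it.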
Earlier this week, Apple (NASDAQ: AAPL) acquired Akonia Holographics, a maker of augmented reality (AR) lenses and glasses, sparking plenty of speculation that Apple is getting serious about AR.
Augmented reality overlays digital information over the real world and differs from virtual reality (VR), where the whole environment is simulated. Akonia describes its AR product as “thin, transparent smart glass lenses that display vibrant, full-color, wide field-of-view images.”
“Digital maps have become essential tools of our everyday lives, yet despite their ubiquity, they are still in their infancy. From urban mobility to indoor positioning, from LIDAR to Augmented Reality, advances in technology and new kinds of data are powering innovations in all areas of digital mapping. If you love maps and are passionate about what is possible, you will be in great company.”
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, and best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again,” they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At that point we can’t be certain how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and resistant to real-time monitoring as they can be – are in principle predictable and interrogable, these are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call them “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable – and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action was thought foreseeable.
One proposed answer is model-based programming, in which machines do most of the coding work and are able to test as they go.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Artificial intelligences and robots are becoming ever more commonplace in our lives. What do we expect from intelligent machines? How do their presence in our everyday lives, and our interaction with them, change our self-understanding and our dealings with other people? Must we recognize robots as a kind of human counterpart? And what freedoms do we want to grant the machines? It is high time to settle the ethical and legal questions.

In 1954, George Devol developed Unimate, the first industrial robot [1]. Particularly in the 1970s, many manufacturing industries saw their work robotized (for example, the automotive and printing industries).

It is worth recalling the definition of an industrial robot in ISO 8373 (2012): “A robot is a freely reprogrammable, multifunctional manipulator with at least three independent axes, designed to move materials, parts, tools, or special devices along programmed, variable paths in order to perform a wide variety of tasks.”

Ethical Reflections on Robotics and Artificial Intelligence

If one tries to get an overview of the various ethical problems associated with the rise of ‘intelligent’ robots that are becoming ever more powerful in every respect (precision, speed, strength, combinatorics, and networking), it is helpful to distinguish these problems according to whether they concern

1. the preconditions of ethics,
2. the prevailing self-understanding of human subjects (anthropology), or
3. normative questions in the sense of: “What should we do?”

The following reflections give a brief outline of which questions we should address in each case, how the different sets of questions are connected, and what we can orient our answers by.

The task of ethics is to examine such moral opinions with regard to their justification and validity, and thus to arrive at a sharpened ethical judgment that can ideally be answered for before the community of moral subjects and whose implementation enables “a good life with and for others, in just institutions” [8]. That is a first, vague indication of direction.

In the end, normative questions can only be worked through concretely, on the basis of a particular situation. Accordingly, ethics offers no blanket judgments here, such as “robots are good/bad” or “artificial intelligence serves the good life/is detrimental to the good life.”
Thirty students registered for Arizona State University Online’s general biology course are using ASU-supplied virtual reality (VR) headsets for a variety of required lab exercises.
The VR equipment, which costs ASU $399 per student, allows learners to complete lab assignments in virtual space, using goggles and a controller to maneuver around a simulated lab. Content for the online course was developed by ASU biology professors and evaluated this summer. Students also can use their own VR headsets or access the content on their laptops, as 370 other students are doing.
A university official told Campus Technology the initiative will help online students have the experiences provided in brick-and-mortar labs as well as new ones that were impossible previously. The effort also will ease a problem on campus with limited lab space.
Similar labs are planned for the University of Texas at San Antonio and McMaster University in Canada, according to a blog post from Google, which partnered with the ed-tech company Labster to create the virtual labs. Virtual labs are also available at eight community colleges in California, offering instruction in IT and cybersecurity skills.
About half of colleges have space dedicated to VR, with adoption expected to increase as technology costs go down, according to a recent survey by nonprofit consortium Internet2. The survey found that 18% of institutions have “fully deployed” VR and are increasingly making it available to online students, while half are testing or have not yet deployed the technology.
Colleges are using VR for a variety of purposes, from classroom instruction to admissions recruiting to career training.
In addition, because the use of VR is growing in K–12 education, students will expect to use it in college.
Some schools invest in technology but never find the right way to teach with it. Here, a school library specialist shares how she turned teachers and students from skeptics into evangelists.
Far too often, cybersecurity awareness-raising training fails to account for how people learn and proven ways to change behaviors. The cybersecurity community too easily falls into the trap of thinking that “humans are the weakest link.” In this talk, Dr. Jessica Barker will argue that, if humans are the weakest link, then they are our weakest link as an industry. With reference to sociology, psychology, and behavioral economics, as well as lessons from her professional experience, Jessica will discuss why a better understanding of human nature needs to be a greater priority for the cybersecurity community.
Outcomes:
* Explore how we can apply knowledge from other disciplines to improve cybersecurity awareness-raising training and communications
* Understand where the cybersecurity industry can improve with regards to awareness, behavior, and culture
* Develop ideas to improve how you communicate cybersecurity messages and conduct awareness-raising training
Here is a collection of stories regarding the recent events in Chemnitz, Germany.
Articles from different outlets make it possible to study not only the events and the immigration-versus-nativism controversy, but also the phenomenon of “fake news” in the post-truth era.