PROGRAM FEES: $2,300 | STARTS ON: November 28, 2018 | DURATION: 2 months, online, 6-8 hours per week
A Digital Revolution Is Underway.
In a rapidly expanding digital marketplace, legacy companies without a clear digital transformation strategy are being left behind. How can we stay on top of rapid—and sometimes radical—change? How can we position our organizations to take advantage of new technologies? How can we track and combat the security threats facing all of us as we are swept forward into the future?
Who is this Program for?
Professionals in traditional companies poised to implement strategic change, as well as entrepreneurs seeking to harness the opportunities afforded by new technologies, will learn the fundamentals of digital transformation and secure the necessary tools to navigate their enterprise to a digital platform.
Participants come from a wide range of industries and include C-suite executives, business consultants, corporate attorneys, risk officers, marketing and R&D professionals, and innovation enablers.
Your Learning Journey
This online program takes you through the fundamentals of digital technologies transforming our world today. Led by MIT faculty at the forefront of data science, participants will learn the history and application of transformative technologies such as blockchain, artificial intelligence, cloud computing, IoT, and cybersecurity as well as the implications of employing—or ignoring—digitalization.
Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes.
Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions there is a clear gap between the promise of AI and the reality of AI.
Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.
Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
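The feedback loop described above can be sketched as a minimal online learner. This is an illustrative perceptron-style classifier, not any specific product's algorithm; all names and data are invented for the example:

```python
# Minimal sketch of a machine-learning feedback loop: the model predicts,
# compares its prediction with the true label, and adjusts its weights
# only when it is wrong (a simple perceptron-style update).

def predict(weights, features):
    """Classify as 1 if the weighted sum of features is positive, else 0."""
    score = sum(w * x for w, x in zip(weights, features))
    return 1 if score > 0 else 0

def train(samples, epochs=10, lr=0.1):
    """samples: list of (features, label) pairs with labels 0 or 1."""
    weights = [0.0] * len(samples[0][0])
    for _ in range(epochs):
        for features, label in samples:
            error = label - predict(weights, features)  # the feedback signal
            if error != 0:  # adjust only when the prediction was wrong
                weights = [w + lr * error * x
                           for w, x in zip(weights, features)]
    return weights

# Toy data: label is 1 when the first feature outweighs the second.
data = [([2.0, 1.0], 1), ([1.0, 3.0], 0), ([3.0, 0.5], 1), ([0.5, 2.0], 0)]
w = train(data)
```

After a few passes the weights settle so that every training example is classified correctly, which is the "learning as it goes" behavior the paragraph describes.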
AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.
GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: that smells of Big Brother.) Instead of blocking students’ access to questionable material based on a website’s address or domain name, GoGuardian’s software uses AI to analyze the actual content of a page in real time to determine whether it’s appropriate for students. (My note: privacy.)
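The shift from domain blocking to content analysis can be illustrated with a vastly simplified sketch. A real system would use a trained model; here a weighted term list stands in for it, and the terms and threshold are entirely illustrative, not GoGuardian's actual method:

```python
# Simplified stand-in for real-time content analysis: score a page's
# actual text against weighted terms instead of blocking by domain name.
# The term weights and threshold below are invented for illustration.

FLAGGED_TERMS = {"gambling": 2.0, "casino": 2.0, "poker": 1.0}
THRESHOLD = 2.5

def is_appropriate(page_text):
    """Return True when the page's content score stays under the threshold."""
    words = page_text.lower().split()
    score = sum(FLAGGED_TERMS.get(word, 0.0) for word in words)
    return score < THRESHOLD

ok = is_appropriate("homework help for algebra")            # passes
blocked = not is_appropriate("online casino poker gambling") # flagged
```

The point of the example: the decision depends on what the page says, so the same domain can host both allowed and blocked pages.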
Such monitoring raises serious privacy concerns. It requires an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: What steps will administrators take to secure student data and ensure the privacy of this information?
James Dixon, the CTO of Pentaho, is credited with coining the term “data lake.” He uses the following analogy:
“If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.”
A data lake holds data in an unstructured way; there is no hierarchy or organization among the individual pieces of data. It holds data in its rawest form—it’s not processed or analyzed. Additionally, a data lake accepts and retains all data from all data sources and supports all data types; schemas (the way the data is structured in a database) are applied only when the data is ready to be used.
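This "schema-on-read" idea can be sketched in a few lines. The records and field names below are invented for illustration; the point is that the lake stores raw, heterogeneous records untouched, and a schema is imposed only when a consumer reads them:

```python
import json

# Sketch of schema-on-read: the lake keeps raw records as-is, in whatever
# shape they arrived; structure is applied only at read time.

data_lake = [                        # raw records, different shapes, untouched
    '{"user": "ana", "clicks": 3}',
    '{"user": "bo", "clicks": "7", "referrer": "ad"}',  # extra field, str count
    '{"user": "cy"}',                # missing field: stored anyway
]

def read_with_schema(raw_records):
    """Apply a schema (user: str, clicks: int) when the data is read."""
    rows = []
    for raw in raw_records:
        record = json.loads(raw)
        rows.append({
            "user": str(record.get("user", "")),
            "clicks": int(record.get("clicks", 0)),  # coerce "7" -> 7, default 0
        })
    return rows

rows = read_with_schema(data_lake)
```

Nothing was cleaned or rejected on the way in; all the shaping happened in `read_with_schema`.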
What is a data warehouse?
A data warehouse stores data in an organized manner with everything archived and ordered in a defined way. When a data warehouse is developed, a significant amount of effort occurs during the initial stages to analyze data sources and understand business processes.
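By contrast, a warehouse enforces its structure at load time ("schema-on-write"). The following sketch uses invented table and field names; it shows the up-front validation effort the paragraph describes:

```python
# Sketch of schema-on-write: the warehouse validates and structures data
# at load time, rejecting records that don't fit the predefined schema.

SCHEMA = {"user": str, "clicks": int}

warehouse = []  # only clean, conforming rows ever land here

def load(record):
    """Load a record into the warehouse, enforcing the schema up front."""
    if set(record) != set(SCHEMA):
        raise ValueError(f"unexpected or missing fields: {record}")
    for field, expected_type in SCHEMA.items():
        if not isinstance(record[field], expected_type):
            raise ValueError(f"{field} must be {expected_type.__name__}")
    warehouse.append(record)

load({"user": "ana", "clicks": 3})       # conforms: accepted
try:
    load({"user": "bo", "clicks": "7"})  # wrong type: rejected at load time
except ValueError:
    pass
```

The contrast with the data lake is that the nonconforming record never enters the warehouse at all, which is why warehouse data is ready for reporting but costs more effort up front.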
Data lakes retain all data: structured, semi-structured, and unstructured/raw. It’s possible that some of the data in a data lake will never be used. A data warehouse, by contrast, includes only data that has been processed (structured), and only the data that is necessary for reporting or to answer specific business questions.
Since a data lake lacks structure, it’s relatively easy to make changes to models and queries.
Data scientists are typically the ones who access the data in data lakes because they have the skill set to do deep analysis.
Since data warehouses are more mature than data lakes, the security for data warehouses is also more mature.
Between the “dumb” fixed algorithms and true AI lies the problematic halfway house we’ve already entered with scarcely a thought and almost no debate, much less agreement as to aims, ethics, safety, best practice. If the algorithms around us are not yet intelligent, meaning able to independently say “that calculation/course of action doesn’t look right: I’ll do it again”, they are nonetheless starting to learn from their environments. And once an algorithm is learning, we no longer know to any degree of certainty what its rules and parameters are. At which point we can’t be certain of how it will interact with other algorithms, the physical world, or us. Where the “dumb” fixed algorithms – complex, opaque and inured to real time monitoring as they can be – are in principle predictable and interrogable, these ones are not. After a time in the wild, we no longer know what they are: they have the potential to become erratic. We might be tempted to call these “frankenalgos” – though Mary Shelley couldn’t have made this up.
Twenty years ago, George Dyson anticipated much of what is happening today in his classic book Darwin Among the Machines. The problem, he tells me, is that we’re building systems that are beyond our intellectual means to control. We believe that if a system is deterministic (acting according to fixed rules, this being the definition of an algorithm) it is predictable, and that what is predictable can be controlled. Both assumptions turn out to be wrong.

“It’s proceeding on its own, in little bits and pieces,” he says. “What I was obsessed with 20 years ago that has completely taken over the world today are multicellular, metazoan digital organisms, the same way we see in biology, where you have all these pieces of code running on people’s iPhones, and collectively it acts like one multicellular organism.

“There’s this old law called Ashby’s law that says a control system has to be as complex as the system it’s controlling, and we’re running into that at full speed now, with this huge push to build self-driving cars where the software has to have a complete model of everything, and almost by definition we’re not going to understand it. Because any model that we understand is gonna do the thing like run into a fire truck ’cause we forgot to put in the fire truck.”
Walsh believes this makes it more, not less, important that the public learn about programming, because the more alienated we become from it, the more it seems like magic beyond our ability to affect. When shown the definition of “algorithm” given earlier in this piece, he found it incomplete, commenting: “I would suggest the problem is that algorithm now means any large, complex decision making software system and the larger environment in which it is embedded, which makes them even more unpredictable.” A chilling thought indeed. Accordingly, he believes ethics to be the new frontier in tech, foreseeing “a golden age for philosophy” – a view with which Eugene Spafford of Purdue University, a cybersecurity expert, concurs. Where there are choices to be made, that’s where ethics comes in.
Our existing system of tort law, which requires proof of intention or negligence, will need to be rethought. A dog is not held legally responsible for biting you; its owner might be, but only if the dog’s action is thought foreseeable.
As we wait for a technological answer to the problem of soaring algorithmic entanglement, there are precautions we can take. Paul Wilmott, a British expert in quantitative analysis and vocal critic of high frequency trading on the stock market, wryly suggests “learning to shoot, make jam and knit”
The venerable Association for Computing Machinery has updated its code of ethics along the lines of medicine’s Hippocratic oath, to instruct computing professionals to do no harm and consider the wider impacts of their work.
Privacy researcher Lukasz Olejnik disagreed, noting that the change carried large ramifications for the affected users. “Moving around one and a half billion users into other jurisdictions is not a simple copy-and-paste exercise,” he said.
Facebook To Offer Users Opt-Outs That Comply With New European Privacy Rules
As we reported earlier this week, a federal judge in California ruled that Facebook could be sued in a class-action lawsuit brought by users in Illinois who say the social media company improperly used facial recognition to upload photographs.
There are hackable security flaws in software. And then there are those that don’t even require hacking at all: just a knock on the door, and asking to be let in. Apple’s macOS High Sierra has the second kind.
Kaspersky Lab advised those who do not use anti-virus products to restrict execution of certain files (C:\Windows\infpub.dat, C:\Windows\cscc.dat) and shut down the Windows Management Instrumentation (WMI) service. (My note: letting the wolf into the shed with the sheep.)
The source of the attack remained undetermined, but earlier this month the head of Microsoft, Brad Smith, pinned the blame for it on North Korea, which allegedly used cyber tools or weapons that were stolen from the National Security Agency in the United States. The top executive, however, did not provide evidence to back his claims.
New ransomware attack hits Russia and spreads around globe