In 2018 we witnessed a clash of titans as governments and tech companies collided over privacy issues around collecting, culling and using personal data. From GDPR to the Facebook scandals, many tech CEOs found themselves defending big data, its uses, and how they were safeguarding the public.
1. Companies will face increased pressure about the data AI-embedded services use.
2. Public concern will lead to AI regulations. But we must understand this tech too.
In 2018, the National Science Foundation invested $100 million in AI research, with special support in 2019 for developing principles for safe, robust and trustworthy AI; addressing issues of bias, fairness and transparency of algorithmic intelligence; developing deeper understanding of human-AI interaction and user education; and developing insights about the influences of AI on people and society.
This investment was dwarfed by DARPA—an agency of the Department of Defense—and its multi-year investment of more than $2 billion in new and existing programs under the “AI Next” campaign. A key area of the campaign is pioneering the next generation of AI algorithms and applications, such as “explainability” and common-sense reasoning.
Federally funded initiatives, as well as corporate efforts (such as Google’s “What-If” tool), will lead to the rise of explainable and interpretable AI, whereby the AI actually explains the logic behind its decision-making to humans. The next step from there would be for AI regulators and policymakers themselves to learn how these technologies actually work. That step is currently overlooked, and it is one that Richard Danzig, former Secretary of the U.S. Navy, advises us to consider as we create “humans-in-the-loop” systems, which require people to sign off on important AI decisions.
3. More companies will make AI a strategic initiative in corporate social responsibility.
DataSense, a data management platform developed by BrightBytes, is a set of professional services that works with K-12 districts to collect data from disparate data systems, translate it into unified formats and aggregate it into a single dashboard for reporting purposes.
DataSense traces its origins to Authentica Solutions, an education data management company founded in 2013.
A month later, BrightBytes acquired Authentica. The deal was hailed as a “major milestone in the industry” and appeared to be a complement to BrightBytes’ flagship offering, Clarity, a suite of data analytics tools that help educators understand the impact of technology spending and usage on student outcomes.
Of the “Big Five” technology giants, Microsoft has lately been the most acquisitive in the learning and training space. In recent years it purchased several consumer brand names whose services reach into education, including LinkedIn (which owns Lynda.com, now part of the LinkedIn Learning suite), Minecraft (which has been adapted for classroom use) and GitHub (which released an education bundle).
Last year, Microsoft also acquired a couple of smaller education tools, including Flipgrid, a video-discussion platform popular among teachers, and Chalkup, whose services have been rolled into Microsoft Teams, its competitor to Slack.
Artificial intelligence (AI) and machine learning are no longer fantastical prospects seen only in science fiction. Products like Amazon Echo and Siri have brought AI into many homes.
Kelly Calhoun Williams, an education analyst for the technology research firm Gartner Inc., cautions there is a clear gap between the promise of AI and the reality of AI.
Artificial intelligence is a broad term used to describe any technology that emulates human intelligence, such as by understanding complex information, drawing its own conclusions and engaging in natural dialog with people.
Machine learning is a subset of AI in which the software can learn or adapt like a human can. Essentially, it analyzes huge amounts of data and looks for patterns in order to classify information or make predictions. The addition of a feedback loop allows the software to “learn” as it goes by modifying its approach based on whether the conclusions it draws are right or wrong.
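The feedback loop described above can be sketched in a few lines of Python. This is a toy perceptron-style learner (my illustration, not any vendor’s actual algorithm): it makes a prediction, checks whether it was right, and nudges its weights only when it was wrong.

```python
# Toy sketch of the "feedback loop" idea: a perceptron-style classifier
# that adjusts its weights whenever its prediction is wrong.
# Purely illustrative; real systems use far richer models and data.

def train(samples, labels, epochs=20, lr=0.1):
    # samples: list of feature vectors; labels: 0 or 1
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            error = y - pred          # feedback: right (0) or wrong (+1/-1)
            if error:                 # learn only from mistakes
                w = [wi + lr * error * xi for wi, xi in zip(w, x)]
                b += lr * error
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy pattern: label is 1 when the second feature dominates.
data = [[2.0, 0.0], [1.5, 0.2], [0.1, 1.8], [0.3, 2.2]]
labels = [0, 0, 1, 1]
w, b = train(data, labels)
```

Each pass over the data is one turn of the loop: predict, compare with the known answer, and modify the approach only when the conclusion was wrong.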
AI can process far more information than a human can, and it can perform tasks much faster and with more accuracy. Some curriculum software developers have begun harnessing these capabilities to create programs that can adapt to each student’s unique circumstances.
For instance, a Seattle-based nonprofit company called Enlearn has developed an adaptive learning platform that uses machine learning technology to create highly individualized learning paths that can accelerate learning for every student. (My note: about learning and technology, Alfie Kohn in https://blog.stcloudstate.edu/ims/2018/09/11/educational-technology/)
GoGuardian, a Los Angeles company, uses machine learning technology to improve the accuracy of its cloud-based Internet filtering and monitoring software for Chromebooks. (My note: that smells of Big Brother.) Instead of blocking students’ access to questionable material based on a website’s address or domain name, GoGuardian’s software uses AI to analyze the actual content of a page in real time to determine whether it’s appropriate for students. (My note: privacy)
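To make the contrast with domain blocklists concrete, here is a crude, hypothetical sketch of content-based filtering. GoGuardian’s actual classifier is proprietary; the word lists and weights below are invented stand-ins for a trained model.

```python
# Hypothetical sketch of content-based (rather than URL-based) filtering.
# These word weights are invented stand-ins for a trained model.

FLAG_WEIGHTS = {"gambling": 2.0, "casino": 2.0, "violence": 1.5}
SAFE_WEIGHTS = {"homework": -1.0, "algebra": -1.0, "history": -0.5}

def page_score(text):
    # Sum the weights of every recognized word on the page.
    weights = {**FLAG_WEIGHTS, **SAFE_WEIGHTS}
    return sum(weights.get(word, 0.0) for word in text.lower().split())

def is_appropriate(text, threshold=1.0):
    # Block only when flagged content outweighs the threshold,
    # regardless of which domain served the page.
    return page_score(text) < threshold

print(is_appropriate("algebra homework help for grade 9"))  # benign page text
print(is_appropriate("online casino gambling bonus"))       # flagged page text
```

The point of the sketch is the design choice: the decision depends on the words actually on the page, so the same URL can be allowed or blocked as its content changes.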
Such tools raise serious privacy concerns, requiring an increased focus not only on data quality and accuracy, but also on the responsible stewardship of this information. “School leaders need to get ready for AI from a policy standpoint,” Calhoun Williams said. For instance: What steps will administrators take to secure student data and ensure the privacy of this information?
Despite China’s many technological advances, in this new cyberspace race the West had the lead.
Xi knew he had to act. Within twelve months he revealed his plan to make China a science and technology superpower. By 2030 the country would lead the world in AI, with a sector worth $150 billion. How? By teaching a generation of young Chinese to be the best computer scientists in the world.
Today, the US tech sector has its pick of the finest minds from across the world, importing top talent from other countries – including from China. Over half of Bay Area workers are highly skilled immigrants. But with the growth of economies worldwide and a presidential administration hell-bent on restricting visas, it’s unclear whether that approach can last.
In the UK the situation is even worse. Here, the government predicts there’ll be a shortfall of three million employees for high-skilled jobs by 2022 – even before you factor in the immigration crunch of Brexit. By contrast, China is plotting a homegrown strategy of local and national talent development programs. It may prove a masterstroke.
In 2013 Shanghai’s teenagers gained global renown when they topped the charts in the PISA tests, administered every three years by the OECD to see which country’s kids are the smartest in the world. At age 15, Shanghai students were on average three full years ahead of their counterparts in the UK or US in maths and one-and-a-half years ahead in science.
Teachers, too, were expected to be learners. Unlike in the UK, where, when I began to teach a decade ago, you might be working on full-stops with eleven-year-olds then taking eighteen-year-olds through the finer points of poetry, teachers in Shanghai specialised not only in a subject area, but also an age-group.
Shanghai’s success owed a lot to Confucian tradition, but it fitted precisely the best contemporary understanding of how expertise is developed. In his book Why Don’t Students Like School?, cognitive scientist Dan Willingham explains that complex mental skills like creativity and critical thinking depend on our first having mastered the simple stuff. Memorisation and repetition of the basics lay down the neural architecture that creates automaticity of thought, ultimately freeing up space in our working memory to think big.
Seung-bin Lee, a seventeen-year-old high school graduate, told me of studying fourteen hours a day, seven days a week, for the three years leading up to the Suneung, the fearsome SAT-style exam taken by all Korean school leavers on a single Thursday each November, for which all flights are grounded so as not to break students’ concentration during the 45 minutes of the English listening paper.
Korean childhoods were being lost to a relentless regime of studying, crushed in a top-down system that saw students as cyphers rather than kids.
A decade ago, we consoled ourselves that although kids in China and Korea worked harder and did better on tests than ours, it didn’t matter. They were compliant, unthinking drones, lacking the creativity, critical thinking or entrepreneurialism needed to succeed in the world. No longer. Though there are still issues with Chinese education – urban centres like Shanghai and Hong Kong are positive outliers – the country knows something that we once did: education is the one investment on which a return is guaranteed. China is on course to becoming the first education superpower.
Troublingly, where education in the UK and US has been defined by creativity and independent thinking – Shanghai teachers told me of visits to our schools to learn about these qualities – our direction of travel is now away from those strengths and towards exams and standardisation, with school-readiness tests in the pipeline and UK schools minister Nick Gibb suggesting kids can beat exam stress by sitting more of them. Centres of excellence remain, but increasingly, it feels, we’re putting our children at risk of losing out to the robots, while China is building on its strong foundations to ask how its young people can be high-tech pioneers. They’re thinking big – we’re thinking of test scores.
He said that soon “digital information processing” would be included as a core subject on China’s national graduation exam – the Gaokao – and pictured classrooms in which students would learn in cross-disciplinary fashion, designing mobile phones, for example, in order to develop design, engineering and computing skills. Focusing on teaching kids to code was short-sighted, he explained. “We still regard it as a language between human and computer.” (My note: they are practically implementing Finland’s attempt to rebuild curricula)
“If your plan is for one year,” went an old Chinese saying, “plant rice. If your plan is for ten years, plant trees. If your plan is for 100 years, educate children.” Two and a half thousand years later, chancellor Guan Zhong might update his proverb, swapping rice for bitcoin and trees for artificial intelligence, but I’m sure he’d stand by his final point.
“The challenge is to make data discoverable, usable, assessable, intelligible, and interpretable, and do so for extended periods of time…To restate the premise of this book, the value of data lies in their use. Unless stakeholders can agree on what to keep and why, and invest in the invisible work necessary to sustain knowledge infrastructures, big data and little data alike will become no data.”
Starting from the premise that data are not natural objects with their own essence, Borgman instead explores the different values assigned to them, as well as their many variations according to place, time, and the context in which they are collected. It is specifically through six “provocations” that she offers a deep engagement with different aspects of the knowledge industry. These include the reproducibility, sharing, and reuse of data; the transmission and publication of knowledge; the stability of scholarly knowledge, despite its increasing proliferation of forms and modes; the very porosity of the borders between different areas of knowledge; the costs, benefits, risks, and responsibilities related to knowledge infrastructure; and finally, investment in the sustainable acquisition and exploitation of data for scientific research.
Beyond the six provocations, there is a larger question concerning the legitimacy, continuity, and durability of all scientific research—hence the urgent need for further reflection, initiated eloquently by Borgman, on the fact that “despite the media hyperbole, having the right data is usually better than having more data.”
o Data management (Pages xviii-xix)
o Data definition (4-5 and 18-29)
p. 5 Big data and little data are only awkwardly analogous to big science and little science. Modern science, or big science in Derek J. de Solla Price’s sense (https://en.wikipedia.org/wiki/Big_Science), is characterized by international, collaborative efforts and by the invisible colleges of researchers who know each other and exchange information on a formal and informal basis. Little science is the three hundred years of independent, smaller-scale work to develop theory and method for understanding research problems. Little science is typified by heterogeneous methods, heterogeneous data, and by local control and analysis.
p. 8 The Long Tail
a popular way of characterizing the availability and use of data in research areas or in economic sectors. https://en.wikipedia.org/wiki/Long_tail
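A quick back-of-the-envelope illustration of the long-tail shape, with hypothetical numbers rather than Borgman’s data: if usage of 1,000 datasets follows a Zipf-like power law, a handful of “head” datasets absorb nearly half of all use, yet the many small datasets in the tail together still account for a slim majority.

```python
# Hypothetical long-tail illustration: usage of 1,000 datasets
# following a Zipf-like power law (usage proportional to 1/rank).

ranks = range(1, 1001)
usage = [1000.0 / r for r in ranks]   # rank 1 is the most-used dataset
total = sum(usage)

head = sum(usage[:20])                # the 20 most-used datasets
tail = sum(usage[20:])                # the remaining 980

head_share = head / total             # roughly 0.48 under this model
tail_share = tail / total             # roughly 0.52
```

The numbers are invented, but the shape is the point: no blunt cutoff separates “important” from “unimportant” data, which is why curation policies for the tail matter.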
o Provocations (13-15)
o Digital data collections (21-26)
o Knowledge infrastructures (32-35)
o Open access to research (39-42)
o Open technologies (45-47)
o Metadata (65-70 and 79-80)
o Common resources in astronomy (71-76)
o Ethics (77-79)
o Research Methods and data practices, and, Sensor-networked science and technology (84-85 and 106-113)
o Knowledge infrastructures (94-100)
o COMPLETE survey (102-106)
o Internet surveys (128-143)
o Twitter (130-133, 138-141, and 157-158)
o Pisa Clark/CLAROS project (179-185)
o Collecting Data, Analyzing Data, and Publishing Findings (181-184)
o Buddhist studies (186-200)
o Data citation (241-268)
o Negotiating authorship credit (253-256)
o Personal names (258-261)
o Citation metrics (266-269)
o Access to data (279-283)
Over the last four years, 49 states and the District of Columbia have introduced 410 bills related to student data privacy, and 36 states have passed 85 new education data privacy laws. Also, since 2014, 19 states have passed laws that in some way address the work done by researchers.
Researchers need to get better at communicating about their projects, especially with non-researchers.
One approach to follow in gaining trust “from parents, advocates and teachers” uses the acronym CUPS:
Collection: What data is collected by whom and from whom;
Use: How the data will be used and what the purpose of the research is;
Protection: What forms of data security protection are in place and how access will be limited; and
Sharing: How and with whom the results of the data work will be shared.
Second, researchers must pin down how to share data without making it vulnerable to theft.
Third, researchers should build partnerships of trust and “mutual interest” pertaining to their work with data. Those alliances may involve education technology developers, education agencies both local and state, and data privacy stakeholders.
The growing use of data mining software in online education has great potential to support student success by identifying and reaching out to struggling students and streamlining the path to graduation. Realizing that potential can be a challenge for institutions that use a variety of technology systems that are not integrated with each other. As institutions implement learning management systems, degree planning technologies, early alert systems, and tutor scheduling that promote increased interactions among various stakeholders, there is a need for centralized aggregation of these data to provide students with holistic support that improves learning outcomes. Join us to hear from an institutional exemplar who is building solutions that integrate student data across platforms. Then work with peers to address challenges and develop solutions of your own.