70 percent of teens now say they use social media more than once a day, compared to 34 percent of teens in 2012.
Snapchat is now the most popular social media platform among teens, with 41 percent saying it’s the one they use most frequently.
35 percent of teens now say texting is their preferred mode of communication with friends, more than the 32 percent who prefer in-person communication. In 2012, 49 percent of teens preferred in-person communication.
One-fourth of teens say using social media makes them feel less lonely, compared to 3 percent who say it makes them feel more lonely.
Nearly three-fourths of teens believe tech companies manipulate them to get them to spend more time on their devices and platforms.
Back in 2012, Facebook dominated the landscape, and social media was something for teens to periodically check in on.
In 2018, though, “social media” is no longer a monolith. Teens now communicate, express themselves, share experiences and ideas, rant, gossip, flirt, plan, and stay on top of current events using a mix of platforms that compete ferociously for their attention.
Sixty-three percent of teens say they use Snapchat, and 41 percent say it’s the platform they use most frequently.
Instagram, meanwhile, is used by 61 percent of teens.
And Facebook’s decline among teens has been “precipitous,” according to the new report. Just 15 percent of teens now say Facebook is their main social media site, down from 68 percent six years ago.
James Dixon, the CTO of Pentaho, is credited with coining the term “data lake.” He uses the following analogy:
“If you think of a datamart as a store of bottled water – cleansed and packaged and structured for easy consumption – the data lake is a large body of water in a more natural state. The contents of the data lake stream in from a source to fill the lake, and various users of the lake can come to examine, dive in, or take samples.”
A data lake holds data in an unstructured way, with no hierarchy or organization among the individual pieces of data. It holds data in its rawest form: not processed or analyzed. Additionally, a data lake accepts and retains all data from all data sources, supports all data types, and applies schemas (the way the data is structured in a database) only when the data is ready to be used.
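For illustration, here is a minimal sketch of that "schema-on-read" idea in Python (the records, field names, and filtering logic are hypothetical, not from the source): raw events land in the lake exactly as they arrive, and structure is imposed only at query time.

```python
import json

# Raw events are dropped into the lake exactly as they arrive
# (hypothetical records; no schema is enforced at write time).
raw_events = [
    '{"user": "a1", "action": "click", "ts": "2018-06-01T10:00:00"}',
    '{"user": "b2", "action": "purchase", "amount": 19.99}',
    '{"device": "mobile", "action": "view"}',   # different shape, still accepted
]

def read_with_schema(lines, fields):
    """Schema-on-read: impose the structure we need only at query time."""
    for line in lines:
        record = json.loads(line)
        # Keep only the fields this analysis cares about; missing ones become None.
        yield {f: record.get(f) for f in fields}

# Two different "schemas" over the same raw data, applied at read time.
clicks = [r for r in read_with_schema(raw_events, ["user", "action", "ts"])
          if r["action"] == "click"]
purchases = [r for r in read_with_schema(raw_events, ["user", "amount"])
             if r["amount"] is not None]

print(clicks)
print(purchases)
```

A data warehouse, by contrast, would enforce a single table schema at load time and transform or reject records that do not fit it.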
What is a data warehouse?
A data warehouse stores data in an organized manner with everything archived and ordered in a defined way. When a data warehouse is developed, a significant amount of effort occurs during the initial stages to analyze data sources and understand business processes.
Data
Data lakes retain all data: structured, semi-structured, and unstructured/raw. It’s possible that some of the data in a data lake will never be used. A data warehouse, by contrast, includes only data that has been processed (structured), and only the data that is necessary for reporting or to answer specific business questions.
Agility
Since a data lake lacks structure, it’s relatively easy to make changes to models and queries.
Users
Data scientists are typically the ones who access the data in data lakes because they have the skill set to do deep analysis.
Security
Since data warehouses are more mature than data lakes, the security for data warehouses is also more mature.
It will be eons before AI thinks with a limbic brain, let alone has consciousness
AI programmes themselves generate additional computer programming code to fine-tune their algorithms—without the need for an army of computer programmers. In AI speak, this is now often referred to as “machine learning”.
An AI programme “catastrophically forgets” what it learned from its first set of data and has to be retrained from scratch with new data. The website futurism.com says a completely new set of algorithms would have to be written for a programme that has mastered facial recognition if it is now also expected to recognize emotions. Data on emotions would have to be manually relabelled and then fed into this completely different algorithm for the altered programme to be of any use. The original facial recognition programme would have “catastrophically forgotten” what it learnt about facial recognition as it took on new code for recognizing emotions. According to the website, this is because computer programmes cannot understand the underlying logic they have been coded with.
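A minimal sketch of the effect, using Python with NumPy and synthetic data (the tasks, numbers, and names here are hypothetical, not from the article): one linear classifier is trained on task A, then the same weights are trained on task B with a different decision boundary, and accuracy on task A falls away.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(w_true):
    """Synthetic binary-classification task whose labels follow w_true."""
    X = rng.normal(size=(500, 2))
    y = (X @ w_true > 0).astype(float)
    return X, y

def train(w, X, y, epochs=2000, lr=0.5):
    """Plain logistic-regression gradient descent on a shared weight vector."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0) == y).mean())

# Task A and task B have different decision boundaries but share one model.
Xa, ya = make_task(np.array([1.0, 1.0]))
Xb, yb = make_task(np.array([-1.0, 1.0]))

w = train(np.zeros(2), Xa, ya)
print("accuracy on A after training on A:", accuracy(w, Xa, ya))  # close to 1.0

w = train(w, Xb, yb)  # keep training the SAME weights on task B
print("accuracy on A after training on B:", accuracy(w, Xa, ya))  # falls toward chance
print("accuracy on B after training on B:", accuracy(w, Xb, yb))  # close to 1.0
```

Because both tasks share the same weight vector, fitting task B overwrites exactly the parameters that encoded task A.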
Irina Higgins, a senior researcher at Google DeepMind, has recently announced that she and her team have begun to crack the code on “catastrophic forgetting”.
As far as I am concerned, this limbic thinking is “catastrophic thinking,” which is the only true antipode to AI’s “catastrophic forgetting.” It will be eons before AI thinks with a limbic brain, let alone has consciousness.
++++++++++++++++++
Stephen Hawking warns artificial intelligence could end mankind
By Rory Cellan-Jones, Technology correspondent, 2 December 2014
The Fourth Industrial Revolution, or Industry 4.0, is marked by the adoption of cyber-physical systems, the Internet of Things, and the Internet of Systems.
While in some ways it is an extension of the computerization of the Third Industrial Revolution (the Digital Revolution), the velocity, scope, and systems impact of its changes make the fourth revolution a distinct era. The Fourth Industrial Revolution is disrupting almost every industry in every country and creating massive change in a non-linear way at unprecedented speed.
In his book, The Fourth Industrial Revolution, Professor Klaus Schwab, founder and executive chairman of the World Economic Forum, describes the enormous potential of the technologies of the Fourth Industrial Revolution as well as the possible risks.
Our workplaces and organizations are becoming “smarter” and more efficient as machines and humans start to work together, and we use connected devices to enhance our supply chains and warehouses. The technologies of the Fourth Industrial Revolution might even help us better prepare for natural disasters and potentially undo some of the damage wrought by previous industrial revolutions.
There might be increased social tensions as a result of the socioeconomic changes brought by the Fourth Industrial Revolution, which could create a job market segregated into “low-skill/low-pay” and “high-skill/high-pay” segments.
We need to develop leaders with the skills to manage organizations through these dramatic shifts.
Blockchain: Recommendations for the Information Profession
Monday, September 24, 2018 12:00 pm
Central Daylight Time (Chicago, GMT-05:00)
Blockchain technology is being discussed widely, but without clear directions for library applications. The Blockchain National Forum, funded by IMLS and held at San Jose State University’s iSchool in Summer 2018, brought together notable experts in the information professions, business, government, and urban planning to discuss the issues and develop recommendations on the future uses of blockchain technology within the information professions. In this free webinar, Drs. Sandy Hirsh and Sue Alman, co-PIs of the project, will present the recommendations made throughout the year in the Blockchain blog, the Library 2.0 Conference, Blockchain Applied: Impact on the Information Profession, and the National Forum.
Questions to ask: What kinds of data and records must be stored and preserved exactly the way they were created (provenance records, transcripts)? What kinds of information are at risk of being altered or compromised by changing circumstances (personally identifiable data)?
51% rule: a blockchain can be hacked if attacked by a group of miners controlling more than 50% of the network’s mining power.
Standards issues: blockchain systems are open-ledger technology for managing metadata, and baseline standards will impact future options. Can blockchain make the management of metadata worthwhile, or is a more cautious approach warranted?
Potential use cases: archives and special collections, where provenance and authenticity are essential for authoritative tracking; digital preservation, to track distributed digital assets; blockchain-based currencies for international financial transactions (to avoid exchange-rate issues in ILL and publishing); potential to improve ownership and first-sale record management; credentialing of personal and academic documents (MIT already keeps students’ transcripts and diplomas on a blockchain for electronic personal data management and credentialing).
Public libraries: could house documents of temporarily displaced persons or immigrants, but power usage and storage usage became problems.
A city south of Denver, CO, is being built right now and will be based on these principles.
Benefits for recordkeeping: LOCKSS (Lots of Copies Keep Stuff Safe), from Stanford University; chain of custody (SAA Glossary); trust and immutability (blockchain) vs. confidentiality and performance (database).
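A minimal sketch (Python, standard hashlib only; the record contents are hypothetical) of why a blockchain-style hash chain gives the trust, immutability, and chain-of-custody properties noted above: each record carries the hash of the previous record, so tampering with any earlier entry breaks verification of everything after it.

```python
import hashlib
import json

def block_hash(block):
    """Hash of a block's canonical JSON serialization."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_record(chain, record):
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"record": record, "prev_hash": prev})
    return chain

def verify(chain):
    """True only if every block still points at the correct hash of its predecessor."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

# Hypothetical provenance records for an archival item.
chain = []
add_record(chain, {"item": "ms-042", "event": "acquired", "by": "Special Collections"})
add_record(chain, {"item": "ms-042", "event": "digitized", "by": "Digital Lab"})
add_record(chain, {"item": "ms-042", "event": "exhibited", "by": "Main Gallery"})

print(verify(chain))                    # True: chain of custody intact

chain[0]["record"]["by"] = "Unknown"    # tamper with the earliest record
print(verify(chain))                    # False: every later link is now invalid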
Librarians’ role: need to understand blockchain (how does it work and what can it do for us?); provide blockchain education for users; use blockchain in various applications.
Recommendations from the National Forum:
ASIS&T presentation in Vancouver, Nov. 2018; MOOC on Blockchain Basics; Library Futures Series, Book 3, by Alman & Hirsh.
Inclusion of 3D Artifacts into a Digital Library: Exploring Technologies and Best Practice Techniques
The IUPUI University Library Center for Digital Scholarship has been digitizing and providing access to community and cultural heritage collections since 2006. Varying formats include: audio, video, photographs, slides, negatives, and text (bound, loose). The library provides access to these collections using CONTENTdm. As 3D technologies become increasingly popular in libraries and museums, IUPUI University Library is exploring the workflows and processes as they relate to 3D artifacts. This presentation will focus on incorporating 3D technologies into an already established digital library of community and cultural heritage collections.