It is a name for a premise that, quietly, has come to regulate all we practise and believe: that competition is the only legitimate organising principle for human activity.
We now live in Hayek’s world, as we once lived in Keynes’s.
He begins by assuming that nearly all (if not all) human activity is a form of economic calculation, and so can be assimilated to the master concepts of wealth, value, exchange, cost – and especially price. Prices are a means of allocating scarce resources efficiently, according to need and utility, as governed by supply and demand. For the price system to function efficiently, markets must be free and competitive. Ever since Smith imagined the economy as an autonomous sphere, the possibility existed that the market might not just be one piece of society, but society as a whole. Within such a society, men and women need only follow their own self-interest and compete for scarce rewards. Through competition, “it becomes possible”, as the sociologist Will Davies has written, “to discern who and what is valuable”.
Hayek built into neoliberalism the assumption that the market provides all necessary protection against the one real political danger: totalitarianism.
To prevent this, the state need only keep the market free.
This last is what makes neoliberalism “neo”. It is a crucial modification of the older belief in a free market and a minimal state, known as “classical liberalism”. In classical liberalism, merchants simply asked the state to “leave us alone” – to laissez-nous faire. Neoliberalism recognised that the state must be active in the organisation of a market economy. The conditions allowing for a free market must be won politically, and the state must be re-engineered to support the free market on an ongoing basis.
Even his conservative colleagues at the University of Chicago – the global epicentre of libertarian dissent in the 1950s – regarded Hayek as a reactionary mouthpiece, a “stock rightwing man” with a “stock rightwing sponsor”, as one put it.
It was Milton Friedman who helped convert governments and politicians to the power of Hayek’s Big Idea. But first he broke with two centuries of precedent and declared that economics is “in principle independent of any particular ethical position or normative judgments” and is “an ‘objective’ science, in precisely the same sense as any of the physical sciences”.
The internet is personal preference magnified by algorithm; a pseudo-public space that echoes the voice already inside our head. Rather than a space of debate in which we make our way, as a society, toward consensus, now there is a mutual-affirmation apparatus banally referred to as a “marketplace of ideas”.
“A taste is almost defined as a preference about which you do not argue,” the philosopher and economist Albert O Hirschman once wrote. “A taste about which you argue, with others or yourself, ceases ipso facto being a taste – it turns into a value.”
Consider the introduction of Overcast, a podcast-playback app designed by the creator of the text-bookmarking app Instapaper. One of Overcast’s key selling points is a feature called Smart Speed. Smart Speed isn’t about simply playing audio content at 150 or 200 percent of the standard rate; it instead tries to remove, algorithmically, the extraneous things that can bulk up the play time of audio content: dead air, pauses between sentences, intros and outros, that kind of thing.
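A minimal sketch of what such silence-trimming might look like (this is not Overcast’s actual algorithm; the threshold and gap values are invented for illustration):

```python
# Toy silence-trimming in the spirit of Smart Speed: shorten any run of
# near-silent samples while leaving audible audio untouched.

def trim_silence(samples, threshold=0.02, max_gap=3):
    """Collapse runs of low-amplitude samples down to at most max_gap samples."""
    out, quiet_run = [], 0
    for s in samples:
        if abs(s) < threshold:
            quiet_run += 1
            if quiet_run <= max_gap:   # keep a short, natural-sounding pause
                out.append(s)
        else:
            quiet_run = 0
            out.append(s)
    return out

audio = [0.5, 0.4, 0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.3]
print(trim_silence(audio, max_gap=2))  # the 5-sample gap shrinks to 2 samples
```

A real implementation would work on frames of audio rather than individual samples, but the principle is the same: compress the quiet stretches, not the speech.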
A similar strategy was used in 2008, Dewes said, to deanonymise a set of ratings published by Netflix to help computer scientists improve its recommendation algorithm: by comparing “anonymous” ratings of films with public profiles on IMDB, researchers were able to unmask Netflix users – including one woman, a closeted lesbian, who went on to sue Netflix for the privacy violation.
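The linkage idea behind that study can be sketched in a few lines (all names and ratings below are invented): match an “anonymous” rating record against public profiles by counting overlapping ratings.

```python
# Toy sketch of a linkage attack like the 2008 Netflix study: an
# "anonymous" rating record is re-identified by finding the public
# profile whose ratings overlap with it. All data here is invented.

anonymous = {"user_17": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4)}}
public_imdb = {
    "alice": {("Movie A", 5), ("Movie B", 2), ("Movie C", 4)},
    "bob":   {("Movie A", 1), ("Movie D", 3)},
}

def reidentify(anon_ratings, public_profiles, min_overlap=3):
    """Return public identities whose ratings overlap enough with the record."""
    return [name for name, ratings in public_profiles.items()
            if len(anon_ratings & ratings) >= min_overlap]

print(reidentify(anonymous["user_17"], public_imdb))  # ['alice']
```

The real attack also weighted matches by rating dates and by how rare each rated title was, which is what made even a handful of ratings uniquely identifying.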
It took only a few days to create the clip on a desktop computer using a generative adversarial network (GAN), a type of machine-learning algorithm.
Faith in written information is under attack in some quarters by the spread of what is loosely known as “fake news”. But images and sound recordings retain for many an inherent trustworthiness. GANs are part of a technological wave that threatens this credibility.
Amnesty International is already grappling with some of these issues. Its Citizen Evidence Lab verifies videos and images of alleged human-rights abuses. It uses Google Earth to examine background landscapes and to test whether a video or image was captured when and where it claims. It uses Wolfram Alpha, a search engine, to cross-reference historical weather conditions against those claimed in the video. Amnesty’s work mostly catches old videos that are being labelled as a new atrocity, but it will have to watch out for generated video, too. Cryptography could also help to verify that content has come from a trusted organisation. Media could be signed with a unique key that only the signing organisation—or the originating device—possesses.
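A minimal sketch of such signing, assuming a shared secret key; a real deployment would more likely use public-key signatures (e.g. Ed25519), so that verifiers never hold the secret, but HMAC from the standard library keeps the sketch self-contained:

```python
# Minimal sketch of content authentication with a keyed signature.
# The key and content here are placeholders for illustration.
import hashlib
import hmac

def sign(content: bytes, key: bytes) -> str:
    """Produce a hex signature binding the content to the key holder."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str, key: bytes) -> bool:
    """Check that the content was signed by the key holder and is unmodified."""
    return hmac.compare_digest(sign(content, key), signature)

key = b"device-or-org-secret"
video = b"...raw video bytes..."
tag = sign(video, key)
assert verify(video, tag, key)                   # authentic, untouched
assert not verify(video + b"tampered", tag, key) # any edit breaks the signature
```

The point is that a signature made at capture time travels with the file, so any later manipulation, by a GAN or otherwise, is detectable.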
Industrial revolutions are momentous events. By most reckonings, there have been only three. The first was triggered in the 1700s by the commercial steam engine and the mechanical loom. The harnessing of electricity and mass production sparked the second, around the start of the 20th century. The computer set the third in motion after World War II.
Henning Kagermann, the head of the German National Academy of Science and Engineering (Acatech), did exactly that in 2011, when he used the term Industrie 4.0 to describe a proposed government-sponsored industrial initiative.
The term Industry 4.0 refers to the combination of several major innovations in digital technology.
These technologies include advanced robotics and artificial intelligence; sophisticated sensors; cloud computing; the Internet of Things; data capture and analytics; digital fabrication (including 3D printing); software-as-a-service and other new marketing models; smartphones and other mobile devices; platforms that use algorithms to direct motor vehicles (including navigation tools, ride-sharing apps, delivery and ride services, and autonomous vehicles); and the embedding of all these elements in an interoperable global value chain, shared by many companies from many countries.
Companies that embrace Industry 4.0 are beginning to track everything they produce from cradle to grave, sending out upgrades for complex products after they are sold (in the same way that software has come to be updated). These companies are learning mass customization: the ability to make products in batches of one as inexpensively as they could make a mass-produced product in the 20th century, while fully tailoring the product to the specifications of the purchaser
Three aspects of digitization form the heart of an Industry 4.0 approach.
• The full digitization of a company’s operations
• The redesign of products and services
• Closer interaction with customers
Making Industry 4.0 work requires major shifts in organizational practices and structures. These shifts include new forms of IT architecture and data management, new approaches to regulatory and tax compliance, new organizational structures, and — most importantly — a new digitally oriented culture, which must embrace data analytics as a core enterprise capability.
As Klaus Schwab put it in his recent book The Fourth Industrial Revolution (World Economic Forum, 2016), “Contrary to the previous industrial revolutions, this one is evolving at an exponential rather than linear pace.… It is not only changing the ‘what’ and the ‘how’ of doing things, but also ‘who’ we are.”
This great integrating force is gaining strength at a time of political fragmentation — when many governments are considering making international trade more difficult. It may indeed become harder to move people and products across some national borders. But Industry 4.0 could overcome those barriers by enabling companies to transfer just their intellectual property, including their software, while letting each nation maintain its own manufacturing networks.
more on the Internet of Things in this IMS blog http://blog.stcloudstate.edu/ims?s=internet+of+things
A study, the “Why We Post” project, has just been published by nine anthropologists led by Daniel Miller of University College London. They worked independently for 15 months at locations in Brazil, Britain, Chile, China (one rural and one industrial site), India, Italy, Trinidad and Tobago, and Turkey.
In rural China and Turkey social media were viewed as a distraction from education. But in industrial China and Brazil they were seen to be an educational resource. Such a divide was evident in India, too. There, high-income families regarded them with suspicion but low-income families advocated them as a supplementary source of schooling. In Britain, meanwhile, they were valued not directly as a means of education, but as a way for pupils, parents and teachers to communicate.
How would you respond to this study? How do you see social media? Do you see it differently than before?
On a recent visit in 2015, I found the social media landscape dramatically changed, again. Facebook began actively steering reading practices through changes in 2013 to the News Feed algorithm, which determines content in the site’s central feature. That year, Facebook announced an effort to prioritize “high quality content,” defined as timely, relevant, and trustworthy—and not clickbait, memes, or other viral links. This policy, along with changing practices in sharing news content generally, meant that current events now unfold on and through social media.
How much of your news do you acquire through social media? Do you trust the information you acquire through social media? #FakeNews – have you explored this hashtag? What is your take on fake news?
Meaning management:
Anthropologists and the culturally sensitive analysts take complex bits of data and develop a higher-order sense of them. Information and meaning work at cross purposes. In managing meaning, context is everything while in managing information context is error and noise. When we give our social listening projects to information specialists, we lose an appreciation of context and with it the ability to extract the meanings that provide insight for our companies and brands.
Meaning management also involves a deeper appreciation of social listening as a component of a broader meaning-making system, rather than as, simply, a data source to be exploited.
How do you perceive meaning management? Do you see yourself being a professional with the ability to collect, analyze and interpret such data for your company?
Twitter is updating its top search results so that tweets will be ranked based on relevance instead of by time, bringing those search results in line with what users have experienced on their timeline for the past 10 months. Based on early trials, the company claims there has been more engagement in search results and tweets, with more time spent using the service.
In February, Twitter announced that it was shaking up the user timeline for everyone. Previously, it had offered the algorithmic change as an opt-in program, but it became mandatory a month later. The effort was intended to make the service more appealing to new and casual users.
Twitter’s search was broken into various categories, such as most popular or the latest (a live stream), and segmented by people, photos, videos, and more. Now it appears that there’s one more signal being used to algorithmically control how at least the top tweets are shown to you.
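Twitter has not published its actual ranking signals, but the general shape of such a relevance score can be sketched: weight engagement, then discount it by age. All the weights and the half-life below are invented for illustration.

```python
# Toy relevance ranking: engagement score decayed exponentially by age,
# then sorted descending. None of these weights are Twitter's real ones.
import math

def relevance(tweet, now_hours, half_life=6.0):
    """Engagement (retweets weighted double), halved every half_life hours."""
    engagement = tweet["likes"] + 2 * tweet["retweets"]
    age = now_hours - tweet["posted_at_hours"]
    return engagement * math.exp(-age * math.log(2) / half_life)

tweets = [
    {"id": 1, "likes": 10, "retweets": 1,  "posted_at_hours": 0},
    {"id": 2, "likes": 50, "retweets": 20, "posted_at_hours": 9},
]
ranked = sorted(tweets, key=lambda t: relevance(t, now_hours=10), reverse=True)
print([t["id"] for t in ranked])  # [2, 1]: the newer, busier tweet wins
```

A pure “latest” stream is the degenerate case of this: recency only, engagement ignored.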
more about Twitter in this IMS blog
Alice and Bob have figured out a way to have a conversation without Eve being able to overhear, no matter how hard she tries.
They’re artificial intelligence algorithms created by Google engineers, and their ability to create an encryption protocol that Eve (also an AI algorithm) can’t hack is being hailed as an important advance in machine learning and cryptography.
Google researchers Martin Abadi and David G. Andersen explained in a paper published this week that their experiment is intended to find out if neural networks—the building blocks of AI—can learn to communicate secretly.
As Abadi and Andersen wrote, “instead of training each of Alice and Bob separately to implement some known cryptosystem, we train Alice and Bob jointly to communicate successfully and to defeat Eve without a pre-specified notion of what cryptosystem they may discover for this purpose.”
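For contrast, here is the kind of pre-specified cryptosystem (a simple XOR one-time pad) that the networks were not handed; Alice and Bob had to discover a scheme of their own instead:

```python
# A hand-specified cryptosystem, shown only as a contrast to the learned
# one: a one-time-pad XOR. The networks in the paper were given no such
# recipe, only a shared key and the objective of defeating Eve.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

key = secrets.token_bytes(16)                   # shared by Alice and Bob only
plaintext = b"attack at dawn!!"
ciphertext = xor_bytes(plaintext, key)          # Alice encrypts
assert xor_bytes(ciphertext, key) == plaintext  # Bob decrypts with the key
# Eve, lacking the key, sees only uniformly random-looking bytes.
```

The striking result is not that the networks rediscovered anything this clean, but that they converged on some key-dependent transformation Eve could not invert.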
the same story in German (translated):
Google’s AI independently develops encryption
Google researchers Martin Abadi and David G. Andersen of the deep-learning project “Google Brain” have developed a new encryption method—or rather, have had one developed. The researchers tasked several neural networks with establishing eavesdropping-proof communication.
W3Schools – Fantastic set of interactive tutorials for learning different languages. Their SQL tutorial is second to none. You’ll learn how to manipulate data in MySQL, SQL Server, Access, Oracle, Sybase, DB2 and other database systems.
Treasure Data – The best way to learn is to work towards a goal. That’s what this helpful blog series is all about. You’ll learn SQL from scratch by following along with a simple, but common, data analysis scenario.
10 Queries – This course is recommended for the intermediate SQL-er who wants to brush up on their skills. It’s a series of 10 challenges coupled with forums and external videos to help you improve your SQL knowledge and understanding of the underlying principles.
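The SQL these tutorials teach can be tried immediately using Python’s built-in sqlite3 module; a minimal GROUP BY example with invented data:

```python
# A first taste of the kind of query those tutorials teach, runnable
# anywhere via Python's built-in sqlite3. The SQL itself is portable
# across the database systems mentioned above.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (customer TEXT, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("ann", 30.0), ("ben", 12.5), ("ann", 7.5)])

# Aggregate spending per customer, largest first.
rows = con.execute("""
    SELECT customer, SUM(amount) AS total
    FROM orders
    GROUP BY customer
    ORDER BY total DESC
""").fetchall()
print(rows)  # [('ann', 37.5), ('ben', 12.5)]
```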
TryR – Created by Code School, this interactive online tutorial system is designed to step you through R for statistics and data modeling. As you work through their seven modules, you’ll earn badges to track your progress, helping you stay on track.
Leada – If you’re a complete R novice, try Leada’s introduction to R. In their 1 hour 30 min course, they’ll cover installation, basic usage, common functions, data structures, and data types. They’ll even set you up with your own development environment in RStudio.
Advanced R – Once you’ve mastered the basics of R, bookmark this page. It’s a fantastically comprehensive style guide to using R. We should all strive to write beautiful code, and this resource (based on Google’s R style guide) is your key to that ideal.
Swirl – Learn R in R – a radical idea certainly. But that’s exactly what Swirl does. They’ll interactively teach you how to program in R and do some basic data science at your own pace. Right in the R console.
Python for beginners – The Python website actually has a pretty comprehensive and easy-to-follow set of tutorials. You can learn everything from installation to complex analyses. It also gives you access to the Python community, who will be happy to answer your questions.
PythonSpot – A complete list of Python tutorials to take you from zero to Python hero. There are tutorials for beginners, intermediate and advanced learners.
Read all about it: data mining books
Data Jujitsu: The Art of Turning Data into Product – This free book by DJ Patil gives you a brief introduction to the complexity of data problems and how to approach them. He gives nice, understandable examples that cover the most important thought processes of data mining. It’s a great book for beginners but still interesting to the data mining expert. Plus, it’s free!
Data Mining: Concepts and Techniques – The third (and most recent) edition will give you an understanding of the theory and practice of discovering patterns in large data sets. Each chapter is a stand-alone guide to a particular topic, making it a good resource if you’re not into reading in sequence or you want to know about a particular topic.
Mining of Massive Datasets – Based on the Stanford Computer Science course, this book is often cited by data scientists as one of the most helpful resources around. It’s designed at the undergraduate level with no formal prerequisites. It’s the next best thing to actually going to Stanford!
Hadoop: The Definitive Guide – As a data scientist, you will undoubtedly be asked about Hadoop. So you’d better know how it works. This comprehensive guide will teach you how to build and maintain reliable, scalable, distributed systems with Apache Hadoop. Make sure you get the most recent edition to keep up with this fast-changing technology.
Online learning: data mining webinars and courses
DataCamp – Learn data mining from the comfort of your home with DataCamp’s online courses. They have free courses on R, Statistics, Data Manipulation, Dynamic Reporting, Large Data Sets and much more.
Coursera – Coursera brings you all the best University courses straight to your computer. Their online classes will teach you the fundamentals of interpreting data, performing analyses and communicating insights. They have topics for beginners and advanced learners in Data Analysis, Machine Learning, Probability and Statistics and more.
Udemy – With a range of free and paid-for data mining courses, you’re sure to find something you like on Udemy no matter your level. There are 395 in the area of data mining alone! All their courses are uploaded by other Udemy users, meaning quality can fluctuate, so make sure you read the reviews.
CodeSchool – These courses are handily organized into “Paths” based on the technology you want to learn. You can do everything from build a foundation in Git to take control of a data layer in SQL. Their engaging online videos will take you step-by-step through each lesson and their challenges will let you practice what you’ve learned in a controlled environment.
Udacity – Master a new skill or programming language with Udacity’s unique series of online courses and projects. Each class is developed by a Silicon Valley tech giant, so you know that what you’re learning will be directly applicable to the real world.
Treehouse – Learn from experts in web design, coding, business and more. The video tutorials from Treehouse will teach you the basics and their quizzes and coding challenges will ensure the information sticks. And their UI is pretty easy on the eyes.
Learn from the best: top data miners to follow
John Foreman – Chief Data Scientist at MailChimp and author of Data Smart, John is worth a follow for his witty yet poignant tweets on data science.
DJ Patil – Author and Chief Data Scientist at The White House OSTP, DJ tweets everything you’ve ever wanted to know about data in politics.
Nate Silver – He’s Editor-in-Chief of FiveThirtyEight, a blog that uses data to analyze news stories in Politics, Sports, and Current Events.
Andrew Ng – As the Chief Data Scientist at Baidu, Andrew is responsible for some of the most groundbreaking developments in Machine Learning and Data Science.
Bernard Marr – He might know pretty much everything there is to know about Big Data.
Gregory Piatetsky – He’s the author of popular data science blog KDNuggets, the leading newsletter on data mining and knowledge discovery.
Christian Rudder – As the Co-founder of OkCupid, Christian has access to one of the most unique datasets on the planet, and he uses it to give fascinating insight into human nature, love, and relationships.
Dean Abbott – He’s contributed to a number of data blogs and authored his own book on Applied Predictive Analytics. At the moment, Dean is Chief Data Scientist at SmarterHQ.
Practice what you’ve learned: data mining competitions
Kaggle – This is the ultimate data mining competition. The world’s biggest corporations offer big prizes for solving their toughest data problems.
Stack Overflow – The best way to learn is to teach. Stack Overflow offers the perfect forum for you to prove your data mining know-how by answering fellow enthusiasts’ questions.
TunedIT – With a live leaderboard and interactive participation, TunedIT offers a great platform to flex your data mining muscles.
DrivenData – You can find a number of nonprofit data mining challenges on DrivenData. All of your mining efforts will go towards a good cause.
Quora – Another great site to answer questions on just about everything. There are plenty of curious data lovers on there asking for help with data mining and data science.
Meet your fellow data miner: social networks, groups and meetups
Facebook – As with many social media platforms, Facebook is a great place to meet and interact with people who have similar interests. There are a number of very active data mining groups you can join.
LinkedIn – If you’re looking for data mining experts in a particular field, look no further than LinkedIn. There are hundreds of data mining groups ranging from the generic to the hyper-specific. In short, there’s sure to be something for everyone.
Meetup – Want to meet your fellow data miners in person? Attend a meetup! Just search for data mining in your city and you’re sure to find an awesome group near you.
Yochai Benkler explains: “The various formats of the networked public sphere provide anyone with an outlet to speak, to inquire, to investigate, without need to access the resources of a major media organization.”
Democratic bodies are typically elected in periods of three to five years, yet citizen opinions seem to fluctuate daily and sometimes these mood swings grow to enormous proportions. When thousands of people all start tweeting about the same subject on the same day, you know that something is up. With so much dynamic and salient political diversity in the electorate, how can policy-makers ever reach a consensus that could satisfy everyone?
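That intuition (“when thousands of people all start tweeting about the same subject on the same day, you know that something is up”) can be sketched as a simple burst detector; the mention counts below are invented:

```python
# Toy burst detector: flag a topic when today's mention count far
# exceeds its recent baseline. The factor of 3 is an arbitrary choice.

def is_spike(daily_counts, factor=3.0):
    """Flag the last day if it exceeds factor x the average of earlier days."""
    *history, today = daily_counts
    baseline = sum(history) / len(history)
    return today > factor * baseline

mentions = [120, 110, 130, 125, 900]   # hashtag mentions per day
print(is_spike(mentions))  # True: the final day is a clear burst
```

Real trend-detection systems add seasonality corrections and significance tests, but the core signal is the same: a sharp departure from the topic’s own baseline.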
At the same time, it would be a grave mistake to discount the voices of the internet as something that has no connection to real political situations.
What happened in the UK was not only a political disaster, but also a vivid example of what happens when you combine the uncontrollable power of the internet with a lingering visceral feeling that ordinary people have lost control of the politics that shape their lives.
Polarization as a driver of populism
People who have long entertained right-wing populist ideas, but were never confident enough to voice them openly, are now in a position to connect to like-minded others online and use the internet as a megaphone for their opinions.
The resulting echo chambers tend to amplify and reinforce our existing opinions, which is dysfunctional for a healthy democratic discourse. And while social media platforms like Facebook and Twitter generally have the power to expose us to politically diverse opinions, research suggests that the filter bubbles they sometimes create are, in fact, exacerbated by the platforms’ personalization algorithms, which are based on our social networks and our previously expressed ideas. This means that instead of creating an ideal type of a digitally mediated “public agora”, which would allow citizens to voice their concerns and share their hopes, the internet has actually increased conflict and ideological segregation between opposing views, granting a disproportionate amount of clout to the most extreme opinions.
The disintegration of the general will
In political philosophy, the very idea of democracy is based on the principle of the general will, which was proposed by Jean-Jacques Rousseau in the 18th century. Rousseau envisioned that a society needs to be governed by a democratic body that acts according to the imperative will of the people as a whole.
There can be no doubt that a new form of digitally mediated politics is a crucial component of the Fourth Industrial Revolution: the internet is already used for bottom-up agenda-setting, empowering citizens to speak up in a networked public sphere, and pushing the boundaries of the size, sophistication and scope of collective action. In particular, social media has changed the nature of political campaigning and will continue to play an important role in future elections and political campaigns around the world.
more on the impact of technology on democracy in this IMS blog: