I was recently recommended an article by a colleague, “Information Literacy in the Age of Algorithms” (Head et al., 2020). The article reviews survey data to ascertain how aware students are of the effect algorithms have on their research activities and socializing online. I was surprised to read that the students surveyed were well informed but felt relatively helpless to change the situation. That got me ruminating on my own experiences. I also feel well informed about this topic. I take precautions: I have my browser clear all cookies, history, and cache when it closes; I don’t interact much with social media; I use search engines other than Google. However, I know that I am still being shown a tiny bubble tailored to my tastes. I don’t think about it all of the time, but every now and then it rankles, as it did in a previous post about Big Data Insults and Failures.
I was surprised that the article reported a high level of student awareness because I don’t quite believe students are at all times cognizant of how their online activities feed the search engine and social media algorithms that, in turn, are used to nudge them toward activities that fit those algorithms. I don’t really believe any one of us is aware, all the time, of how online services have been built to manipulate our behavior. The stated purpose of these algorithms may be to show us what we want to see and, perhaps, to sell us items we want to buy: a helpful purpose, one that helps the hapless searcher wade through an infinitude of search results. What the algorithms actually do, however, is give me an echo chamber that consistently tells me my views, my experiences, my values, and my desires are right, and normal, and common among my colleagues, friends, neighbors, family, and community.
Echo chambers can be comforting, like talking out a day’s frustration with a friend who sees your point. But in this age of aggressive partisanship, protest, and ‘fake news,’ echo chambers are dangerous. My tiny bubble, my echo chamber, does not give me truth. It does not give me objectivity or the benefit of a wider viewpoint. The algorithms that build my bubble are only attempting to manipulate me into thoughts and actions that fit it. I’m not sure it matters who wrote the algorithms or why I am being manipulated. What matters is that I am trapped.
You Are Trapped, Too
You reside in your own tiny echo chamber bubble no matter how carefully you go about your online activities. Your search results do not show you everything. They do not bring you truth. We each must seek truth and objectivity for ourselves. We must question the answers we find and look for dissenting opinions. Most of all, we must realize that everyone we speak to and connect with is equally blinded by their own bubble. It’s not that your neighbor stubbornly refuses to see the truth of the matter; it’s that your neighbor is incapable of seeing the same truth you have been fed, and vice versa. In this world of manipulated information, how is it possible for any one of us to say with certainty that we are correct and they are incorrect?
I’m not going to go all out and say “believe nothing and trust no one.” But I do think that once we realize how our experiences are being manipulated, it behooves us to work a little harder to verify the information we are given before we adopt it as our own truth. We have to work a little harder to give each other the benefit of the doubt.
Readings
- Head, A. J., Fister, B., & MacMillan, M. (2020). Information Literacy in the Age of Algorithms: Student Experiences with News and Information. Project Information Literacy. https://www.projectinfolit.org/uploads/2/7/5/4/27541717/algoreport.pdf
- Merrill, J. B. (2016). “Liberal, Moderate or Conservative? See How Facebook Labels You.” The New York Times. https://www.nytimes.com/2016/08/24/us/politics/facebook-ads-politics.html
2 Comments
All our perspectives are more myopic than we realize, and internet algorithms narrow the focus even further. One thing that sometimes makes me angry and at other times makes me laugh is when the algorithms get it wrong. I’m shown something the AI thinks I’d like or want to buy, and it couldn’t be further from the truth. In a perverse way, this gives me hope that human behavior isn’t always as predictable as expected. I hope so, anyway.
I never thought that the algorithms getting it wrong could be a hopeful situation, but now that I think of it, I agree.