Writing Portfolio


The Filter Crisis: How Algorithms Are Tearing Us Apart

The Invisible Cage

In the age of infinite information, you would think we’d be better informed, more connected, and more empathetic than ever. Instead, many of us are angrier, more suspicious, and more divided. What went wrong? The answer lies in the invisible cages built around us: algorithmic filter bubbles.

Social media algorithms, originally designed to “personalize” our experiences, have become powerful engines of division. They sort, filter, and curate our online worlds until we are trapped in digital echo chambers, seeing only what confirms our biases and rarely what challenges them.

Studies from Pew Research reveal that social media users increasingly rely on platforms like TikTok, Instagram, and Facebook for news, often unaware of how algorithms subtly limit their exposure to diverse viewpoints. As Eli Pariser warned in his TED Talk, these invisible filters do not just shape what we know; they shape who we become. In doing so, they have helped create the polarized, fragile society we now inhabit.

The Filter Bubble Concept

The concept of the “filter bubble” was introduced by Eli Pariser in his influential 2011 TED Talk, where he warned of a future where the internet would show us only what it thought we wanted to see, not what we needed to see.

Over a decade later, that prediction has materialized in full force. Algorithms now dominate how billions of people receive their information, quietly shaping online environments to match individual preferences and biases.

Research from Pew confirms that social media platforms like Facebook, Instagram, and TikTok are no longer just spaces for entertainment or connection; they have become primary news sources for millions.

However, what users often fail to realize is that these platforms do not prioritize a balanced or broad set of news stories. Instead, content is filtered based on past behaviors, clicks, and viewing times, creating a personalized but highly limited view of the world.

The idea is simple but powerful: the more an algorithm can predict what will keep you engaged, the longer you stay. This design has led to users being fed information streams that reinforce their existing opinions, while rarely introducing contrasting perspectives. According to a 2024 Pew Research report, while users believe they are encountering diverse viewpoints, many are actually experiencing a narrowing of their informational world without even realizing it.
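To make that feedback loop concrete, here is a deliberately toy Python sketch of engagement-based ranking. Platform rankers are proprietary and far more sophisticated, so treat this as an illustration of the loop's shape rather than anyone's actual system: past clicks decide what surfaces next, and what surfaces next generates more of the same clicks.

```python
# Toy engagement-based feed ranker (illustrative only; real platform
# ranking systems are proprietary and vastly more complex).
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    text: str

def predicted_engagement(post: Post, clicks: dict) -> float:
    # Score a post by how often the user has clicked its topic before.
    total = sum(clicks.values()) or 1
    return clicks.get(post.topic, 0) / total

def rank_feed(posts: list, clicks: dict) -> list:
    # Topics you already click on float to the top, so the feed
    # narrows toward your existing interests with every session.
    return sorted(posts, key=lambda p: predicted_engagement(p, clicks), reverse=True)

clicks = {"politics_left": 40, "sports": 5, "politics_right": 1}
feed = [Post("politics_right", "Opposing op-ed"),
        Post("politics_left", "Agreeable op-ed"),
        Post("sports", "Game recap")]
for post in rank_feed(feed, clicks):
    print(f"{post.topic}: {post.text}")
```

Run this loop over many sessions, feeding each click back into the history, and the top of the feed converges on the topics already clicked; that convergence is the narrowing Pew's respondents do not notice.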

As Pariser emphasized, the danger is not just that we become uninformed but misinformed. Over time, filter bubbles harden into ideological silos, fostering suspicion and hostility toward those who think differently.

Instead of encountering a healthy mix of ideas, people increasingly live within curated realities that seem self-evident, unchallengeable, and absolute.

Consequences of Filter Bubbles

The dangers of filter bubbles are no longer theoretical. They manifest in the real world, fueling division, spreading misinformation, and undermining trust in institutions. As social media became a primary news source for millions, it also became a breeding ground for increasingly polarized narratives.

A 2024 Pew Research report highlights that Americans are turning to platforms like TikTok, Instagram, and Facebook not just for entertainment, but for serious news consumption. The problem is that these platforms are designed to prioritize engagement, not accuracy.

According to a CNN Business report, algorithm-driven content can rapidly push users toward more extreme viewpoints with just a few interactions, creating a slippery slope into conspiracy theories and radical ideologies.

The 2016 U.S. presidential election and the Brexit referendum offered some of the first major global examples of how misinformation, amplified within filter bubbles, could influence major political outcomes.

The pattern only worsened. The BBC reported that during major political events, false stories often outperformed factual ones on social media, particularly when anger or fear was involved.

Perhaps the most extreme example came on January 6, 2021, when the United States Capitol was stormed by rioters fueled by widespread misinformation and extreme online echo chambers.

These individuals were not simply misinformed; they had lived for months within algorithmic environments that continually reinforced and radicalized their views.

Filter bubbles, left unchecked, do not just distort individual worldviews: they can erode the foundations of our society. When people can no longer agree on basic facts, civil debate becomes impossible, and division hardens into violence.

Issues with Personalization

Defenders of algorithmic curation often argue that personalization enhances the online experience. After all, who would not prefer a feed filled with content tailored to their interests rather than a chaotic stream of irrelevant information? In theory, personalization saves time, reduces cognitive overload, and makes the internet feel more intuitive and accessible.

There is truth to that argument. Platforms like TikTok, Instagram, and Facebook thrive because they deliver what users want to see with remarkable efficiency. According to a 2024 Pew Research survey, a significant portion of Americans value getting news on social media because of its convenience.

A customized feed can introduce people to niche interests, connect them with communities they might never find otherwise, and make overwhelming amounts of information manageable.

However, convenience comes at a hidden cost. What starts as a personalized experience can quickly morph into an isolated one. A study by DuckDuckGo revealed that even when logged out or browsing in private mode, users still encountered different search results based on inferred profiles, showing that filter bubbles persist beyond social media. Over time, this narrowing of information limits exposure to diverse perspectives and creates the illusion that one’s worldview is universally shared and correct.

Furthermore, while personalization may work harmlessly for product recommendations or entertainment, it becomes dangerous when applied to political information, public health guidance, or social issues.

In these arenas, exposure to a variety of viewpoints is not a luxury; it is a democratic necessity. Without it, public discourse fractures, and the shared reality needed for civil debate begins to dissolve.

Personalization is not inherently bad. However, without safeguards and transparency, it becomes a tool of division rather than connection.

Bursting the Filter Bubbles

Bursting the filter bubbles that trap online users will require deliberate action from both technology companies and individuals. While algorithms will never disappear completely, we can demand and create healthier digital environments where diverse perspectives are not the exception but the norm.

First, tech companies must take real responsibility for how their platforms shape public discourse. Transparency is essential. Users deserve to understand how their feeds are curated and to have greater control over the algorithms that filter their information.

Efforts are slowly beginning. TikTok, for instance, announced it would test content diversification features in an attempt to expose users to a broader range of material, reducing the chances of algorithmic echo chambers. Experiments like these are promising but need to become standard practice, not isolated exceptions.

Second, users must play an active role. Becoming a conscious consumer of information means deliberately following accounts with different viewpoints, cross-checking facts, and recognizing when a feed feels too comfortable.

GCF Global’s digital media literacy resources stress the importance of stepping outside one’s usual informational bubble. Without proactive effort, it is all too easy to mistake curated reality for objective truth.

Additionally, education systems must adapt to the new information landscape. Media literacy should be taught early and often, giving students the tools to recognize bias, question sources, and seek a diversity of information. In a world where algorithms are gatekeepers of knowledge, critical thinking is no longer optional; it is a civic duty.

Fighting back against filter bubbles will not happen overnight. But by demanding transparency, practicing conscious media consumption, and educating the next generation, we can begin to reclaim a healthier, more connected digital world.

Closing Thoughts

The internet was supposed to be humanity’s greatest tool for connection, learning, and understanding. Instead, the unchecked rise of algorithmic filter bubbles has contributed to one of the most divided and distrustful periods in modern history.

The information we receive is no longer shaped by what is most important, but by what is most engaging, profitable, or predictive of our behavior. Eli Pariser warned that invisible filters would quietly shape our perceptions without our consent or awareness.

Today, his warning echoes loudly. Studies show that even when people believe they are encountering diverse viewpoints online, algorithms continue to narrow their exposure behind the scenes. When left unchallenged, these bubbles do not just misinform individuals; they fracture communities and corrode democratic societies.

Bursting the filter bubble will not be easy. It requires effort, transparency, education, and a shift in how we think about information itself. But the cost of inaction is far greater. If we continue to let algorithms dictate the boundaries of our knowledge, we risk losing the shared understanding that holds societies together.

The first step toward breaking free is simple but powerful: recognizing the bubble exists. Only then can we start to rebuild a digital world grounded not in division, but in empathy, curiosity, and truth.

Works Cited

Pariser, Eli. “Beware Online ‘Filter Bubbles.’” TED, Mar. 2011, https://www.ted.com/talks/eli_pariser_beware_online_filter_bubbles.

Walker, Mason. “How Americans Get News on TikTok, X, Facebook and Instagram.” Pew Research Center, 12 June 2024, https://www.pewresearch.org/journalism/2024/06/12/how-americans-get-news-on-tiktok-x-facebook-and-instagram/.

Walker, Mason. “Many Americans Find Value in Getting News on Social Media, but Concerns about Inaccuracy Have Risen.” Pew Research Center, 7 Feb. 2024, https://www.pewresearch.org/short-reads/2024/02/07/many-americans-find-value-in-getting-news-on-social-media-but-concerns-about-inaccuracy-have-risen/.

“Why YouTube’s Algorithm Will Never Stop Recommending Extremist Content.” YouTube, uploaded by CNN Business, 19 Apr. 2021, https://www.youtube.com/watch?v=doWZHFnVPQ8&ab_channel=CNNBusiness.

DuckDuckGo. “Measuring the Filter Bubble: How Google Is Influencing What You Click.” Spread Privacy, 5 Dec. 2018, https://spreadprivacy.com/google-filter-bubble-study/.

“Facebook Promises Action Over Fake News.” BBC News, 30 Nov. 2017, https://www.bbc.com/news/world-us-canada-42187596.

“Digital Media Literacy: What Is an Echo Chamber?” GCFGlobal, https://edu.gcfglobal.org/en/digital-media-literacy/what-is-an-echo-chamber/1/.


Ringling College Tackling AI

“AI may not have any place in any individual’s artistic practice, but it would be a failure on our part to not educate our students about these tools.” – Rick Dakan

As artificial intelligence continues to reshape industries worldwide, Ringling College of Art and Design is positioning itself at the forefront of creative education’s evolving landscape. In May 2024, the college announced the launch of its new Artificial Intelligence Undergraduate Certificate Program, the first of its kind at an art and design school. Now, nearly a year later, the program is underway, providing students with the tools and ethical awareness needed to navigate an increasingly AI-influenced creative world.


The certificate program, which began in Fall 2024, offers a structured three-course sequence designed to give students a strong foundation in both AI technology and its societal and artistic implications. Students are required to complete one core course—Fundamentals of AI—and two electives that explore creative and ethical applications of AI, with more electives being added.

Rick Dakan, AI Coordinator at Ringling College, played a leading role in developing the program. “A group of us back at the end of spring 2023, a group of faculty, were seeing the tools coming out,” Dakan explained. “We were like, this is a huge deal, and it impacts everything we do at Ringling.”


Inspired in part by the University of Florida’s AI initiative, which integrates AI across all disciplines, Ringling’s program was crafted specifically from the perspective of creative professionals. “What separates the Ringling program from other schools is that we are actually less technical,” Dakan said. “The program at University of Florida comes out of Computer Science, and ours comes from the mindset that AI is a set of tools and how they will impact art and design.”


For Dakan and other faculty members, the goal is not to mandate AI usage but to ensure students understand its potential and limitations. The final project for the required Fundamentals of AI course asks students to write a personal AI manifesto—reflecting on how, or even if, they want AI to be part of their creative process. “A perfectly valid answer to that is nowhere,” Dakan noted.

Reactions among students reflect a range of opinions about the college’s integration of AI. First-year Creative Writing major Grey expressed a cautiously neutral stance. “When I first heard Ringling was including an AI certificate, I wasn’t surprised or negative about it,” Grey said. “To me, AI seemed like a useful tool that should be understood and utilized safely in this day and age.”


While Grey occasionally uses AI tools like ChatGPT to analyze and refine their writing, they remain rooted in traditional, hands-on creative methods. “Traditional is always best to me,” Grey added. “The majority of my creative life has been shaped by a hands-on approach, and I’d be remiss to let go of that.” However, Grey also acknowledged broader concerns, particularly regarding AI’s impact on the job market and the need for clear regulations. “I do think that laws need to be firmly established with AI and soon.”


Second-year Game Art major Kho shared similar mixed feelings. Initially unsure how AI would fit into an art school setting, Kho noted that classroom discussions helped clarify its role. “I’ve seen a lot of AI artworks and heard a lot of people complaining about using AI in art industries,” Kho said. “But if people know how to use AI properly, I think it might be helpful.”


Kho believes understanding AI is essential for future career opportunities. “AI has been affecting art-related jobs a lot already. Learning how to use AI can give artists another way to continue their art career,” they said. However, Kho emphasized that using AI responsibly is key, suggesting clear guidelines be established to prevent over-reliance. “Abusing AI in the art industry will reduce artists’ creativity and make the art industry less creative.”

Since the announcement of the AI Certificate Program, one of the key questions raised within the Ringling community has been whether artificial intelligence belongs in a creative academic environment. Early on, both students and faculty expressed skepticism about how AI fits into the artistic process and whether it aligns with the college’s traditionally hands-on approach to art and design education.


Rick Dakan acknowledges those concerns, having heard similar sentiments when discussions about AI first began. “I’ve heard from many students and some faculty, less now, that AI has no place in an art school,” Dakan said. However, he notes that this is not the first time the college has faced resistance to technological change. “The same conversation occurred with Photoshop, and I don’t think anyone is thinking we shouldn’t have started teaching that.”


The college’s stance is focused on education rather than enforcement. Rather than mandating the use of AI, Ringling prioritizes giving students the tools and knowledge to make their own informed decisions. Dakan emphasizes that while AI may not fit into every student’s creative process, understanding its capabilities and limitations is crucial as the industry evolves.

Ringling’s approach comes amid increasing reports from alumni that employers are already asking about AI literacy. Faculty members also recognize that industries like animation, advertising, and game design are rapidly integrating AI tools into their workflows. Dakan suggests that within the next few years, AI will be capable of performing most digital tasks at the same level as humans, leaving students to decide how they want to stand out creatively.


Looking ahead, Ringling College remains committed to balancing the advancement of AI technology with the preservation of artistic integrity. Whether students choose to adopt AI or not, the college’s proactive approach ensures that they graduate equipped to make informed decisions.

As Dakan put it, “There is no one right answer for any major or one right answer for any student. It’s just going to depend on what they want to do and what industries they want to go into.”


Cybersecurity Article Tweets Spotlight

I. 404 Media – AI Spam and Facebook’s Future
Are we losing the ethical guardrails of media to a lack of AI literacy? Zuckerberg apparently “loves” low-effort AI spam while Meta is flooded with misleading content. Scary to think Meta envisions this AI slop as the future of social media.

🔗https://www.404media.co/zuckerberg-loves-ai-slop-image-from-spam-account-that-posts-amputated-children/

II. EFF – The Dangers of AI-Powered Face Scans
👀 AI-powered face scans claim to detect age, emotions, and even honesty. In reality, this tech is inaccurate, biased, and a serious privacy threat. Should we really trust AI with decisions about identity and security?

🔗https://www.eff.org/deeplinks/2025/01/face-scans-estimate-our-age-creepy-af-and-harmful

III. The Markup – TikTok’s Misinformation Problem
🚨 AI-driven misinformation is spreading on TikTok, misleading migrants with false legal advice that could derail asylum claims. Social media’s disinformation problem comes with dire real-world consequences—who’s accountable?

🔗https://themarkup.org/languages-of-misinformation/2024/09/26/tiktok-videos-spread-misinformation-to-new-migrant-community-in-new-york-city


Data Brokers: How Your Personal Information is Bought and Sold

I. Introduction

Imagine walking into a Costco-sized warehouse, but instead of bulk groceries, the shelves are filled with your personal data—names, addresses, browsing history, and even your shopping habits, all neatly packaged and ready for sale. This is how data brokers operate. They collect, analyze, and sell personal information to advertisers, insurance companies, and even political groups. While this might sound like a harmless marketing tool, the reality is that online data privacy is at serious risk. This article explores how data brokers work, the dangers they pose, and what you can do to reclaim control of your digital footprint.

II. What Are Data Brokers?

Data brokers are companies that collect and sell personal data from public and private sources without consumers’ direct knowledge. Unlike platforms such as Google and Facebook, which collect data through direct user interactions, data brokers compile personal information from voter records, online purchases, web browsing habits, and even social media interactions (Kaspersky).

Think of data brokers as digital middlemen, selling these consumer profiles in bulk, similar to how warehouse stores sell products at discounted rates. Their clients range from advertisers and financial institutions to risk assessment firms and fraudsters. The privacy concerns are real: most individuals have no idea their personal details are being exchanged like commodities in a global marketplace.

III. How Data Brokers Work: Collecting and Selling Your Information

How do data brokers work? They gather personal information through:

  1. Public records – voter registration, real estate transactions, court filings.
  2. Commercial data – loyalty programs, retail purchases, online orders.
  3. Online behavior tracking – cookies, ad clicks, search history.
  4. Social media – profile details, interactions, interests.

Once compiled, these profiles are sold for various uses, from targeted ads to background checks and fraud detection. However, not all buyers have ethical intentions. A BleepingComputer report revealed how executives at Epsilon Data Management knowingly sold personal data to scammers. These fraudsters then crafted highly targeted scams, resulting in thousands of victims losing significant amounts of money. This highlights the critical need for stronger online data privacy laws and consumer awareness about how data brokers work (BleepingComputer).
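To make the aggregation step concrete, here is a toy sketch of how scattered records could be merged into a single salable profile. Every record and the naive name-plus-ZIP matching are invented for illustration; real brokers apply probabilistic identity resolution across billions of records.

```python
# Illustrative broker-style profile aggregation. All data is invented.
from collections import defaultdict

voter_records = [{"name": "Jane Doe", "zip": "34236", "party": "Independent"}]
retail_data   = [{"name": "Jane Doe", "zip": "34236", "purchases": ["vitamins", "stroller"]}]
web_tracking  = [{"name": "Jane Doe", "zip": "34236", "searches": ["mortgage rates"]}]

def aggregate(*sources):
    """Merge records from many sources into one profile per person."""
    profiles = defaultdict(dict)
    for source in sources:
        for record in source:
            # Naive matching on name + ZIP code; real brokers use far
            # richer identity-resolution signals than this.
            key = (record["name"], record["zip"])
            profiles[key].update(record)
    return profiles

for person, profile in aggregate(voter_records, retail_data, web_tracking).items():
    print(person, "=>", profile)
```

Even this crude merge shows why combined profiles are worth more than their parts: a voter roll, a loyalty card, and a search log individually reveal little, but together they sketch a life.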

IV. The Risks of Data Brokers: Identity Theft and Beyond

The sale of personal data carries major risks:

  1. Data Brokers and Identity Theft – Your personal details can be stolen and used to open fraudulent accounts.
  2. Targeted Scams – Sophisticated phishing scams use data broker profiles to make fake emails and calls more convincing.
  3. Discrimination – Companies may use this data to deny loans, insurance, or job opportunities.
  4. Lack of Transparency – Most people have no idea their data is being collected and sold.

A Newsweek report revealed that some data brokers sell information on ethnicity, financial status, and health conditions, which can be exploited by bad actors. Unlike in the European Union, where strict data privacy laws like GDPR give individuals control over their information, the U.S. lacks federal-level protections, allowing this industry to thrive unchecked.

V. How to Protect Your Online Data Privacy from Data Brokers

While you can’t completely erase yourself from data broker databases, you can limit exposure:

  1. Opt-out of major data brokers – Resources like PrivacyRights.org provide opt-out guides.
  2. Use privacy-focused tools – Try DuckDuckGo, Startpage, and Firefox for tracking-free browsing.
  3. Limit online sharing – Be cautious about posting personal details on social media.
  4. Check privacy settings – Adjust your browser and app settings to block tracking.

Data brokers profit from consumer inaction, so taking even small steps can significantly reduce your digital footprint. The less data available, the harder it is for brokers to compile and sell your information.
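Because opting out is a broker-by-broker chore that often has to be repeated, it helps to keep a record of what you have already filed. Below is a minimal personal checklist script; the broker names are examples of well-known firms, and the actual opt-out procedures live on each broker’s website (PrivacyRights.org maintains current guides).

```python
# Minimal opt-out checklist tracker. The broker list is illustrative;
# consult PrivacyRights.org for current opt-out procedures.
import json
from datetime import date
from pathlib import Path

BROKERS = ["Acxiom", "Spokeo", "Whitepages", "BeenVerified", "Epsilon"]
STATUS_FILE = Path("optout_status.json")

def load_status() -> dict:
    return json.loads(STATUS_FILE.read_text()) if STATUS_FILE.exists() else {}

def mark_done(broker: str) -> None:
    # Record the date an opt-out request was submitted for this broker.
    status = load_status()
    status[broker] = str(date.today())
    STATUS_FILE.write_text(json.dumps(status, indent=2))

if __name__ == "__main__":
    status = load_status()
    for broker in BROKERS:
        state = f"requested {status[broker]}" if broker in status else "TODO"
        print(f"{broker}: {state}")
```

Opt-outs lapse and new brokers appear, so a periodic re-check, even one this simple, does more than a single cleanup pass.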

VI. Can You Sue a Data Broker? Legal Protections and Challenges

Many people ask, “Can you sue a data broker?” The answer is complicated. In the U.S., there are very few legal options for individuals looking to reclaim their public personal information. The California Consumer Privacy Act (CCPA) is one of the few laws that allow residents to request the deletion of their data, but most states offer no protection (CBS News).

Some legal cases have succeeded. Epsilon Data Management paid $150 million in penalties after selling consumer profiles to fraudsters. However, unless a broker is proven to be directly involved in fraud, suing them is nearly impossible. This legal gray area leaves most Americans vulnerable, reinforcing the need for stronger online data privacy laws at the federal level.

VII. Conclusion: The Fight for Stronger Online Data Privacy Protections

The data broker industry operates with little oversight, collecting and selling public personal information without consumer consent. While some companies claim this benefits marketing and fraud prevention, the risks of identity theft, scams, and data-driven discrimination far outweigh any benefits.

Understanding how data brokers work is the first step in protecting your privacy. But real change requires stronger regulations and consumer advocacy. Until then, opting out and taking control of your data is your best defense. Online privacy is a right, not a privilege—take action now.

Works Cited

BleepingComputer. “Data Firm Execs Convicted for Helping Fraudsters Target the Elderly.” BleepingComputer, 2025, https://www.bleepingcomputer.com/news/legal/data-firm-execs-convicted-for-helping-fraudsters-target-the-elderly/.

CBS News. “Former Colorado Data Company Executive Convicted of Mail and Wire Fraud, Sold Data on Millions of People.” CBS News, 2025, https://www.cbsnews.com/colorado/news/former-colorado-data-company-executive-convicted-mail-wire-fraud/.

Kaspersky. “How to Stop Data Brokers from Selling Your Personal Data.” Kaspersky, 2025, https://usa.kaspersky.com/resource-center/preemptive-safety/how-to-stop-data-brokers-from-selling-your-personal-information.

Newsweek. “The Secretive World of Selling Data About You.” Newsweek, 2025, https://www.newsweek.com/secretive-world-selling-data-about-you-464789.

Kaspersky. “How to Stop Data Brokers from Selling Your Personal Data.” Kaspersky, 2025, https://usa.kaspersky.com/resource-center/preemptive-safety/how-to-stop-data-brokers-from-selling-your-personal-information.Newsweek. “The Secretive World of Selling Data About You.” Newsweek, 2025, https://www.newsweek.com/secretive-world-selling-data-about-you-464789.