Humans create and share vast amounts of data that can be used to manipulate every aspect of life—for good and not-so-good purposes. Data and their associated algorithms can serve governments, corporate entities, criminal networks, conspiracy theorists and social media mob mentality.

Rapidly emerging digital technologies develop faster than the structures meant to govern them, threatening the very fabric of society: spreading misinformation on a massive scale and undermining democracy, peace, privacy and well-being.

Simon Fraser University’s (SFU) Michael Filimowicz studies new and emerging technology with a critical lens. He teaches and researches human-computer interaction, sound design, media arts and experimental phenomenology—the study of human experiences—at SFU’s School of Interactive Arts and Technology (SIAT). He is an interdisciplinary artist, musician, sound designer and a prolific scholar, creator and author.

One of his recent projects is a 10-book series, Algorithms and Society, which takes a broad view of the information age. The series aims to stay abreast of the new controversies and social issues that come with the development of new technologies. The first five volumes were published last year, and five more books are forthcoming in May 2023. (Access the series online; SFU computing ID required.)

All of the books explore a vital aspect of the information age—from studying the code that creates systemic bias to discussing regimes that limit access to information. The third book, Digital Totalitarianism, began to trend internationally last year. In a post-COVID world, the control and influence of communications technology continues to pose risks to democracy, threatens freedom of thought and creativity, and contributes to the rise of conspiracy theories.

We spoke to Michael Filimowicz about Digital Totalitarianism and the Algorithms and Society book series.

Can you explain how algorithms work? How can they be manipulated to serve regimes?

The algorithmic architectures and infrastructures addressed by the series generally work automatically, obscurely and ubiquitously. They are everywhere, doing a great deal without us knowing much about them. Algorithms are often referred to as ‘black boxes’ for this reason, and organizations and regimes often—if not typically—prefer a certain amount of obfuscation around them, so that there is an asymmetry of power and knowledge between citizens and companies or governments. The series aims to intervene in that asymmetry.

Chapter 4 of Digital Totalitarianism explains that conspiracy theories are not a new phenomenon, but have taken on “new algorithm-driven potency as part of general information disorder.” Why did conspiracy theories take hold during the pandemic? Can anything be done to curb the spread of misinformation?  

Conspiracy theories provide simple narratives as an alternative to complex realities—it can be easy to experience information overload from the many competing media sources available. Understanding the science behind vaccines, for example, requires a certain grasp of empirical methods and how experiments work, which is something that does not always take hold in one’s education. It would seem that many prefer science fiction—such as Bill Gates trying to sneak microchips into our bodies via vaccines—to scientific methods, and the pandemic introduced new stressors—fear, anxiety, uncertainty—that prompted many to seek out alternative realities, typically through social media channels.

Computational regimes can be harmful by intent or by design—by limiting access to information, for example, or through programming bias that reflects affluent Western values. How does algorithmic bias work?

It depends on the context. In generative artificial intelligence (AI) art, for example, bias is often a result of the training data. AI image-making tools, in my experience, mostly generate images of white people by default, and getting diversity into the output often requires writing specific prompts to get systems like DALL-E 2 or Stable Diffusion to generate non-white people—which they certainly can do, just not without explicit instruction. Another common way bias enters the situation is that social and cultural contexts and effects are not taken into account. Bias can also increase over time through automated feedback loops, since system output can be fed back as input that reinforces the bias.
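
To make the feedback-loop mechanism concrete, here is a minimal, hypothetical Python sketch (an editor's illustration, not code from the book series) of a recommender that retrains on its own engagement data. The group names, click model and numbers are invented for the example; the point is that a small initial skew in exposure compounds round after round because the system's output becomes its next training input.

```python
import random

# Hypothetical illustration of an automated feedback loop: a recommender
# that retrains on its own engagement data. All names and numbers are
# invented for this sketch.
exposure = {"A": 0.55, "B": 0.45}  # current share of recommendations per group

def simulate_round(n_impressions: int = 10_000) -> None:
    """One cycle of recommend -> observe clicks -> retrain on clicks.

    Users can only click on what they are shown, so click counts scale
    with exposure. Retraining on those counts then hands the more-shown
    group an even larger share: output fed back in as input.
    """
    clicks = {group: 0 for group in exposure}
    for _ in range(n_impressions):
        shown = random.choices(list(exposure), weights=list(exposure.values()))[0]
        # Click-through also rises with a group's visibility, a simple
        # stand-in for familiarity and popularity effects.
        if random.random() < exposure[shown]:
            clicks[shown] += 1
    total_clicks = sum(clicks.values()) or 1  # avoid division by zero
    for group in exposure:
        exposure[group] = clicks[group] / total_clicks

random.seed(0)
for round_number in range(1, 9):
    simulate_round()
    print(f"round {round_number}: A={exposure['A']:.2f}  B={exposure['B']:.2f}")
# The initial 55/45 split compounds toward roughly 100/0, even though no
# single step is explicitly biased against group B.
```

Real systems are of course far more complex, but the same dynamic appears whenever engagement data shaped by a model is recycled as that model's training data.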

As a scholar of digital technologies over the past several decades, what has surprised you most about the information age?

There seems to be an increase in literalistic thinking as people become less able to hold multiple, conflicting meanings in mind at the same time. This is something literature is really great at doing—forcing us to consider life and the world as rich in contradictory ambivalences, with meanings operating at different levels. The information age tilts towards eliminating ambivalence, contradiction and layered meaning, generally pushing us towards over-simplified interpretations. The socio-technical forces of the information age want us to be very good at clicking virtual buttons to buy things, liking and sharing content, or reading and writing in brief, disconnected spurts within the tiny text boxes of our user interfaces, rather than thinking deeply or acting reflectively—though there is nothing inherent in information technology that prevents critical or subtle thinking.

For more: Visit Michael Filimowicz’s personal website.

SFU's Scholarly Impact of the Week series does not reflect the opinions or viewpoints of the university, but those of the scholars. The timing of articles in the series is chosen weeks or months in advance, based on a published set of criteria. Any correspondence with university or world events at the time of publication is purely coincidental.

For more information, please see SFU's Code of Faculty Ethics and Responsibilities and the statement on academic freedom.