Photo: Dr. Muhammad Atique – Shaping The AI Conversation For A More Thoughtful Digital Future
Navigating The Algorithmic Age With Thoughtful Insights
Dr. Muhammad Atique explores AI’s impact on society, human behaviour, and digital culture, offering practical strategies and ethical perspectives to balance technology and real-life connections in an algorithm-driven world.
Dr. Muhammad Atique stands as one of the most compelling voices in the modern discourse on artificial intelligence, digital media, and the evolving impact of technology on society. An academic and Fellow of Advance HE (UK), he brings unparalleled insight into the intersections of technology, culture, and human behaviour. With a remarkable ability to dissect complex ideas and present them with clarity, his work serves as a guiding light in a world increasingly shaped by algorithms and AI.
His latest book, Algorithmic Saga: Understanding Media, Culture, and Transformation in the AI Age, exemplifies his intellectual brilliance. This thought-provoking work delves into the profound technological shifts of our time, offering readers both guidance and critical reflection in navigating the algorithm-driven world. From addressing ethical concerns surrounding AI to exploring the psychological nuances of “delusionships,” Dr. Atique tackles some of the most urgent concerns of our age with empathy and rigour. The book is as much a roadmap as it is a wake-up call, encouraging individuals to reclaim balance in their digital lives while wrestling with the broader questions of human values and technology.
It is a privilege to feature this enlightening conversation with Dr. Muhammad Atique on Reader’s House. His insights are not only timely but essential, challenging us all to reflect on the deeper implications of our relationship with technology and the ever-evolving digital landscape.
Dr. Atique is a visionary thinker transforming complex AI, media, and cultural challenges into accessible wisdom for everyday readers.
What inspired you to write Algorithmic Saga: Understanding Media, Culture, and Transformation in the AI Age?
Technology is advancing at an unprecedented pace, leaving people simultaneously excited and apprehensive about the future. Questions arise—will machines replace humans entirely? Will writers be reduced to mere oversight roles? This uncertainty, combined with a growing realization that something fundamental has shifted in how we live, think, and connect with one another, inspired me to write. I noticed that technology was no longer just supporting our lives but quietly shaping our values, emotions, and decisions. I also began noticing, for example, how people would form their opinions on public issues based entirely on what appeared in their social media feeds, often without encountering opposing viewpoints. As someone working across media, communication, and public policy, I felt a responsibility to step back and ask what this transformation really means for ordinary people. This book grew out of that urgency: an attempt to make sense of an algorithm-driven world in human terms and to strike a balance between technology use and real life.
How do you see algorithms shaping human behaviour and societal values in the future?
Algorithms are already influencing what we pay attention to, what we believe, and even how we feel. In the future, I see them becoming even more embedded in everyday life, shaping norms around success, relationships, productivity, and identity. The danger is not domination by machines, but quiet dependence on technology. When values prioritize screen engagement over meaning, society risks losing depth, empathy, and critical thinking—which is why we must consciously intervene and reflect.
Can you elaborate on the concept of “delusionships” mentioned in your book?
Delusionships refer to emotionally intense connections that feel authentic but lack mutual depth or a shared reality. These relationships often emerge through digital platforms, parasocial bonds, or algorithm-driven intimacy. A common example is when individuals feel deeply connected to influencers or online personalities who do not even know they exist. What interested me was how technology amplifies emotional attachment without responsibility or reciprocity. I am not arguing that digital connections are false, but that they can become psychologically unbalanced if we confuse visibility, validation, or constant interaction with genuine human connection.
How do you address ethical concerns surrounding Artificial Intelligence in your book?
I approach ethics as a lived issue, not just a technical one. Instead of abstract principles, I focus on how AI affects dignity, autonomy, privacy, and fairness in everyday life. For example, automated hiring systems may silently exclude qualified candidates based on biased data without anyone questioning the outcome. I ask who benefits from algorithmic decisions and who is made invisible by them. The book encourages readers to see ethics as an ongoing social conversation, where responsibility belongs not only to engineers but also to institutions, policymakers, and users.
What role do you think governments should play in regulating AI and algorithms?
Governments must act as guardians of the public interest, not passive observers of technological change. Regulation should focus on transparency, accountability, and human rights rather than innovation for its own sake. Some governments are creating specific rules to prevent children under sixteen from using AI to complete their assignments. I think this is a very positive step to protect and strengthen children’s cognitive problem-solving abilities. I believe governments should ensure that algorithms used in shortlisting CVs, public services, media, and healthcare are explainable and fair. A real-world example is the growing concern over AI tools used in welfare systems by different governments to approve or deny benefits without clear explanations. A more recent example is Anthropic, a US artificial intelligence company, which agreed to pay $1.5bn to settle a class-action lawsuit brought by book authors who say the company used pirated copies of their works to train its chatbot. At the same time, regulation must remain flexible enough to evolve alongside technology.
Can you share any insights into the impact of AI on media and digital culture that surprised you during your research?
What struck me most was how profoundly algorithmic logic has transformed creativity itself. Media culture increasingly rewards speed, outrage, and replication rather than reflection or originality. I also noticed how creators often alter their opinions or tone simply to remain visible in algorithm-driven feeds and to fit platform expectations, often without realizing it. Culture is no longer just expressed online; it is actively engineered there.
What practical strategies do you recommend for readers to maintain balance in the AI-centric digital era?
I advocate for small, intentional practices over extreme digital detachment. I would suggest digital fasting—setting boundaries around screen time—and curating information sources to restore focus and emotional balance. For example, turning off non-essential notifications can dramatically improve concentration and, within a few days, reduce the modern anxiety associated with FOMO. When talking with chatbots, apply your own critical thinking to the problem first, and only then turn to them for help; do not blindly copy and paste their recommendations or answers. Also remember that when reading online, our minds mostly scan information, so we retain it only briefly, whereas reading a paper book engages the mind differently and improves memory. More importantly, I encourage readers to reflect on why they engage with technology and what they actually want from it. As we know, awareness is the first step toward control.
How do you foresee the evolution of Artificial General Intelligence (AGI), as discussed in your book?
AGI is often framed as a distant or dramatic breakthrough, but I believe its influence will emerge gradually through increasingly autonomous systems. The real challenge will not be intelligence itself but alignment with human values. If society does not engage seriously with questions of purpose, responsibility, and ethics now, AGI could deepen existing inequalities and power imbalances rather than solve them.
What challenges did you face while blending technology, culture, and societal issues in a single narrative?
The biggest challenge was avoiding two extremes: being overly technical or overly abstract. For instance, translating complex algorithmic concepts into everyday experiences like dating apps or news feeds took careful rewriting. I wanted the book to speak to general readers without oversimplifying complex ideas. Finding real-life examples that connected theory to lived experience required constant revision. The goal was coherence, not breadth for its own sake.
Which audience do you hope to reach the most with this book, and why?
I especially hope to reach readers who feel overwhelmed by digital life but lack the language to explain why. These include students, professionals, educators, and everyday users who sense something is off but are unsure how to respond, or whether the problem lies with them or with the technology itself. The book is meant to empower them with understanding rather than fear.
How do you think people’s understanding of media and communication will change within the next decade?
I believe people will become more critical of how information reaches them. Media literacy will shift from an optional skill to a survival tool. For example, future audiences may routinely question whether a video, voice recording, or news story is authentic before trusting or sharing it. As deepfakes, synthetic media, and algorithmic persuasion increase, understanding media systems will be as important as understanding language itself.
What advice would you give to aspiring authors looking to write about complex topics like AI and digital transformation?
I would advise starting with curiosity, not conclusions. Write for humans first, not experts. If you cannot explain an idea clearly, you probably do not understand it deeply enough yet. I always recommend beginning with everyday observations, such as how people use their phones, before moving into theory. Most importantly, stay grounded in real experiences, because technology only matters insofar as it affects human lives.

