AI as a human artefact: a powerful vision seeking focus

Review by Ivan Uemlianin

David Eliot’s Artificially Intelligent attempts an ambitious synthesis of AI history, technical explanation, surveillance critique, and political economy.  His approach is anchored by a humanistic conviction that artificial intelligence is fundamentally a story about people, institutions, and choices.

The book’s genuine insight emerges in its insistence on AI as socially constructed technology.  Eliot opens not with neural networks or training data but with Muḥammad ibn Mūsā al-Khwārizmī, the ninth-century mathematician whose name gave us “algorithm,” situating computational thinking within specific historical, cultural, and institutional contexts.  He moves through Ada Lovelace’s mathematical education and constrained opportunities, Alan Turing’s wartime desperation and post-war persecution, the Cold War funding structures that shaped early AI research, and the contemporary corporate imperatives driving deployment decisions.  Throughout, the argument is insistent: AI systems encode the values, constraints, and power relations of the people and organizations that produce them.

This humanistic perspective surfaces repeatedly in Eliot’s prose.  He wants readers to understand that “the decisions about AI are not being made democratically,” that “we build the systems that they are introduced into,” that even in “a world dominated by machines, humans, and our decisions, are at the centre.” These convictions are earnest and important.  They position AI not as autonomous technological force but as the accumulated result of specific choices made by specific people operating within specific institutional constraints.

Eliot seems to want to go beyond this.  His humanistic conviction represents the book’s real contribution, but it remains underdeveloped, buried beneath a sprawl of competing themes and agendas that prevents any single thread from achieving the depth it deserves.  The phrase “this book” appears twenty-five times ‒ an unusual frequency in so short a text ‒ suggesting both the author’s intense investment in the project and an unresolved tension about whether the book is delivering on its aims, or even stating them fully.

What the book lacks is a theoretical framework to harness this evident motivation.  Eliot gestures toward a social constructivist analysis but never develops the conceptual apparatus to sustain it.  The biographical sketches remain illustrative rather than analytical; the institutional critiques lack any systematic account of how organizational forms produce technical outcomes.

Conway’s Law ‒ the principle that systems reflect the communication structures of the organizations that design them ‒ has become canonical wisdom in tech culture.  Originally formulated by programmer Melvin Conway in 1968, it offers a concrete mechanism linking organizational form to technical outcome. For Eliot’s purposes, it could root sociological critique in practitioners’ own understanding of how their work is constrained, while bridging to broader sociotechnical systems theory.

Activity Theory ‒ rooted in Lev Vygotsky’s cultural-historical psychology and its Marxist emphasis on how human consciousness is shaped by material practice ‒ analyses how human activity is mediated by tools and structured by social relations.  Developed today by scholars like Yrjö Engeström, Annalisa Sannino, and Christian Fuchs, it provides frameworks for understanding how technologies embody the contradictions and power relations of the systems that produce them.  In such a critique, as in Eliot’s work, artificial intelligence would be treated not as an autonomous artefact but as a crystallization of organizational practice.

Such frameworks would transform scattered observations into coherent argument: showing not just that al-Khwārizmī’s inheritance algorithm reflects Islamic legal structures, or that Google’s systems reflect surveillance capitalism’s imperatives, but explaining the mechanisms by which historical and institutional contexts determine technical design.  The book intuits that AI is deeply human; it hasn’t yet developed the theoretical tools to demonstrate how this works, why it matters, and what can be done about it.

Unfortunately, the author’s promising humanistic attitude is diffused across so many stories competing for the reader’s attention.  Artificially Intelligent attempts to be simultaneously: popular history of computation (from ninth-century Baghdad to 1950s neural networks), technical primer (explaining perceptrons, symbolic AI, deep learning), surveillance state critique (featuring the East German Stasi and contemporary biometric tracking), labour economics analysis (automation, displacement, retraining), consumer technology survey (Apple Vision Pro, Meta’s smart glasses), and political manifesto (championing the Luddites, calling for democratic governance).  The result is a fragmented narrative with flattened explanations.

The over-generous inclusion reflects a failure to decide what is not relevant.  Does the biographical material on Lovelace and Turing belong?  Perhaps.  The institutional analysis of how Google and DeepMind’s structures shape their research priorities?  Probably.  Potted, familiar stories of the early days of the internet, the World Wide Web, and Apple with, without, and again with Steve Jobs?  Possibly not.  Deciding what to exclude is as important, in clarifying a theme and an argument, as deciding what to include.

This kind of boundary problem is common to much recent popular AI writing.  Books like Kate Crawford’s Atlas of AI, Brian Christian’s The Alignment Problem, and Dennis Yi Tenen’s Literary Theory for Robots similarly attempt comprehensive coverage of technical, historical, ethical, and political dimensions.  But established scholars can leverage institutional authority and prior work to sustain such breadth.  A first book by a young scholar ‒ Eliot is a PhD student in criminology ‒ requires sharper focus to establish a distinctive contribution.  The impulse to demonstrate mastery across the entire field, to prove command of both ninth-century mathematics and contemporary consumer electronics, is understandable but counterproductive.

The author’s criminology background explains the emphasis on surveillance and state power ‒ core concerns of the discipline ‒ but these sections feel like competent summaries rather than original analysis.  The real originality lies in the humanistic thread, which doesn’t require disciplinary credentials to develop, only theoretical clarity and analytical focus.

Artificially Intelligent is an unfocused book, not a failed one.  The distinction matters. Eliot has identified something genuinely important: that understanding AI requires understanding the institutional contexts and communication structures, the power relations and human activities that shape its development.  This insight deserves theoretical elaboration and sustained analysis.  A young scholar who can recognize that AI is fundamentally about people has the foundation for significant work.


Ivan Uemlianin is a computer programmer working in cybersecurity. His PhD (1995) was on Vygotsky and Ilyenkov and the relevance of their ideas to current (in 1995) debates in first language acquisition.
