December 31 | 8 minute read

Part 1: What is Computationalism?

By: Mike Bird

Computationalism has its roots in the machine processes and automation of the early Industrial Revolution, though its antecedents date back further. As part of the move from a predominantly domestic system of goods manufacture to a factory system, pioneers of early industrial technology developed machines that were in effect pre-programmed, or ‘hard-wired’, to repeatedly perform the processes necessary for production. These ingenious engines automated and increased the pace of production, replacing the hands of human artisans who had in many cases laboured for years, and across generations, to learn their craft. Automation in industries like transportation, iron production, and cloth manufacture stimulated mass production in other sectors, which led to the development of new and more complex mechanical processes throughout the 19th and early 20th centuries. Increasingly sophisticated devices, like mechanical pianos that could ‘read’ music fed into them on punched cards, were precursors to modern computers. Computing devices today work in much the same way, though their processes are digital operations rather than physical mechanisms.

The extraordinary achievement of computational research and development is not only the technology that has made the machinery and hardware possible, but the rendering of complex activities into programmable code, or software, which the machines read and follow. This code must specify everything, including all possible contingencies, and allow for no ambiguity or vagueness, for the machine on its own, in the absence of any code at all, is not capable of making judgements or interpretations of ambiguous information. This places an enormous responsibility onto the programmers themselves: they have to identify algorithms which break down skilled activities that once required humans to exercise intelligence or careful judgement, and then render them into unambiguous, programmable code. Perhaps one day we will be able to render all human endeavours into specifiable rules and coding information. The utopian dream of AI and robotics is that this would liberate humans from the tiresome obligations of their lives so that they could spend their time in leisure. Such predictions were always naïve, and there is still an enormous way to go before anything nearing this is attainable. One clear demonstration is the increasingly stubborn failure of the algorithms currently used on social media to consistently and accurately identify fake information circulating on their platforms. Irrespective of whether this stems from the commercial incentives powering social media or from a technological limit on capability, it highlights the critical importance of the ethical acceptability of the direction computationalism may take us. It should also remind educators wishing to prepare students to access such content of the central importance of developing critical and discerning capacities.
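The demand for complete, unambiguous specification can be made concrete with a small sketch (the function and its thresholds are invented purely for illustration): a program can only handle the cases its rules anticipate, and every boundary must be decided in advance by the programmer.

```python
def classify_temperature(celsius):
    """Classify a temperature reading using fully explicit rules.

    A human might hedge about a borderline reading; the machine
    cannot. Every boundary and every fallthrough case has to be
    decided in advance by the programmer.
    """
    if celsius < 0:
        return "freezing"
    elif celsius < 15:
        return "cold"
    elif celsius < 25:
        return "mild"
    elif celsius < 35:
        return "hot"
    else:
        return "extreme"

# A person might call 14.9 °C "mild-ish"; the program cannot equivocate.
print(classify_temperature(14.9))  # → cold
```

The point is not the particular thresholds but that some threshold must be written down: the judgement a skilled human makes implicitly has to be made explicit, in full, before the machine can act at all.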

It is not the purpose of this post to dwell on the potential ramifications of this, though clearly it is a critical concern. Instead, in relation to schooling and human learning, let us consider for a moment the model that computationalism offers as a metaphor for the human mind. Can we learn anything about the operations and complexity of the human mind from computational devices?

Concepts from computationalism and from the organisation of industrial production are clearly recognisable in schools and have influenced thinking about human learning. Notions from computing that are now very familiar in popular discourse have parallels with notions from cognitive science: long-term memory, cognitive load, and working memory. Crude as these parallels may be, and although they do not do justice to the complexity of cognitive science as a field, what is significant is the underlying notion of learning: knowledge or expertise exists independently and externally in some abstracted specification, and is fed into a device, or encoded in a learner’s long-term memory, in order to bring about learning and capability, and by implication, social, economic and cultural capital.

We can also see within the structures of schooling the influence of industrial-era factory organisation. When mass schooling first appeared in Western countries in the late 19th and early 20th centuries, employment in industrial manufacturing was at its apogee; the great majority of children would have gone on to work in factories. Schools needed to prepare their pupils for the production lines, specifications, vertical management hierarchies, quality controls, shifts, and inputs and outputs of factory work, and so it made sense for easily recognisable equivalents to be present in school settings.

A number of other recognisable assumptions about learning follow from this. The first is that knowledge, both substantive knowledge and procedural knowledge or ‘know-how’, has to be predetermined and made fully explicit before it can be input and encoded in a learner’s mind. Rather like computing code, ambiguity or uncertainty is to be avoided in favour of crystal-clear instruction. The second is that any demonstration of knowledge or capability can only come after this internalisation process is complete. In other words, knowledge acquisition must precede capability, in the same way that we would not expect a computer to perform processes it had not been programmed to perform. We can even see this assumption at work in the way we seem to privilege the educational needs of young children over those of adults: the presumption, perhaps, is that young children need to know and understand how the world really works before they can ever hope to perform in it. For a rich treatment of this idea, and a far deeper exploration than could be included here, Eric Bredo’s (1999) paper is a must-read, particularly the sections under ‘cognition as symbol processing’.


Bredo, E. (1999). Reconstructing Educational Psychology. In P. Murphy (Ed.), Learners, Learning and Assessment (pp. 23-45). London: Sage.
Bruner, J. (1997). The Culture of Education. Cambridge, MA: Harvard University Press.
Casey, G., & Moran, A. (2012). The Computational Metaphor and Cognitive Psychology. Irish Journal of Psychology, 10(2), 143-161. doi:10.1080/03033910.1989.10557739
Hammersley, M. (2006). What’s Wrong with Ethnography? New York: Routledge.