
Video interview with David Bates

Society – Technology – People

Prof. David Bates, PhD, Berkeley, analyses the link between humans and technology as an indissoluble, symbiotic bond in our cultural development. This bond, he argues, is equally a necessary foundation for societal progress. Bates says: "There is a danger in not evolving and there is a danger in evolving"; for him, "interruption, error and slippage" are the fundamental driving forces of technological innovation. The American engages intensively with artificial intelligence and its development, as well as with automation from the perspective of cultural theory.

Text version of the video (length: 29:04 min.)

30.05.2018 | BIBB

Society – Technology – People: Theory interviews on the relationship between societal and technological change

 

Interview with Prof. David Bates, PhD

This interview was filmed in London on 30 May 2018. The interviewer was Michael Tiemann. It is part of the BIBB research project "Polarisierung von Tätigkeiten in der Wirtschaft 4.0 - Fachkräftequalifikationen und Fachkräftebedarf in der digitalisierten Arbeit von morgen", funded by the BMBF.

More information can be found here:
Theorieinterviews

 

Where do we find sources for technological change and social division of labour?

Well, that's the biggest question and therefore the most difficult, so it will take a little bit to explain how I approach that problem. I think if you had a very strong theoretical commitment, a dogmatic one, a Marxist one say, you could easily point to certain factors that would be the key causal factors. And similarly, if one took an economic approach, a more conservative one, we would have a similar kind of causal network. But of course we know that it is very difficult to pinpoint one single causal factor. If you look at maybe the most important technology of the last century, the digital computer, we know that it was born of military investment and required a huge amount of both capital and personnel during the Second World War. Then we also have to admit that technologies often appear as the result of innovative thinking. So again with the computer, whether it was Alan Turing in Britain or Konrad Zuse in Germany, we would also want to point to individual, intellectual factors as well. So soon we would arrive at a fairly catholic position, historically speaking: we would have to point to many factors that work together to explain technological innovation. And I think that's very unsatisfying. It's not satisfying because we lose sight of what technology actually is in itself. What is it? How does it work? We have to think about that very carefully. And it's also dangerous, because that kind of blended historical approach, pointing to multicausal factors, also assumes a certain technical neutrality. It assumes that technology is in fact a somewhat neutral instrument employed for many different purposes and for different kinds of reasons. So I think we really have to think technology in itself, without committing to a dogmatic formula that says this particular external factor is what determines it. This is why we could look back, without at all committing to a Heideggerian perspective, to Heidegger's very important essay "The Question Concerning Technology", to see that the question he was posing was absolutely important. We have to think technology in its own terms. What is the essence of technology? And I think we can also follow him, without all of his implications, by pointing to the fact that technology, understood in the way that you ask this question, assumes a certain kind of thinking about causality: that something causes something else, obviously taking his cue from Aristotle. What's most important for me is his pinpointing the fact that a technology is a making, it's an artificial process, a human process of making. So "techne" is "poiesis", as he says. And once we think about it in that way we realize that technology will always be something more than just its material form or its use value. It also has a purpose. This is important for at least two reasons. The first is that – as he shows in his essay – we cannot separate a technical object from its social contexts, its religious contexts, or whatever defines its purpose. But we also can't separate it from its material forms either. These are all constraining dimensions. What I take from Heidegger is the idea that the technical object has an independent existence, because it brings all of these different factors together into one organization. And what makes it a technical object is its internal coherent technical organization. So the second implication is that technology, as an independent organization, has the capacity to develop and evolve on its own.
In other words: technologies can evolve beyond their initial appearance in various systems or social, political or economic contexts. What I would first want to say is that if we look at the long history of technology, right from the origin of the human as a toolmaker through important revolutions in technology such as writing and the printing press and so on, we can see that we cannot isolate technology from the cultural history of the human. Not only is technology obviously entangled, in the sense that sociopolitical systems require technology in order to maintain and organise themselves; technologies also have a direct and very strong influence on the way that humans think. And this is something that is often left out of the more sociological approaches to technology: it is not simply that a human mind uses a tool such as the pen to make records and then perhaps could use the technology in another way. It's that the human mind is formed through its technologies. So these very important revolutions in the history of technology – let's say the invention of writing, co-instantiated with the origin of the state – are also the origin of a whole new way of thinking. So unlike any other being, the human is the site of multiple instantiations and multiple individuations. A theory of creativity – and this is something that my own work is organised around – must therefore pay attention to the intrinsically artificial character of the human, its dependence on automatic systems, or at least systems that have a certain kind of automaticity, and must think of creativity not as a spiritual freedom, where insight is somehow mysterious or miraculous, but emphasize instead concepts such as interruption, error and slippage. These are the words that I've been using in my own work. What makes the human incredibly rich in potential is the ability to slip between these systems, or in fact to put systems into relationships that are not predetermined by the automatic operation of the systems themselves. Error, the straying from the norm, opens up possibility. Interruption is not so much an escape as a slowing down of automaticity. And slippage moves between different ways of thinking about our worlds, given that our psyche is in some ways a site for these multiple individuations or identities. #00:07:08-6#

 

Who is driving technological change and social division of labour?

What we have to say against Heidegger is that humans do not have a way of life, a way of thinking, outside of technology. So the idea that modern technology is an infringement on something more purely human, more natural, is I think philosophically a mistake, but also a danger, as many examples show. So what I've been interested in lately is drawing on theorists who were writing around the same time as Heidegger (after WWII) and really taking seriously the idea of the evolution of technology. Not the evolution of social and economic systems and their technologies, but the evolution of technology itself. One of the thinkers we can point to, the famous demographer and evolutionary theorist Alfred Lotka, writing in 1945, explains that humans evolve in two ways. One, through their long, slow genetic evolution, like any other animal. But they also evolve quite rapidly through the appearance of what he calls artificial technologies, artificial extensions of human capacities. And this he calls exosomatic evolution, in other words: evolution outside the body. The best example is the digital revolution. Not just because of its technical brilliance or the amazing, sublime performance of network computing, but for quite a different reason. First of all, there is no question that digital technologies have been more disruptive, in the sense of transforming their networks of organization more rapidly. So one thing we have to take into account is the extreme speed of the digital revolution. That speed is not just equivalent or parallel, analogous to other technical revolutions, which have also been very disruptive – or, maybe that's a negative word – which have been profoundly transformative. What's different about the digital revolution is that it's not simply one technology that evolved in a particular context and then developed implications. It has spread and embedded itself in every single layer, every single system of the human world – and the natural world, we could add. So that's quite different from technical systems in earlier epochs, like the industrial revolution, the invention of writing, printing, where clearly the technology had profoundly transformative effects in different spheres, but where we could often track those relationships much more directly, and where there were also places where the spheres were kept at least separate. The logics were separate. So printing affected religion and maybe profoundly transformed it, but religion still had a kind of separate organizational sphere. So the tool was used by religion, but of course religion was also shaped by the tool, and there was this symbiotic relationship. Now, what's different in the digital revolution is not so much that it has spread, but that digital technology has now organized all of the systems. And not just on the level of banking, but even the most intimate spheres of human existence. So when I speak of systems, I mean not just social systems but actual individual psychic systems. Those digital technologies are different, in an alien kind of way, in that they are now defining and perhaps even transforming the logic of those systems into digital systems. What does that mean? If we again pay attention to the fact that technologies have a profound effect on the way that we actually think as cognitive beings, the digital is different, not only because it directly affects our minds in so many ways, but because it does so persistently.
Unlike the pen, you could argue that the computer is always with us, or at least it's something that we face in an intimate way for longer and longer periods of time. That is clearly important. But it is also with us in hidden ways. If we individuate different systems by habituating ourselves in the world in many different ways, then even when we're not interfacing with a computer screen, the systems themselves are organized by digital technologies. So every time we interface with the world in different ways, every time we perform these different habituations, they are governed by the logic of the digital. And this is again happening not by humans putting things together; the digital technologies themselves are what facilitate exchange and organization. So when we say that the logic of the digital has actually "infected" different systems of society, the psyche, politics, we mean that it has become necessary to organize those spheres in relationship to the technologies. Just like writing did in the ancient state. However, it is different in that the political system is tied to the economic system, tied to the psychic system, tied to the culture industry and so on in ways that have never happened before. And I don't mean tied in a kind of vague way, but literally connected. What does that mean for the future? It means that these systems – especially since they run automatically, in ways that are not really conformable to human action – become more and more dominant in our lives. But it is not simply an external, alien technology. They become more dominant in our lives and therefore govern individuation at all levels. But I think, again, we tend to underestimate the importance of digital technologies as facilitators of exchange, networking and communication. And this is something early theorists of new media were interested in: that code is, in some ways, more openly neutral than any other medium, because it can translate all media. Friedrich Kittler said the same thing in a very prophetic way back in the 80s in his book "Gramophone, Film, Typewriter": the modern medium is different because it is capable of essentially translating any medium. So the logic of the organization and flow of information in digital technology will encompass all of the media. But now, as we see, it also encompasses all the different ways we organize ourselves in human society. #00:14:15-2#
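Kittler's translation point can be made concrete in a few lines. The sketch below is an editorial illustration, not part of the interview, and its payloads are invented toy stand-ins for writing, sound and image: once digitized, one and the same medium-blind operation handles them all.

```python
# Editorial sketch (not from the interview) of the point attributed to
# Kittler: the digital medium can translate any medium, because every
# medium becomes the same thing, bytes. Toy payloads, invented here.

text  = "hello".encode("utf-8")                     # writing -> bytes
audio = bytes([128, 130, 127, 125, 129])            # PCM-like samples -> bytes
image = bytes([255, 0, 0, 0, 255, 0, 0, 0, 255])    # RGB pixels -> bytes

def transmit(payload: bytes) -> bytes:
    """One channel, one logic: any medium, once encoded, is just bytes."""
    return bytes(payload)   # storing, copying, sending are medium-blind

for medium in (text, audio, image):
    assert transmit(medium) == medium   # the channel cannot tell them apart
```

The design point is only that the channel is indifferent to what the bytes once were; it is this reduction of all media to one code that lets digital logic organize every sphere it touches.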

 

What consequences will arise from technological change?

One of the ways we often think about this problem is in terms of the replacement of human beings. Not to diminish the importance of this as a reality, but conceptually there is the idea that certain jobs will be taken over, first of all, by industrial robots, smarter robots, more flexible robots – a lot of the work being done now, even at Berkeley, is on human-friendly robots, for example, that will learn from humans and then replace them. Learn as in not simply repeating the motions, but actually learning all the things that humans now do on factory assembly lines because of their dexterity. That's one of the important new objects of robotics. Similarly, if we take one of the most profound revolutions of the last half-century – the emergence of artificial intelligence – we know that certain things can be done by machines. So we start to hear about certain forms of work also being replaced by AI, in the realm of research for sure, but also in the legal field. So the future of work can't be disentangled from the future of politics and culture and the future of technology. The evolution of the digital can accelerate and intensify some of the negative consequences, or is already intensifying many of them. And from the broadest perspective, I would say, the negative consequence – one that does look back to Heidegger – is that the digital is so good at transforming other systems into ones that are calculable and predictable. The ways in which computers are now capable of organizing systems and making them coherent, integral and predictable – automatic, we might say – are very sophisticated and often don't look like the old mechanisms of the past. Not only are they often invisible, they are often also very flexible. The profound achievements of machine learning and deep learning, new forms of statistical modeling – in the neurosciences and fields like sociology, but also in economic and political organisation – are very subtle, flexible and adaptive, but they are still about calculation and prediction. So just the idea of disruption, the idea of novelty: even where we can demonstrate that a technology is an evolution for the best in terms of the technology itself (a faster computer or a more efficient car), that does not mean, without very careful study, that these things will necessarily be good for the social systems that depend on them. And I think the best example of that is the car. The introduction of the automobile and its infrastructure then becomes a kind of metastasis: a social system that now serves the vehicle rather than being stabilized by it. So that would be my main concern: not just not following the implications, but recognizing that we don't know the implications. Of course, there is also the case of social and other systems being destabilized by technology that no longer works. This is equally a difficulty. If I were looking for a frame for a solution to that problem, it would be one that recognizes that technologies play fundamental roles in many different situations that we are not always aware of. And yet, at the same time, their roles can change. Technology is not neutral, and it also has no determined value on its own. So that means we have to be very careful, first of all, about the idea that a technology's evolving is necessarily good in the sense of biological evolution, as if it proved in some way that something has been tested.
Because, in fact, in a modern society those technologies are not being tested in a kind of neutral field but are highly entangled in many different places. There is a danger in not evolving and there is a danger in evolving. Which sounds unsatisfying, but I think the first step would be to ask: what is it that we are trying to produce at the university, or in design schools, or in technology firms? And the first thing would be to recognise that technical evolution is something that has an orientation, a teleology, but that some of the entanglements are quite hidden. #00:19:10-9#

 

How are drivers and consequences of technological change connected?

So this is a way of saying: "Oh well, the human is a special kind of being that requires what we might call programming" – requires a certain determination in order to become what it is, as a thinking subject, as a social subject, an ethical subject, and so on. For me there is a much more theoretical foundation, which is that the brain is an open system. That means it is genetically determined as a web of synaptic possibility, but it only becomes what it is through experiencing the world. This is where I take my cue from thinkers such as André Leroi-Gourhan, a French paleoanthropologist, and thinkers like Bernard Stiegler, who has taken up Leroi-Gourhan's work to argue that what makes a human a human subject is the possibility of the exteriorization of thinking and the interiorization of thinking – and in fact those two are the same thing. In other words, we can put thought outside of ourselves and store it in organized forms. That's what animals are not capable of doing. Which is to say, we can also interiorize organizations of thinking. And that allows us to develop as individuals not simply via the plasticity of our brain, but also by what we might call an alien logic within our brain. Literally, we think other people's thoughts. This is what learning is in a cultural or human sense. It is not simply observing and imitating; it is actually taking in an alien system of thought. And this, I think, is especially important for the field of AI and its projection into the next decade. And that is the incredible scope and scale of what we now call deep learning, machine learning: the possibility of not simply collating and organising data, but actually intelligently discovering new forms of knowledge, new concepts, whole new structures – not only ones that the computer helps us find, but things that humans cannot even conceive. It's not even possible for us to actually understand them. This is profoundly new: we have inhuman knowledge. No longer are we simply artificially intelligent; we now have machinic processes that can produce forms of intelligence that are not capable of being interiorized by humans. The logic of our systems has always been somewhat opaque to us as social individuals – or, if we think in religious terms, there is the impossibility of really knowing the "outside" – but now we know this to be the case. We know that we are capable of producing systems that create knowledge that is not human. But we also rely on that knowledge, and this is, I think, what's quite new and quite dangerous, and people are beginning to see the dangers of it: what does it mean to rely on knowledge that we ourselves cannot in fact understand? It's not simply black boxes in the sense of "I don't understand how my iPhone works". This is quite new. These systems of machine learning are all predicated on prediction. The idea of uncovering these structures is in order to better predict the performance of the world, whether it's humans, machines or the natural world. Modeling is a way of extracting a certain kind of structure and then predicting. But what's profoundly insidious about many of these systems is that they are at once predictive and prescriptive. Not only do they predict behaviours; they will make these predictions come true, by themselves and automatically. So this is not a matter of applying a technology anymore.
This is really that the systems themselves are now designed and constructed not only to predict but to take prescriptive actions, to create in some ways the prophecy they have made. Now, the important point here is that in this new era of the digital, the more connected people are through this sort of pseudo-collective, the more possibilities of prediction and prescription are going to be introduced. And that doesn't simply mean a kind of alien instrument that will coerce us, a kind of Deleuzian control society; rather, we will, as individuals, become more amenable to these kinds of prescriptions. We will lose the capacity to provide these translations and slippages between different spheres. And that's really what I would reduce my conceptual framework to if I had to: the real danger of disruptive technology, in the Silicon Valley sense of the idea that the introduction of certain digital capacities will improve all these operations, is that it corrupts their logic – it creates more unity, more homogeneity, but not at the collective level of shared human experience; rather through solicitations and organizations derived from the technology itself. So, to put it another way, a truly disruptive technology would be one that trains the mind to operate in ways other than the tool itself. And that's a profound challenge, but I think it's going to be essential for the future of education and the future of work in the next decade. #00:24:51-5#
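The predictive-prescriptive loop Bates describes can be illustrated with a toy feedback simulation. The sketch below is an editorial illustration, not Bates's model; every name and parameter (the click probability, the nudge factor) is an invented assumption. It shows how a preference that starts out independent of the system drifts toward the system's own prediction.

```python
# Editorial toy simulation (not Bates's): a system that is at once
# predictive and prescriptive. Its estimate of a user's preference
# selects what is shown, and what is shown nudges the preference
# toward the estimate. All names and parameters are invented.

import random

def simulate(steps: int = 200, nudge: float = 0.05, seed: int = 1):
    random.seed(seed)
    preference = random.random()   # the user's "true" interest, initially independent
    estimate = 0.5                 # the system's prior prediction

    for _ in range(steps):
        # Predictive step: update the estimate from observed behaviour.
        clicked = random.random() < preference
        estimate += 0.1 * ((1.0 if clicked else 0.0) - estimate)

        # Prescriptive step: content chosen from the estimate feeds back
        # into the preference, making the prediction partly self-fulfilling.
        preference += nudge * (estimate - preference)

    return preference, estimate

pref, est = simulate()
print(f"preference={pref:.2f} has drifted toward estimate={est:.2f}")
```

Even a small nudge per step is enough for the "true" preference to converge on the estimate over time, which is the self-fulfilling dynamic, prediction becoming prescription, described above.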

 

What measures can be taken to steer technological change?

So it is very difficult for us to go back to an older, say liberal, concept of the state that reaffirms the value of separation from society in order to steer it. We have already seen a profound merging of the social, the political and the economic, and it is accelerating today. And we've also seen the reorganization of the boundaries of the political nation-state form and so on – and not just in processes of globalization, but in new processes of information technology that have also transformed those boundaries. At a very minimum, we have to recognize that the political world is organized in ways that are not completely parallel to what we call the nation state. That's a given in any sophisticated theoretical world, even though it's not always understood at that level. The important question is not how we can use the political realm, the public sphere and so on to somehow produce new relationships to technology, or to resist some of the more difficult implications that we are now seeing. That is a mistake, because we are still operating within older concepts and older vocabularies of the political and the social. We really have to think about and also study: what are the new forms – if there are any – of the public that are arising today? By "the public" I mean a place whose logic is organized by the common interest, or these common interests, rather than by the logic of the economy, of a particular community, of administration – what Foucault would call "biopower". Is there a new concept of the political for the age of AI, for the modern technical digital world? Are there places where decisions can be made that are organized in a way defined by community rather than by the specific logics of these algorithmic processes? These are not neutral technologies, or ones that we merely respond to; they build communities. So the real question of the political in the 21st century is going to be: what are the lines of division that will really be spaces for decisions in this sphere of technology? Because decisions will have to be made; the question is whether they are going to be made automatically by systems or whether they will be made by true collectives. We seem caught between a kind of technical dystopia and a kind of reversion to nationalist, collective ideas of decision. And technology is producing an acceleration of both of these tendencies. The reason I'm optimistic is that I think it is not only possible but becoming increasingly clear to historians and theorists of the digital revolution that there are alternatives: that automaticity doesn't have to be contrasted only with freedom, and that decision doesn't have to mean some kind of eruption of the miracle, in the way that Schmitt at one point defined it. That's a very difficult concept, but I am optimistic that we can think about decision in ways that reduce the metaphysical implications but also resist the technical evolution of automatic decision-making. And it will have to be something along the lines of: where do we gain access to a logic that does not define itself in the way that these other systems do? That's really a difficult question, but I think it's the one that we need to ask.

Information about the video

Interview recorded on 30 May 2018 in London

Interviewer: Michael Tiemann

Camera, sound: Olaf Kuzniar

On-site team: Robert Helmrich, Olaf Kuzniar, Michael Tiemann

Production: überRot GmbH

The content is licensed under the Creative Commons licence CC BY-NC-ND 4.0 International (more at www.bibb.de/cc-lizenz).

The New Conflict of the Faculties and Functions: Quasi-Causality and Serendipity in the Anthropocene

Stiegler, Bernard | In: Qui Parle 26, pp. 79-99 | 2017

Reprogramming Decisionism

Parisi, Luciana | In: e-flux 85 | 2017


One Life Only: Biological Resistance, Political Resistance

Malabou, Catherine | In: Critical Inquiry 42, pp. 429-438 | 2016

Algorithmic Catastrophe - The Revenge of Contingency

Hui, Yuk | In: Parrhesia: A Journal of Critical Philosophy 23, pp. 122-143 | 2015

The Cybernetic Hypothesis

Galloway, Alexander | In: differences 25, pp. 107-131 | 2014

Automatisation et erreur

Bates, David | In: Stiegler, Bernard (ed.): La vérité du numérique. Paris, pp. 23-38 | 2018

Plasticity, Automaticity, and the Deviant Origins of Artificial Intelligence

Bates, David | In: Bates, David; Bassiri, Nima (eds.): Plasticity and Pathology: On the Formation of Neural Subjects. New York, pp. 194-218 | 2015

Insight in the Age of Automation

Bates, David | In: McMahon, Darrin; Chaplin, Joyce (eds.): Genealogies of Genius. New York and London, pp. 153-168 | 2015

Cartesian Robotics

Bates, David | In: Representations 124, pp. 43-68 | 2013