
Thoughts on ‘The Fundamental Principles of Communist Production and Distribution’ (long read)

DEBATE 

30 July 2025

This article from Angry Workers has some pretty interesting things to say that go to the heart of the politics of worker cooperation. It’s framed as a critique of a text written in the 1930s by people from the Dutch and German left, and if you can bear with its ‘Marxiness’ there’s a lot of food for thought. I love this dig: “As a side note, I don’t think it is by chance that the council communist tendency had a fair share of astronomers in the past and software programmers in the present, people who appreciate closed systems.” Reposted by Sion Whellens

In order to guide our day-to-day political activity and medium-term organisational strategies we need a general understanding of what a working class revolution in the 21st century could look like and what the immediate steps of transformation from a capitalist to a communist mode of production are.

In the current moment, the chaos and drift towards destruction of the existing system forces a lot of people to reconsider the question of transformation and alternatives. These theories are closely tied to political practice. People who predict a collapse rather than a social revolution propose ‘leftist prepping’; people who believe that companies like Walmart already contain the basic framework for a socialist planned economy propose their nationalisation under a leftist government.

For comrades who assume that the ‘emancipation of the working class must be the deed of the workers themselves’ there are fewer theoretical elaborations out there. Those that have been circulated recently, such as ‘The Contours of the World Commune’ or ‘Forest and Factory’, are influenced by ‘The Fundamental Principles of Communist Production and Distribution’, written in the early 1930s by the Group of International Communists (GIC). The thorough and systematic argumentation of the text still makes it the main reference point and a theoretical basis for new initiatives. 

The text was written as a response to the situation in the Soviet Union, where after a failed chaotic attempt to introduce a money-free economy during war communism, the state re-introduced both money and wage labour. Given that the state had systematically undermined the power of workers’ councils, it lacked input from the immediate sphere of production, which led to a planning system from above that was not only exploitative and oppressive, but also ineffective. Despite all propaganda, the fact that the dictatorship of the proletariat had turned into a dictatorship over the proletariat spread political despair amongst worker communists around the globe.

On the other hand, and this might be even more fruitful for the debate within our milieu, the comrades criticise the alternatives to central planning that have been formulated by libertarian communists and anarcho-syndicalists. The GIC criticises the libertarian idea of random take-overs of factories and the idea of localised self-management, which then, somehow, has to form a federal structure of decision-making. The anarcho-syndicalists get the stick for their egomaniacal thinking that the new society will be structured through the industrial unions of their own organisation.   

For the comrades the crux of the matter with both the state communist and the libertarian communist economic models is that they hinge on personal decision-making. In the Soviet Union, economic planning was done by members of central commissions at the top, which disempowered the producers. In the libertarian communist version, the decision-making by local assemblies and factory councils will either not join up into a social whole, or re-create a libertarian version of a federalised bureaucracy.

Instead they propose a de-personalised system of general principles in the form of labour-time accounting. Every individual and every productive enterprise relates to the social production process through a transparent flow, or exchange, of labour-time. This form of open book-keeping can then be the basis for social decision-making, e.g. do we reduce our working hours now, or do we work more over the short term in order to build certain infrastructures that can help us reduce working hours even more in five years’ time? They claim that this de-personalised system solves the tension between autonomy and individual needs on one side, and the general interest and the need for social planning on the other.
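To illustrate the kind of trade-off mentioned above, here is a purely hypothetical bit of arithmetic – the weekly hours, the two plans and the ten-year horizon are all invented – showing how transparent labour-time figures would make such a choice calculable and publicly debatable:

```python
# Hypothetical comparison of two plans in labour-time terms (all figures invented).
years = 10

# Plan A: reduce working hours immediately, build no new infrastructure.
plan_a_weekly_hours = [28] * years

# Plan B: work more for two years to build infrastructure, then work much less.
plan_b_weekly_hours = [32, 32] + [24] * (years - 2)

total_a = sum(h * 52 for h in plan_a_weekly_hours)   # hours per person over the decade
total_b = sum(h * 52 for h in plan_b_weekly_hours)

print(f"Plan A (cut hours now):           {total_a} hours per person over {years} years")
print(f"Plan B (invest first, cut later): {total_b} hours per person over {years} years")
# Plan B costs more labour up front but less over the decade - exactly the sort of
# trade-off that an open labour-time ledger is meant to make visible to everyone.
```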

I think the text is still the main reference point for our debate for a good reason. It is non-utopian, in the sense that it derives its communist principles from the material conditions that are already given through the process of concentration and socialisation of labour in capitalism. I have two main criticisms of the text:

Firstly, rather than principles of communist production the text describes principles of circulation. It seems that for the GIC a ‘communist mode of production’ is mainly characterised through the absence of the capitalist forms of circulation, namely commodities and money, and a change in the formal ownership. In the text, workers are given an equal amount of labour time vouchers, but they still seem to be attached to either manual or intellectual jobs. It remains unclear whether the comrades think that the material form of production itself has to change, e.g. the various forms of division of labour (intellectual vs. manual, town vs. countryside, production vs. reproduction) or the form of technology. With Marx we can say that these material divisions are the main reason why capital or money, which are products of social labour, can appear as an alien, self-sustaining power. A communist mode of production would have to change the division of labour fundamentally in order to create the material basis for a true participation of everyone in the social process of decision making. If I am reduced to a particular repetitive job, I might have a formally equal ‘right’ to take part in wider decision making processes, but I will always lack the actual insights to do so. 

Secondly, the text remains opaque about the question of how to come to wider political decisions, e.g. how to deal with conflicts between particular and general interests. The fact that a political class had taken power over workers in the Soviet Union seems to push them into thinking that you can solve the issue of political power by delegating social decisions to an ‘economic’ system of measurement and circulation, based on a new legal system. Not only does this seem to perpetuate the bourgeois division between the political and the economic sphere, it also seems to reproduce a certain fetish of the independent power of ‘the movement of things’ and laws. This follows from their lack of clarity concerning the need for actual changes in the form of production. If I can’t explain why workers have actual control over a production process, e.g. because the strict division between manual and intellectual labour has been abolished, then I have to give them a legal right to it. If wider society has no actual control over what is happening within an enterprise – control that would come, e.g., from a rotation of workers between various production processes – then I have to resort to a legal right of access. The problem is that these legal rights stand on sandy ground if they are not expressions of actual human activity.

In this sense the text reflects the debates of its time: how can economic planning not only be effective, but also maintain individual freedom? How can you, for example, encourage a large number of people, if that should be necessary after the revolution, to shift from their marketing job in front of flat screens to some hands-on work on tomato plantations? It seems that, similar to the bourgeois theoreticians whom they quote, most prominently Ludwig von Mises, they hope that a certain ‘invisible hand’ of labour-time accounting can solve the puzzle. Given the two alternatives they see, the dictatorship of the supreme council or anarchist bricolage, this hope is understandable.

I think their model can serve as a general framework for a transitional phase after the destruction of the bourgeois state and the money economy, while the political focus has to be on the subsequent material transformation of the global production system. We will need an accurate system of bookkeeping in order to understand what productive legacy we have inherited and in order to discuss future social priorities. At the same time, the labour time accounting system has some in-built risks of becoming either a draining bureaucratic effort or a low-level economic fetish that might make people believe that they don’t have to take on certain things head-on politically. In the following I want to exemplify some of the arguments, using quotes from the text.

  1. General concepts
  2. Autonomy vs. social interest
  3. Individual labour time and individual consumption
  4. Accounting problems
  5. Impact on consciousness
  6. Revolution and transition

————————

  1. General concepts

There is a certain vagueness when it comes to the use of ‘economic’ and ‘political’:

“So, this book can never replace this class struggle. It only wants to express economically what will happen politically.” (p.15)

It seems that the comrades equate ‘political’ with an external force and ‘economic’ with the level of working class influence. While this is true for capitalist relations, it seems that they reproduce this distinction when talking about a post-capitalist social formation:

“Since working time is the measure for the distribution of social products, the entire distribution falls outside any "politics".” (p.216)

In order to defend the ‘economic sphere’ and thereby workers’ autonomy from the possibility of political domination or the necessity of personal intervention, they describe the system of labour time accounting as a kind of self-regulating entity:

“The objective course of operational life decides itself how much product is returned to the production system and how much each employee receives for consumption. It is the self-movement of operational life.” (p.216 - emphasis by GIC)

“We are not "inventing" a "communist system". We only examine the conditions under which the central category - the average working hour in society - can be introduced. If this is not possible, then the exact relationship of producer to total product can no longer be maintained, then the distribution is no longer determined by the objective course of the production apparatus, then we get a distribution by persons to persons, then producers and consumers can no longer determine the course of the operational life, but then this is shifted to the dictatorial power of the "central organs", then the state enters the operational life with "democracy", then state capitalism is inevitable.” (p.83)

“In the association of free and equal producers, the control of production is not carried out by persons or instances, but it is guided by the public registration of the factual course of operational life. That is, production is controlled by reproduction.” (p.253 - emphasis by GIC)

As already mentioned, the GIC does not analyse how the form of production itself creates the domination of capital, nor do they base the control of workers over the communist production process on a material change. This means that the control – either by capital, or by the workers – is primarily explained by a legal right:

“The right of disposal over the means of production, exercised by the ruling class, brings the working class into a relationship of dependence on capital.” (p.22)

“This abolition can only consist in the abolition of the separation of work and the work product, that the right of disposal over the work product and therefore also over the means of production is again given to the workers.” (p.26 - emphasis by GIC)

“The abolition of the market is in the Marxist sense nothing more than the result of the new legal relations.” (p.206)

According to the GIC the working class has to impose, through a political act, a new legal order and economic principles that make further political interventions unnecessary. Perhaps in a transitional period, when the production process is still largely determined by its capitalist heritage, such ‘guiding principles’ are necessary for a general orientation and in order to stabilise reproduction. In the long run, however, a legal declaration and an egalitarian system of distribution alone will be too weak a foundation for the control of the producers.

  2. Autonomy vs. social interest

The main social agents in terms of decision-making that the text refers to are the ‘operational organisations’, something like company councils, and ‘consumer cooperatives’. The GIC says that it is not the formal ownership of the means of production that is decisive for the emancipation of the producers, but who decides over the product of labour.

“It is not some Supreme Economic Council, but the producers themselves, who must have the disposal of the work product through their operational organizations.” (p.55)

“After this preliminary orientation on our topic, in which we have identified as characteristics of communist operational life the self-management by the operational organizations with an exact relationship from producer to product based on working time accounting…” (p.73, emphasis by GIC)

At the same time GIC is aware of the problem of self-management in the classical sense, meaning that workers ‘own’ their company and their product and ‘trade’ it on the market. 

“The type of syndicalism that seeks "free" disposal of operation must, therefore, be seriously combated.” (p.81)

They are adamant that the operational organisations don’t own their company, but that they produce for society and that the labour accounting system forces them to balance the books: they have to show wider society how much they have consumed in terms of social labour time (raw material, machines, living labour) and how much they have produced. Although there is no buying and selling there are transparent ‘exchanges’ of labour time. 

“Thus, as a compelling demand of the proletarian revolution, it turns out that all operational organizations are obliged to calculate for the products produced by them how much socially average working time they have taken up in production, and at the same time to pass on their product according to this "price" to the other operations or to the consumers. (...) ‘They are given the right’ (corrected translation) to receive the same amount of social work in the form of other products in order to be able to continue the production process in the same way.” (p.57)

“In the Marxist sense, however, the new legal relationship is that the operations belong to the community. Machines and raw materials are social goods controlled by the workers and entrusted to the workers responsible for production management. This directly means that the community must also have control over the proper management of its products. However, libertarian communism firmly rejects such control, since the workers are then again "no bosses in their own house".” (p.86)

“In the association of free and equal producers based on the calculation of working hours, control is of a completely different nature, because we are dealing with different legal relationships here. The workers receive the buildings, machines, and raw materials from the community to produce new goods for the community. Each operational unit thus forms a collective legal entity which is responsible to the community for its management." (p.252 - emphasis by GIC)

As seen earlier, the mere referral to ‘new legal relationships’ when it comes to the relationship of the community to the operational organisations is weak – the community and the productive sphere will have to merge in much more material forms, e.g. rotation of jobs, in order to guarantee control.

This leaves at least two questions open: what does the autonomy of these main organisations of the working class actually consist of, and how does society decide about wider social aims, such as the expansion of production?

The first question, the degree of autonomy, is difficult to answer, and the comrades of the GIC do not help us much. For example, they don’t even mention an ‘ideal size’ for the operational organisations, despite the fact that this is decisive. In terms of transparency and social control, operational organisations could clearly be too big. If a single organisation included various production steps, as old car plants did (from steel rolling to rubber production), then we would only see one large amount of labour time going in and one coming out. In a way capitalism has a similar issue, for example with organisations like the NHS and its 1.4 million employees. In order to give managers more control over effectiveness and productivity, an ‘internal market’ was introduced in the early 1990s: every department now had to ‘buy’ services from other departments. This increased the control of managers, but it also bloated the bureaucracy – allegedly 10% of labour within the NHS is due solely to the additional tasks of organising intra-company transactions. Communism according to the GIC’s principles would not be free from this problem. The smaller the units, the more transactions have to be recorded and the larger the social ‘expenditure’ on unproductive accounting labour. The issue is that the work process itself remains exactly the same; it is just a question of where you draw the ‘accounting boundary’. But these are not ‘economic’ questions, in the end they are questions of political control – and it seems that the GIC wants to hide this question behind a seemingly impersonal system, similar to the seemingly impersonal force of the market.
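As a rough sketch of this last point – the physical work stays the same while the book-keeping multiplies – the toy model below carves one and the same invented production network into different numbers of accounting units and counts how many transfers would then have to be booked. The network, the probabilities and the way steps are assigned to units are assumptions made purely for illustration:

```python
# Toy model: the same production network, partitioned into more or fewer accounting units.
import random

random.seed(1)
steps = range(60)                                     # 60 production steps, never changing
flows = [(a, b) for a in steps for b in steps         # which steps hand material to which
         if a != b and random.random() < 0.05]

def recorded_transfers(number_of_units):
    # Assign each step to an accounting unit and count only flows that cross a boundary.
    unit_of = {step: step % number_of_units for step in steps}
    return sum(1 for a, b in flows if unit_of[a] != unit_of[b])

for k in (1, 5, 15, 60):
    print(f"{k:>2} accounting units -> {recorded_transfers(k):>3} transfers to book "
          f"(out of {len(flows)} physical hand-overs)")
# With one unit nothing has to be booked externally; with sixty one-step 'departments'
# nearly every physical hand-over becomes a recorded transaction - the NHS 'internal
# market' effect described above.
```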

“It is certainly a bitter irony that bourgeois economists, in particular, have made good progress in the science of communism, unless unintentionally. When it appeared that the downfall of capitalism had come within reach and communism seemed to conquer the world by storm, Max Weber and Ludwig Mises began their criticism of this communism, whereby of course first and foremost Hilferding’s "General Cartel", that is Russian communism, had to suffer.” (p.78)

We can later on see how this ‘non-capitalist market’ impacts on the consciousness even of the authors of the text.

The second question, who makes the wider social decisions, is somewhat fudged in the text. In general, the ‘system of book-keeping’ seems to be self-regulatory, with the occasional nudge from the operational organisations – a kind of cybernetic entity. As a side note, I don’t think it is by chance that the council communist tendency had a fair share of astronomers in the past and software programmers in the present, people who appreciate closed systems. But the comrades are aware that somehow wider decisions have to be made. So on page 220 they finally introduce a kind of social authority, the ‘general congress of works councils’ – pretty much out of the blue, without further explanation or prior mention:

“However, the expansion of the operational unit can not take place arbitrarily, as in this case there can be no question of a social production system. The general congress of works councils will, therefore, have to set a certain general standard within which the expansion must take place. For example, congress can stipulate that the operational unit may not be  expanded by more than 10% of the means of production and raw materials. This simple decision will then regulate the entire economic life as far as the expansion of the operational units is concerned… without the producers becoming dependent on a central economic authority.” (p.220 - emphasis by GIC)

This council also has the say when it comes to wider decisions, such as the construction of railways:

“This kind of expansion of production absorbs a significant proportion of the social product, from which it follows that an important part of the discussions at the economic congresses of the worker's counsels (sic) must deal with the questions to what extent these works should be initiated and which ones are the most urgent.” (p.225)

Fair enough, it is not surprising that the GIC assumes it will need some more centralised institutions in order to come to wider social decisions, but at the same time their claim that a combination of cybernetic book-keeping and rank-and-file organisations can form an alternative to Soviet Union-style planning relied on the absence of such institutions:

“In our considerations, we have consistently adhered to the economic laws. As far as the organizational structure was concerned, we only referred to the operational organizations and cooperatives.” (p.284)

After having taken the ‘general councils’ out of the picture again, they introduce a ‘centre’ a couple of pages later:

“From general social accounting, however, economic life is an uninterrupted whole, and we have a center from which production, although not controlled and managed, can undoubtedly be monitored.” (p.288)

This means that the relation between the ‘general council’ and the ‘centre’ on the one hand and the autonomy of the operational organisations on the other remains undefined. They seem to see the problem too, and use ‘legal rights’ to guarantee, or fudge, that autonomy:

“In any case, it is essential that the operational organizations ensure that they have the right to extend if this is necessary to meet demand.” (p.222 - emphasis by GIC)

  3. Individual labour time and individual consumption

In other left-communist criticisms of the ‘Principles’ one main focus has been the fact that they link individual labour time to individual consumption levels. The criticism has been that this would sustain a ‘coercion to work’ or value production. I don’t think it would sustain value production in any exploitative or alienating sense and I don’t think that it is wrong to encourage everyone to do their share of work. My problem with the text’s strong focus on individual consumption is that it seems to take the previously mentioned bourgeois economists at face value, who tell us that individual consumption and needs are society’s main driving force. The GIC comrades transfer this onto the communist society:

“The process of growth from "taking according to needs", moves within fixed limits and is a conscious action of society. In contrast, the speed of growth is mainly determined by the "level of development" of consumers. The faster they learn to economize with the social product, i.e., not to consume it unnecessarily, the faster the distribution will be socialized.” (p.180)

This means that social ‘effectiveness’ is determined by consumption, rather than by an increase in social productivity, e.g. through an explosion of creativity and new forms of collaboration. 

“The needs are, therefore, the driving force and the guideline of communist production. Or, as we can also say, production is geared to "demand".” (p.211 - emphasis by GIC)

While communism, unlike capitalism, is not ‘production for production’s sake’, we can still expect that new needs and dynamics will primarily emerge from a new creative cooperation amongst people, rather than from their changed consumption patterns. Their focus on consumption matches their neglect of the question of how production must change in concrete terms in order to become a communist mode of production.

The ‘system’ cannot replace direct social engagement

The discussion of whether individual labour time accounting enforces an ‘individual coercion to work’ does not seem that interesting to me; the question is rather whether they are not avoiding the issue of coercion by transferring it onto an economic dynamic: “I won’t get involved if the other guy is a slacker, the voucher system will do it.” I am not sure what is more communist: a collective telling individual members to get their act together, or leaving this task to an apparatus. And the apparatus will only register the time worked – if your comrade pisses about for an hour and wants to have it counted, you will still have to tell them. We could also argue the other way around. Do we want to encourage particular people to work loads of ‘overtime’ in order to be able to ‘afford’ a particularly luxurious diet, to which they invite selected members of the collective in order to improve their social status? Again, I think this is a secondary matter. More important is the fact that, through the individual form of consumption, a possible lack of social productivity is not mainly experienced as a collective issue, but as a lack of individual purchasing power.

Workforces have no interest in productivity increases

But perhaps more interesting than thinking about individual behaviour would be to discuss what impact the system might have on an entire workforce. The system of ‘payment by labour time’ means that a workforce, if it continued to exist as a separate entity, has no interest in increasing productivity: they are paid by the hour, not by output. The only way the GIC comrades address this issue is through ‘comparison’ (competition) – using the example of three different workplaces that all produce shoes, units 1 and 3 producing more productively than unit 2:

“If the shoes are charged with 3.18 hours in consumption, then the operational units 1 and 3 have hours "over" in the accounting, which correspond to the "deficit" in the accounts of unit 2.” (p.136)

The question here is whether it will be mainly social pressure that forces the workers of unit 2 to produce within the average productivity range, or whether the ‘deficit’ in the account will exert the pressure – it is unclear what that ‘deficit’ means exactly. The next question would obviously be whether productivity can be compared like that at all, and what would happen if there are no comparable units.
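To make the quoted mechanism tangible, here is a small sketch with invented per-unit figures (only the 3.18-hour social average echoes the text): each unit is credited with its output valued at the social average and debited with the hours it actually consumed, so units 1 and 3 end up with hours ‘over’ and unit 2 with the corresponding ‘deficit’:

```python
# Invented figures for the GIC's shoe example; only the mechanism follows the text.
units = {
    # unit number: (pairs of shoes produced, labour hours consumed)
    1: (10_000, 30_000),   # 3.00 hours per pair
    2: (10_000, 36_000),   # 3.60 hours per pair
    3: (10_000, 29_400),   # 2.94 hours per pair
}

total_pairs = sum(pairs for pairs, _ in units.values())
total_hours = sum(hours for _, hours in units.values())
social_average = total_hours / total_pairs            # 3.18 hours per pair, the 'price'

for unit, (pairs, hours) in units.items():
    credited = pairs * social_average                  # hours the unit receives back
    balance = credited - hours                         # positive: hours 'over'; negative: 'deficit'
    print(f"unit {unit}: {hours / pairs:.2f} h/pair, balance {balance:+,.0f} hours")

# The balances sum to zero: the surplus of units 1 and 3 corresponds exactly to the
# deficit of unit 2, as in the quote from p.136.
```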

The division between simple and complex labour persists

As mentioned, when it comes to individual labour the main issue is not necessarily that it is paid differently, but that some people are supposed to sweep roads all day while others develop machinery. The comrades sharply criticise the fact that workers receive different amounts of money or working time vouchers for the work they do, but otherwise they mainly appeal to skilled workers not to look down on unskilled workers – instead of demanding that communism do away with this division:

“We are familiar with this ideology, which makes the skilled look contemptuously at the unskilled (...) a doctor is not a garbage collector. The extent to which the workers change this ideology in the course of the revolution remains to be seen.” (p.152)

“The working class must fight with the greatest energy against such a view and demand the same share of social wealth for all.” (p.117)

It also ignores the issue of how to counter the tendency of intellectual workers to blackmail post-revolutionary society into paying them more, due to its dependency on their ‘expertise’ (for example surgeons in Russia or Cuba). If I don’t want to bribe them with extra vouchers I need a different plan to collectivise their knowledge.

  4. Accounting problems

The claim of the GIC is that for the labour time accounting system to be transparent and to allow everyone to take part in the planning of production, it must ‘add up’, meaning that every transaction of labour time, whether within production chains or at the point of final consumption, has to be recorded. I wonder a) whether the aim of ‘balancing the books’ can get in the way of social needs and b) whether the recording of transactions is actually possible, given the complexity of social interactions.

“And since it is one of the "lay idea" of capitalism as well as of communism, when one believes that goods can be transferred without charging, the receiving operational unit must "charge" the incoming goods against the supplying operational unit.” (p.185)

Perhaps, in order to guarantee social reproduction, a particular enterprise (perhaps agriculture, perhaps mining) requires an enormous input of social labour time but cannot ‘balance the books’, meaning that it will not always have the exact amount of ‘hours in the bank’ needed to continue production. For the GIC this is the main form of social control: you have to produce within your means, because the system has an in-built justice of ‘equal exchange’ – but does that actually work out? Again, it is good to have a transparent public accounting system that manages to allocate labour and resources – but the main issue will still be the political debate: should we ‘subsidise’ this or that enterprise because it is socially necessary? Should we confront the guys who work in the shoe factory because they have been wasting resources?

“Each company reproduces itself. And thus, the entire social economic life is reproduced.” (p.113 - emphasis by GIC)

This is of course a quite compelling logic, not too different from a market logic. But does it not also have potentially similar consequences in terms of the consciousness of workers who beaver away within the companies: “As long as our books look alright and we won’t get a bollocking in the general council, things are cool. Why bother about the wider social production cycle?”.

There are further tendencies and factors which make accurate accounting more and more difficult, some of which have been mentioned in other critiques, e.g. the question of how to account for time spent on innovations that impact millions of products, such as the introduction of industrial norms. Another example is the in-built potential of re-creating regional unevenness in income and development:

“For example, if the workers in one district want to set up several public reading rooms, they can do so without further ado. New institutions are then added, which have a more local significance so that the necessary costs must also be borne by the district concerned. For this district, the payout factor will be changed, which has the effect of a "local tax".” (p.180)

In addition to the operational organisations and the places of final consumption, they here introduce another accounting unit, the ‘district’. These districts might have their own ‘reading rooms’ but they still depend on wider social production, which will make the calculations enormously complex. But this is more than just a technical challenge, it is a political one: would workers who live in a different district, where the ‘payout factor’ is higher, not be allowed to use the reading room? What about long-term consequences, e.g. if some districts or regions invest a lot in education, such as reading rooms, while other districts or regions just ‘spend’ all their labour time on good food? Won’t that recreate social imbalances?
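For readers unfamiliar with the GIC’s ‘payout factor’, here is a hedged numerical sketch of how the ‘local tax’ in the quote might work. The figures, and the split between a society-wide deduction and a district-level one, are invented; only the mechanism – local spending lowers the local factor – follows the text:

```python
# Invented figures illustrating a district-level 'payout factor' acting as a 'local tax'.
hours_worked_in_district = 1_000_000    # total labour hours worked in the district
general_deduction = 0.30                # assumed society-wide share withheld (health, schools, expansion...)
reading_room_hours = 50_000             # labour hours the district decides to spend on its reading rooms

payout_without = 1.0 - general_deduction
payout_with = payout_without - reading_room_hours / hours_worked_in_district

print(f"payout factor without reading rooms: {payout_without:.2f}")
print(f"payout factor with reading rooms:    {payout_with:.2f}")
# A worker in this district now receives 0.65 hours of consumption goods per hour worked
# instead of 0.70 - the 'local tax' effect the GIC describes.
```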

In response to the political decisions in the early Soviet Union to nationalise only those companies that are ‘ripe’ for socialism they say:

“In the Marxist sense, there are no "ripe" or "not ripe" enterprises, but society as a whole is ripe for communism.” (p.34)

Are they not avoiding a thorny issue that sneaks into their own model? What about the question, which enterprises produce a ‘free’ good and which ones (still) have to produce in exchange for labour time vouchers? Is that not also a question of ‘being ripe’ for a different level of social production, meaning, some companies are ‘ripe’ for a production of ‘everyone according to their needs’, while others have to stick to production in return for vouchers?

“With the growth of communism, this type of operation [enterprise] will probably be expanded more and more, so that also food supply, personal transport (this is also individual consumption!), housing service, etc., in short: the satisfaction of general needs, will come to stand on this ground.” (p.178)

  5. Impact on consciousness

At various points in the text the authors say that there is no value produced in communism, but that the measure of labour-time embodied in products has similarities to value. In order not to use the word they call it ‘production time’. 

“In fact, this is a transformation of concepts, as we have seen previously in terms of value, income, and expenditure, etc. And just as language will preserve all these old names for the time being, it will also preserve the name "market". (...) The abolition of the market can, therefore, be understood to mean that it continues to exist under communism, according to its external appearance.” (p.208 - emphasis by GIC)

That all this is not only about semantics – whether to call things ‘exchange’ or ‘flow’ – or appearances can be seen in the following examples, where it seems that capitalist logic still rules the minds of the authors. About whether an enterprise hands out its products in exchange for vouchers or without them, they say:

“Of course, it must always be considered in advance whether such a distribution for a particular sector does not involve too great a sacrifice for society.” (p.178)

It is interesting that they call it ‘sacrifice’, as the goods and services that are handed out without the exchange of labour vouchers are just as much based on social labour as those that are exchanged for vouchers. In this sense society doesn’t have to ‘sacrifice’ anything – people don’t have to work more or tighten their belts – society merely has less control over consumption. Meaning, the ideology or consciousness of “oh, this is given for free, but someone surely has to pay for this” also remains in the heads of the authors. This also seems to be the case when they write about ‘hardship funds’ for emergencies, such as natural catastrophes:

“Under communism, this type of hardship will have to be borne by the whole of society, so it is natural that a "general fund" should be set up with the help of the payout factor. The speed with which this stockpiling is carried out is in the hands of the councils, which must determine the amount of this fund at the congresses.” (p.227)

How would this ‘stock-piling’ actually work? Are they talking about producing additional rescue vehicles? Are they saying that each district should calculate a margin so that, in case of an emergency, a certain amount of labour can be withdrawn from general production? Neither would really constitute a ‘fund’. So it seems they think that a kind of accumulated ‘fund’ of labour-time could sit somewhere and be tapped into in times of emergency – again, this is a capitalist logic of money accumulation.

To give another example of how an external ‘accounting system’ can negatively change the consciousness of workers, I will talk about our hospital. From the Emergency Department (ED), where patients are admitted, to the discharge process on the wards, there is a constant bombardment with ‘targets’: patients should not stay in ED longer than a certain amount of time; they should be ‘treated’ according to certain ‘evidence based’ standards, e.g. official sepsis screening time-frames; they should be discharged once certain criteria are met. All this is not mediated through value, money or profits, although ‘saving money’ is a compulsion in the background. Workers, in particular workers in ‘responsible’ positions, sometimes focus more on these figures, the ‘patient flow’, than the concrete conditions of patients. The internal ‘accounting system’ of the NHS creates its own alienation.

  6. Revolution and transition

Their ‘economistic’ understanding of workers’ power also influences the way in which they describe the revolutionary period:

“The economic dictatorship of the Proletariat – Finally, we must say a few words about the dictatorship of the proletariat. This dictatorship is self-evident to us and does not really need special treatment, because the introduction of communist economic life is nothing other than the dictatorship of the proletariat.” (p.273 - emphasis by GIC)

“It is also a dictatorship which is not carried out by bayonet, but by the economic laws of the movement of communism. It is not "the state" that carries out this economic dictatorship, but something more powerful than the state: the laws of economic movement.” (p.276)

We can agree that a working class revolution is not primarily a civil war that is won militarily. It is true that the main weapon of the working class is the social production process itself, although this is different from ‘the laws of economic movement’. Still, there seem to be certain blind spots when it comes to the necessity of concerted political intervention even after the revolution has succeeded. Here the main challenge won’t be transparent book-keeping – that might be the easiest part.

For any revolutionary strategy we need to know which social and material changes can be achieved during a class movement and revolutionary process itself and which changes can only take place when the working class has taken power. We have to know what can be done within the first 100 days of proletarian dictatorship and what needs a longer period.

A revolution, in terms of active struggle with the class enemy, is necessarily a temporary affair; there is a certain time-window within which the question of power has to be solved. It is true that the revolutionary process itself will dismantle a lot of capitalist divisions within the production process, e.g. in terms of the socialisation of knowledge or changes from small-scale domestic reproduction to collective forms. We can call this ‘communisation’, but it is limited in scope.

Other changes will necessarily need much longer than the immediate period of revolutionary upheaval, due to their material nature. This means that we will deal with the material legacy of capitalism – and the potential that these material structures, which still form part of our social reproduction, re-impose social hierarchies. To name a few:

a) The division between town and countryside. It will probably need a generation or more in order to dismantle the large urban concentrations and to re-populate the countryside – in a way which does not reproduce rural poverty and idiocy. Even more so if this process is not supposed to have a character like the Great Leap Forward etc.

b) The division between different regional stages of development. Capitalist hierarchy produces and sustains itself by regional disparity in the development of the forces of production. This also includes regions that are naturally blessed by good climate or fertile soil.

c) The repair of a natural environment that has been exhausted by the capitalist mode of production, and the extra labour required by the move away from fossil fuels.

In the actual moment of revolutionary upheaval there is a lot of enthusiasm for social change, but it is not guaranteed that this enthusiasm will be generalised and sustained indefinitely. Changing the material conditions mentioned above will require an extra amount of social labour during a period of transition.

Looking at historical examples, it is not unlikely that regions which are privileged in terms of their inherited productive structure or land fertility will be less inclined to make an extra effort to even out global disparity; or that, in order to guarantee a better living standard in the short term, necessary repairs of nature will be postponed and future generations left to deal with them. It is not absurd to assume that it will need a strong internationalist communist force and perspective, galvanised during the time of revolution, to ‘encourage’ these necessary material changes, with the aim of creating the basis for a global human community.

The open question is what form this communist force takes and how it relates to wider society. I don’t imagine a Communist Party in the old sense, nor a workers’ state. I assume that the challenge will be to instil a communist core in those industries that are primarily concerned with the material transition: large-scale manufacturing, transport, energy, agriculture etc. It will be this central working class that will have to pull the rest of society through this period of transition – not because workers in these industries are in and of themselves prone to a higher degree of consciousness, but because these industries are structurally the most socialised and global. If the communist project has a material base, it is there – though it will also always need external proletarian pressure to socialise.

The text by the GIC does not really prepare us for these political tasks. It is a valuable framework for stabilising social reproduction, but it runs the risk of making workers believe that with the establishment of an equal system of distribution the ‘deed is done’, and that no persistent political struggle for an internationalist, feminist and sustainable construction of communism is necessary even years after the revolution. Our task would be to debate and update the text and integrate it into a wider political strategy.


What’s behind the hype about AI? [long read]

We’re reposting this insightful article on artificial intelligence by the Wildcat collective in Germany, because the general debate about AI is dominated by both fearful demonisation and uncritical reverence when it comes to automation technologies and their implications for workers. Wildcat’s article was subsequently translated and published by Angry Workers, whose own introduction we paraphrase here. 

Within workers’ circles, as in society at large, the separation between social and political critics on one side and technical ‘experts’ on the other is deepening. We faced a similar, and indeed fatal, separation during the pandemic, when commentary was largely divided between people who neglected the medical and scientific aspects and primarily focussed on the state’s attempts to use the lockdown as a way to repress any form of discontent – and people who uncritically supported the state measures because of their reliance on medical experts. This reliance is real. In the case of the pandemic it would have needed collaboration between working class communities and patients who could report on the immediate impact of the pandemic, nurses and healthcare workers who could assess the outcomes of medical decisions within the hospitals, and workers in the global medical industries and research departments who were critical of the disjointed response of the state, which bowed to diverging national and corporate interests.

Such a collaboration can only establish itself as a movement that defends working class interests, questions the manual and intellectual hierarchies among us, and takes on the responsibility to develop a working class plan as a social alternative. It takes a collective effort of organisation to undermine the separation between social critique and scientific knowledge.

We can afford neither a wholesale rejection of technology, nor an instrumentalist affirmation à la ‘fully automated communism’. For workers’ counter-engineering!

Capitalist Intelligence

(from: Wildcat 112, Autumn 2023; translated by Angry Workers, November 2023)

“Future generations would then have the opportunity to see in amazement how one caste, by making it possible to say what it had to say to the entire world, made it possible at the same time for the world to see that actually it had nothing to say.” (Bertolt Brecht, Radio Theory)

On the 30th of November 2022, ChatGPT, a conversational AI, or as it is known in the jargon ‘large language model’, was released. For the first time, a generative AI that can create independent texts and pretend to understand the questions it is asked was publicly available free of charge. Within five days, one million people had registered on the chat.openai.com website. By January 2023, this figure had risen to one hundred million. It was a stroke of genius for OpenAI (Microsoft) to make its chatbot publicly accessible. No marketing department could have advertised it better than the hysterical debate that ensued. All competitors had to follow suit and also publish chatbots.

The open letter from the 22nd of March 2023 calling for a six-month moratorium on AI development was a major publicity stunt for the entire industry. The signatories were a who’s who of Silicon Valley. (By the way, calling for regulation is also the usual way of keeping smaller competitors out). They formulated their demands as questions. The first question: should we let the machines flood our information channels with propaganda and falsehoods? This was asked by Musk, who had just sacked all the moderators on Twitter and cancelled the EU’s voluntary code of conduct against disinformation only a week before the open letter. On the 17th of April, Musk announced that he had founded his own AI company, X.AI, at the beginning of March and wanted to create a large language model with TruthGPT that was not as ‘politically correct’ as ChatGPT.

At the end of May the chatbot elite, together with a few artists and Taiwan’s Digital Minister Audrey Tang, even warned of “the extinction of humanity through AI”. They put AI on a par with pandemics and nuclear war (mind you: not with the climate crisis, which they consider harmless). The signatories include Sam Altman (head of OpenAI), Demis Hassabis (head of Google DeepMind), Microsoft’s head of technology, and numerous AI experts from the world of research and business. It is hardly possible to imagine a more obscene form of advertising for your own product.

What’s behind the hype surrounding AI?

Why do we see such a boom in AI now? And why did chatbots, of all things, trigger it? Firstly, tech companies urgently needed a new business model. Secondly, language is seen as a sign of intelligence, and there is clearly a great social need for dialogue partners. Thirdly, while there are fewer and fewer fundamental innovations, the expectations placed on them are rising.

Tech crisis

Text, voice and image generators – ‘generative AI’ – are the bootstraps with which the five big tech companies Apple, Amazon, Facebook, Google and Microsoft are trying to pull themselves out of the ‘big tech’ crisis of 2021 and 2022, during which they laid off 200,000 employees. The big five dominate over 90 per cent of the AI market. A sixth company, Nvidia, is taking the biggest slice of the cake by providing the hardware. Nvidia used to produce graphics cards, and still does, but just over ten years ago it was discovered that graphics processing units (GPUs) have enormous parallel computing power. The first boom was in computer games, the second in cryptocurrency mining, and now it’s generative AI. GPUs are notorious for consuming large amounts of electricity.

After years of losses, the rise of the US stock markets in 2023 hinges on just seven companies (in addition to the companies mentioned above, Tesla is the seventh). In mid-July 2023, these seven companies accounted for 60 per cent of the Nasdaq 100, an important index for technology stocks. The boom is based on a single expectation: that “AI will change everything”. Economically, it has not yet been enough of a boom to offset the recession in the chip industry. Investments in chip manufacturing are being postponed because turnover and profits are collapsing. The manufacturers of memory chips are all reporting losses. Despite production cuts, Samsung’s operating profit fell by 95 per cent in the second quarter and by 80 per cent in the third. Qualcomm announced a fall in turnover of almost 23 per cent, and redundancies.

There is a social need for chatbots

Joseph Weizenbaum built the first chatbot in 1966. His ELIZA was already able to pretend to be human in short, written conversations. Weizenbaum was surprised that many people entrusted this relatively simple programme with their most intimate secrets. They were convinced that the ‘dialogue partner’ had a real understanding of their problems, because the answers to their questions seemed ‘human’. This so-called ‘Eliza effect’ is exploited by many chatbots today. An unwanted by-product has become a bestseller and a business model.

Since 2017, the company Luka Inc. has been marketing its chatbot Replika as a “companion”, a substitute for a romantic friend. However, you still have to buy an upgrade for “romantic interactions”. There are women who can’t have children and create AI children. There are men who create a kind of AI harem, while abandoned people engage with chatbots for comfort and people who feel misunderstood find reassurance in the communication with them. In the US, the story of a woman who married her chatbot went viral in summer 2023. In spring, it made the news when a Belgian man committed suicide after having been counselled on how to do so by his chatbot.

Chatbots are trained on huge amounts of human dialogue data, and can therefore also parrot the expression of emotions. Not only Replika, but also ChatGPT and others seem to be designed as a kind of romance scammer. In order to feign understanding or for the sake of a good story, these models like to spontaneously invent sources and supposed facts. These “social hallucinations” (Emily Bender) are desirable and are used to build customer loyalty.

“You might ask yourself what kind of friends they are, who constantly assure you how great you are, who reply to even the most boring retelling of a confused dream: “Wow, that’s so fascinating”. Who, like well-behaved dogs, find nothing nicer than when you come home and greet them. On the other hand, users in forums and chats appreciate just that. … There may be good reasons not to make your happiness dependent on a real person. But if you share your life with an AI, you’re sharing sensitive data, not just with a smiling avatar, but with a tech company.” (“A husband to put aside”, German newspaper Süddeutsche, 1st of August 2023)

Many users do not understand that they are also feeding and training the AI with new data through their questions. In early 2023, Samsung discovered that programme code from its developers had been uploaded to ChatGPT. In the middle of the year, Samsung, JPMorgan Chase, Verizon, Amazon, Walmart and others officially banned their employees from using chatbots on company computers. They are also not allowed to enter any company-related information or personal data into generative AI on their private computers.

Few real innovations

Hardly anyone still believes that the world will become a pleasant place in the foreseeable future. Ecological crises are piling up higher and higher, wars are getting closer and social problems are growing.

Perhaps this is why utopian energies are increasingly attached to technology, be it nuclear fusion, electric cars or AI. Yet capitalist technologies do not create a new world; they preserve the old one. Weizenbaum said in an interview in 1985 that the invention of the computer had primarily saved the status quo. His example: because the financial and banking system continued to swell, it was barely controllable by manual transfers and cheques. The computer solved this problem. Everything went on as before – only digitised, and therefore faster.

At the beginning of 2023, the magazine Nature published a study according to which “groundbreaking findings” have become less frequent. Earlier studies had already shown this in relation to the development of semiconductors and medicines, for example. Many things are just improvements to an invention that has already been made, not ‘real innovations’. Scientific and technological progress has slowed down despite the continued rise in spending on science and technology, and the significant increase in the number of knowledge workers. The article in Nature sees the cause as too much knowledge and too much specialisation. The amount of scientific and technical knowledge has increased by leaps and bounds in recent decades, and the scientific literature has doubled every 17 years. However, there is a big difference between the availability of knowledge and its actual use. Scientists are increasingly focussing on specific topics, and primarily cite themselves (Third-party funding, publish or perish). [1]

This mixed situation leads to constant talk of ‘technological breakthroughs’. Even if – as with mRNA – research has been going on for six decades, or – as with chatbots – they only utilise collateral effects that have been known for half a century.

“The AI seems reassuringly stupid to me”

(German comedian Helge Schneider)

AI is everywhere. Especially in advertising. Smartphones and tablets sort photos by topic; they are unlocked using facial recognition; the railway uses image recognition for maintenance; financial service providers use machines to assess the risk of borrowers…

But these examples have nothing to do with generative AI. They are simply algorithms for big data analysis. For marketing reasons, everything that has to do with big data is currently labelled as AI. After all, even the simplest programming loop for data analysis can be sold more effectively this way. In the summer, the Hamburg-based start-up Circus raised money from investors. Its business idea: home delivery of meals that are “cooked by artificial intelligence depending on the customer’s preferences”.

There are also productive examples: a team has used AI to develop new proteins in pharmaceutical research. In chip production, self-learning systems save human rework. Amazon uses AI for predictive shipping, even though a classic probability calculation would be just as good.

The term ‘artificial intelligence’ was coined in the 1950s for advertising purposes, and it has also made what is understood by ‘intelligence’ compatible with capitalism.

In 1959, the electrical engineer Arthur Samuel wrote a programme for the board game checkers, which for the first time was able to play better than humans. The breakthrough was that Samuel taught an IBM mainframe computer to play against itself and record which move increased the chances of winning in which game situation. Machine playing against machine and learning in the process is the beginning of ‘artificial intelligence’ – artificial indeed, but why ‘intelligence’?

The term ‘artificial intelligence’ had been invented four years earlier by the US computer scientist John McCarthy. He was researching data processing alongside many others, including the cyberneticist Norbert Wiener. But McCarthy didn’t just want to follow in the footsteps of others. He wanted to collect the laurels for something of his own. So instead of ‘cybernetics’, he wrote ‘artificial intelligence’ in his application to the Rockefeller Foundation for funding for the Dartmouth Summer Research Project. “The seminar will be based on the assumption that, in principle, all aspects of learning and other features of intelligence can be described so precisely that a machine can be built to simulate these processes. The aim is to find out how machines can be made to use language….”. The application was approved – but not in full: the Rockefeller Foundation only paid 7,500 Dollars, so that around eight scientists could meet for a summer. The conference only lasted a month and was nothing more than an “extended brainstorming session” with no results. But today it is regarded as the beginning of AI and all participants became internationally renowned experts in artificial intelligence.

McCarthy later wrote that he wanted to use the term to “nail the flag to the mast.” But he was replacing intelligence with something else. The Latin word intellegere means “to realise, understand, grasp”. People become intelligent by grasping. ‘Intelligence’ arises in interaction with the environment (no cognition without a body) and in social interaction. People developed language so that they could cook together. The taste of chocolate and the smell of rosemary are qualitative experiences that cannot be stored as ‘data’. But McCarthy had shown the way: “simulation of these processes” – meaning, a simulation of understanding. [2] In the euphoric phase of the 1960s, AI researchers thought they could feed computers with sufficient data and interconnect them so skilfully that they would outperform the human brain. But disillusionment soon followed. The more we understood about the human brain, the clearer it became that it would never be possible to replicate it by a machine (almost 100 billion nerve cells, all interconnected by 5,800,000 kilometres of neural pathways…). The EU flagship project Human Brain has made no progress in this respect in ten years. [3]

A long ‘AI winter’ began in the early 1970s.

The victory of the IBM computer Deep Blue over the reigning world chess champion in 1997 was celebrated as another major appearance of ‘artificial intelligence’ on the world stage. However, Deep Blue was not an ‘artificially intelligent’ system that learnt from its mistakes. It was merely an extremely fast computer that could evaluate 200 million chess positions per second (brute force). More significant was AlphaGo’s victory over the world’s best Go player in 2016. The machine had previously played against itself many millions of times, and independently developed moves that no human had thought of before.

“Lies, damn lies – and statistics”
(Mark Twain)

McCarthy’s use of the term ‘neural networks’ in his proposal was an equally skilful advertising ploy. The term conjures up images of an artificial brain simulated with computer chips. But the ‘neural networks’ of AI bear no resemblance to the network of neurons in the brain. They are a statistical procedure in which so-called ‘nodes’ are arranged in several layers. As a rule, a node is connected to a subset of the nodes in the layer below. If you want a particular computer to be able to recognise horses, you feed it with many horse photos. From these, the system extracts a ‘feature set’: ears, eyes, hooves, short coat, etc. If it is then to assess a new image, the programme proceeds hierarchically: the first layer analyses only brightness values, the next horizontal and vertical lines, the third circular shapes, the fourth eyes, and so on. Only the last layer puts together an overall model.
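For illustration only, a minimal sketch of what such a layered procedure computes: repeated matrix multiplications followed by thresholds, nothing brain-like. The layer sizes, the random weights and the horse/not-horse task are invented; a real image model is trained rather than random, and vastly larger.

```python
import numpy as np

# Minimal sketch of the layered statistics described above (sizes and task invented):
# each 'layer' is just a matrix multiplication followed by a threshold function.
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# a 64x64 greyscale image flattened to 4096 brightness values
image = rng.random(64 * 64)

# three layers of randomly initialised weights ('nodes' = rows of each matrix)
W1 = rng.normal(0, 0.05, (512, 4096))   # low-level features (edges, brightness patterns)
W2 = rng.normal(0, 0.05, (64, 512))     # mid-level combinations (shapes)
W3 = rng.normal(0, 0.05, (2, 64))       # final scores: [not-horse, horse]

h1 = relu(W1 @ image)
h2 = relu(W2 @ h1)
scores = W3 @ h2
probs = np.exp(scores) / np.exp(scores).sum()   # softmax: turn scores into 'probabilities'

print("P(horse) according to an untrained network:", round(float(probs[1]), 3))
```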

The subsequent fine-tuning consists of praising the system when it has correctly recognised an image (the connections between the nodes are strengthened) or criticising it when it recognises a dog as a horse (the connections between the nodes are rearranged). In this way, the system becomes faster and more accurate – but without ever ‘understanding’ what a horse is.
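The ‘praise’ and ‘criticism’ are nothing more than numerical adjustments. A separate single-layer toy (not any real image model): when the guess is wrong, the weights are nudged a little towards the correct label.

```python
import numpy as np

# Toy version of the 'praise/criticise' loop: a single layer of weights is nudged
# towards the correct label after every guess (a crude form of gradient descent).
# All data and parameters are invented for illustration.
rng = np.random.default_rng(1)
weights = np.zeros(4096)                      # one weight per pixel: the 'horse score'

def train_step(image, is_horse, lr=0.01):
    score = weights @ image                   # the network's current guess
    target = 1.0 if is_horse else 0.0
    error = target - 1.0 / (1.0 + np.exp(-score))   # how wrong the guess was
    weights[:] += lr * error * image          # strengthen or weaken connections
    return error

# fake training data: 'horses' are simply images that are brighter in the upper half
for _ in range(1000):
    img = rng.random(4096)
    is_horse = rng.random() < 0.5
    if is_horse:
        img[:2048] += 0.5
    train_step(img, is_horse)
```

The weights end up encoding the statistical regularity (‘bright upper half’), nothing about horses.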

Chatbots create language this way. They are neither the highest nor the most important, neither the most powerful nor the most dangerous, type of AI. When it comes to multiplying large numbers, they are inferior to any 1970s pocket calculator. The technology behind so-called ‘generative AI’ is essentially based on statistical inference from huge amounts of data. Statistics is an auxiliary science. Economists, epidemiologists, sociologists, etc. apply statistics ‘intuitively’ in order to obtain an approximate orientation in certain contexts. They are aware that statistical predictions are rarely exact; they make mistakes and sometimes lead to dead ends. Generative AI, by contrast, presents statistical predictions as results. That is the basis of its performance. By definition, the models are not able to derive or justify their results. They are trained until the results fit.

You can’t tell an AI system that it has made a mistake – “Don’t do that again!” – because the system has no idea what ‘that’ is, or how to avoid it. AI systems based on machine learning, trained on vast amounts of data rather than on general principles or rules of thumb, are not able to take advice.

A chatbot stitches together sequences of language forms from its training data without any reference to the meaning of the words. When ChatGPT is asked what Berlin is, it spits out that Berlin is the capital of Germany. Not because it has any idea what Berlin is, what a city is or where Germany is located, but because it is the statistically most likely answer.
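A minimal sketch of ‘the statistically most likely answer’: a toy model that only knows which word most often follows which in its tiny, invented training text. Real chatbots use far larger contexts and learned probabilities rather than raw counts, but this is the kind of continuation-by-likelihood described above.

```python
from collections import Counter, defaultdict

# Toy bigram model: it only knows which word most often follows which
# in its (tiny, invented) training text.
corpus = (
    "berlin is the capital of germany . "
    "paris is the capital of france . "
    "berlin is a city in germany ."
).split()

follows = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    follows[a][b] += 1

def continue_text(word, length=6):
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]   # most frequent continuation
        out.append(word)
    return " ".join(out)

print(continue_text("berlin"))   # 'berlin is the capital of germany .'
```

The model produces the ‘right’ answer about Berlin for the same reason a chatbot does: not because it knows anything about cities, but because that sequence is the most frequent one in its data.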

Chatbots get dumber over time. This is partly because they are also fed with the products of other chatbots during machine learning, and partly because poorly paid clickworkers sometimes use ChatGPT themselves during fine-tuning in order to produce supposedly human-written texts faster. Only six months after its release, complaints began to mount that ChatGPT’s output was becoming poorer and increasingly riddled with errors; overall usage time dropped by ten per cent and downloads fell by 38 per cent. The AI industry reacts in its paradoxical but typical way: it further increases the amount of training data and parameters – despite the fact that data overload created the problem in the first place.

Big Data

It is pretty crazy to generate language by machine based not on logical rules and meaning, but on how likely it is that one word or text module will follow another – because the process requires huge computing capacity, enormous power consumption and a lot of reworking. But it is precisely this insanity that is at the heart of the business model, because only the big tech companies have such huge data centres, and only they have accumulated the necessary volumes of data and money over the last two decades. Large language models are therefore a business model in which nobody can compete with them; not even state research institutions or top international universities have the necessary computers, let alone the data!

Google, Facebook, Amazon etc. have captured the digital footprint of the entire human race. Google, for example, has used special crawlers to mine 1.56 trillion words from public dialogue data and web texts for its training data over the past twelve years. Crawlers are data suckers that capture everything on the public Internet. What was accepted for many years as data collection for advertising purposes can now no longer be reversed. Once training models have processed the data, it can no longer be deleted.

However, the chatbots’ training data includes not only the billions upon billions of data points that we have ‘voluntarily’ made available, but also copyright-protected texts. The AIs are also trained with databases that illegally make protected works available. Journalists from the US magazine The Atlantic searched the approximately 100-gigabyte Books3 database that feeds every artificial intelligence, and on the 25th of September they published a searchable database of around 183,000 titles with ISBNs.

The same applies to the image generators: billions of photos on the internet are the building material for the images in programmes such as Dall-E. Some of these photos were taken by professional photographers and are easily scraped by the AI from their professional websites. Nobody asked them whether they agreed to this, let alone offered them any remuneration. They cannot prove whether their photos were used in the training of the generators, because by definition it is not possible to reconstruct which individual photos went into a machine-made image.

Resource consumption

Perhaps the biggest problem with the current spread of generative AI in chatbots and image generators is their enormous resource consumption. In 2010, it was still possible to train an AI on a standard notebook; today, special computers with many thousands of GPUs are used for this purpose.

Energy

Twelve per cent of global energy consumption is attributable to digital applications; just over half of this (six to eight per cent) is accounted for by large data centres. They are barely keeping pace with the AI boom. According to the head of HPE, data centres could consume 20 per cent of the world’s energy in five years’ time. Training AI models consumes more energy than any other computing work. This development has only really taken off since 2019, when GPT-2 was published; it worked with 1.5 billion parameters. GPT-3 comprises 175 billion parameters, and GPT-4 is reported to work with 1.7 trillion. Each new model multiplies the number of parameters by one or two orders of magnitude, and energy consumption grows with the amount of data processed. The final training run of GPT-3 alone consumed 189 megawatt hours of energy; the associated CO₂ emissions correspond to around nine times Germany’s annual per-capita emissions. And for every model that actually goes online, hundreds were discarded beforehand.

But it’s not just the AI training – actually using these programmes requires far more power. A single request of around 230 words requires 581 watt hours. The one billion requests made to ChatGPT in February 2023 would therefore have consumed 581 gigawatt hours. In May, it was already 1.9 billion just for ChatGPT. That corresponds to almost 464,000 tonnes of CO₂. And the energy hunger of the successor model GPT-4 is even greater. AI now consumes more electricity than crypto mining (Bitcoin’s electricity requirements were estimated at 120 terawatt hours in 2021).

In the old days – in 2016 – Google calculated that processing a search query consumed as much energy as lighting a 60-watt light bulb for 17 seconds. Google therefore consumed around 900 gigawatt hours of electricity for the approximately 3.3 trillion search queries per year at the time. This was equivalent to the power consumption of 300,000 households with two people, but was paltry compared to the power consumption of AI.
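Taking the figures quoted above at face value, the arithmetic behind the two consumption estimates is straightforward:

$$581\,\text{Wh per request} \times 10^{9}\,\text{requests} = 581 \times 10^{9}\,\text{Wh} = 581\,\text{GWh}$$

$$60\,\text{W} \times 17\,\text{s} = 1020\,\text{J} \approx 0.28\,\text{Wh per search query};\qquad 0.28\,\text{Wh} \times 3.3 \times 10^{12}\,\text{queries} \approx 9.3 \times 10^{11}\,\text{Wh} \approx 900\,\text{GWh per year}$$

On these figures, a single chatbot request uses roughly 2,000 times the energy of a 2016 Google search.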

At a conference a year ago, it was stated that the energy required to train an AI had increased 18,000-fold in the past two years – this already took into account the energy savings from new chips. [4] On the 25th of September, the German daily newspaper FAZ reported: “Due to AI strategy: nuclear power to supply Microsoft’s data centre”. “A fleet of small nuclear reactors” is to supply the company’s data centres with “secure electricity”. Bill Gates also founded the company TerraPower, which is currently building a nuclear power plant in the state of Wyoming.

But it’s not just a lack of electricity; the development of computing power is also reaching its limits. The computing power used to train AI had already increased 300,000-fold between 2015 and 2021. According to Moore’s Law, the number of computing operations that computers can perform per second doubles approximately every twenty months. The demand for computing operations through machine learning is currently doubling every three to four months.
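Put as numbers, using the doubling times quoted above, over a two-year span:

$$\text{chip capability: } 2^{24/20} \approx 2.3\times \qquad \text{ML demand: } 2^{24/4} = 64\times \;\text{to}\; 2^{24/3} = 256\times$$

In other words, demand for computing operations grows roughly thirty to a hundred times faster over two years than Moore’s Law alone can deliver.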

Water

Water may be an even bigger problem. It is needed to produce the chips and to cool the data centres. “The production of a chip weighing two grams … consumes 35 litres of water. A modern [chip factory needs] up to 45 million litres of water per day, a large part of it ‘ultra pure water’…” (The Summer of Semis, Wildcat 110).

Chip factories and data centres are built wherever governments are stupid enough to give the companies not only billions in subsidies and cheap electricity, but also water practically for free (just like the Tesla factory in the middle of an area that supplies drinking water in Brandenburg). In 2021, Google began building a huge data centre in Uruguay that requires seven million litres of fresh water every day to cool the computers. There was a water crisis in Uruguay in the summer; more than one million people have no access to clean drinking water. [5] Even the Taipei Times, which is otherwise not particularly hostile to technology, warned in mid-September of the “heavy ecological costs of ChatGPT”. Microsoft draws 43.5 million litres of water from the rivers in a hot summer month for a supercomputer in Iowa (10,000 GPUs) on which it trains GPT-4 – which becomes a problem for neighbouring agriculture. According to its own figures, Microsoft’s global water consumption in 2022 was 34 per cent higher than in 2021, while Google reported an increase of 20 per cent. For both, the sharp increase is almost exclusively due to AI. [6]

In Iowa, new data centres are now only permitted if they use water more sparingly. In Saxony and Saxony-Anhalt, the penny has not yet dropped. The chip industry in Saxony needs so much water that the groundwater is no longer sufficient. “No problem,” say the politicians, “we’ll take it from the Elbe.” Now that a TSMC chip factory is to be added, people 200 kilometres downstream are starting to worry after the initial euphoria. According to early official estimates, the Intel plant in Magdeburg will use record amounts of water for production; the state estimates 6.5 million cubic metres per year. This means the Intel plant would consume more than the Tesla plant in Brandenburg. It is not yet clear from which sources the water will be drawn; an Elbe waterworks is under consideration. By the way, ChatGPT not only consumes water during training, it also swallows half a litre during use if someone asks it five to 50 questions in a session.

Search engines and business models

“Bing is based on AI, so surprises and errors are possible”
(Microsoft, from the homepage of its search engine)

On the 30th of April 1993, the World Wide Web was opened to the public free of charge. The Google search engine went online on the 15th of September 1997. It has shaped the world wide web, and will transform it further. A significant part of the www works according to the formula: sites create content, Google leads people to that content, everyone places adverts. Even large sites get up to 40 per cent of their clicks via the search engine, and the position in the search results has a big influence on how many you get. Websites use search engine optimisation to get as high up as possible. The internet looks the way we know it because Google demands it – right down to specific standards for page design, technology and content. Advertising finances almost the entire internet. It converts attention (clicks) into money. (This ‘attention economy’ rewards sensationalist headlines and fake news, but that’s another story).

Over the past few years, Google has continued to evolve from a search engine into an answer engine. Certain questions are answered directly instead of displaying a long list of website links. Suitable results are delivered in response to a search query, and the corresponding adverts are displayed. Amazon has long been recommending books and other products based on your previous purchases or browsing history. Other websites suggest friends, predict flu epidemics, signal changing consumer habits and know your taste in music (YouTube, Facebook, Netflix, Spotify, etc.). Millions and millions of people use these services every day. Two thirds of people google what their symptoms could mean before they go to the doctor, and there are thousands of health apps worldwide. And 300 million patient data records have now been fed into Google’s medical AI Med-PaLM 2 in the USA.

In the course of this, Google has become worse and worse as a search engine; the results more irrelevant, the searches more fruitless. Attempts to use machine learning to enable Google to ‘understand’ what people are ‘really’ looking for sometimes have the opposite effect. This is partly due to the algorithm, which interprets results that are frequently clicked on as more relevant than others (also self-reinforcing!). Many people now only use Google to search their favourite websites by adding /github, /reddit or /wiki.

With the integration of its chatbot Bard into search, Google is once again fundamentally changing the www and its business models. Bard is supposed to read the results and then summarise them. For many, these summaries will be enough – why keep clicking when you already have all the answers? However, if Google no longer drives the same number of clicks to websites, this will mean the end for many operators. If companies can no longer finance themselves through advertising, they will have to switch to payment systems and shield their content from AI. This would significantly change the internet as we know it.

The parrot paper

In March 2021, the paper On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?, written by Timnit Gebru, Margaret Mitchell and four other colleagues from Google’s AI ethics department in cooperation with the computational linguist Emily Bender, was published. They began working on it in 2020, just as the precursors of ChatGPT were attracting attention for producing texts that appeared error-free and trustworthy at first glance. By the time the paper became public, Timnit Gebru and Margaret Mitchell – the two Google employees who had refused to withdraw their signatures in response to threats from their boss – had already been fired. In the final sections, they do make suggestions for the “mindful development” of AI. Nevertheless, the paper is a frontal attack; it criticises the very things about chatbots that make up their business model: they are big (so that no one can keep up); they suck up everything like a black hole – computing power, electricity, water, research funds; and to make them sell, an irrational hype is created and a rational discussion about the possibilities of AI is deliberately sidestepped.

The Parrot paper also criticises – “at a time of unprecedented environmental change worldwide” – the huge waste of resources caused by chatbots (electricity, CO₂, water, etc.). The majority of the electricity required for AI comes from fossil fuels. Although the tech industry is betting that everything will soon come from renewable sources, this is unrealistic, and renewable energy is not ‘free’ either. The Global South is paying the ecological price for the development of English-language models for high earners.

Chatbots have led to a huge misallocation of research funds and scientific resources. Ultimately, they are preventing real linguistic progress and work on real ‘artificial intelligence’.

The language models are racist and hostile to minorities because they ‘over-represent’ mainstream opinion. AI increases bias in a self-reinforcing cycle. In practice, AI reproduces racist and other discriminatory patterns (black people have been denied legitimate insurance claims, medical services and state social benefits). In the USA, AI systems are involved in sentencing members of minorities to comparatively longer prison terms.

The language generators do not understand or produce ‘language’. Language always has form and meaning, but chatbots only have ‘form’. They are only successful in tasks that can be approached by manipulating linguistic forms. However, as they produce grammatically largely error-free, genuine-sounding texts, they exploit people’s tendency to find meaning in language and to interpret sequences of characters as meaningful communicative acts. Due to this potential for manipulation, work on ‘synthetic human behaviour’ is a ‘glaring red line’ in AI development. ‘Synthetic’ can be translated as artificial, but it does not exactly hit the nail on the head. The authors (these sections can probably be traced back to the computational linguist Emily Bender) criticise the approach of using artificial language to imitate human speech in order to deliberately and purposefully confuse users.

Clickworker

“Digitalisation first, concerns second”
(The German liberal party FDP‘s slogan for the 2017 federal election)

Machine learning has prerequisites and needs constant reworking. Generative AI is based on the work of so-called clickworkers, who analyse texts, tag images, listen to audio recordings and sometimes collect data themselves (for example by taking photos on predefined topics), often under precarious conditions. Without these poorly paid clickworkers and content moderators in Kenya, Venezuela, Argentina, Bulgaria, etc., ChatGPT would not exist, any more than the social media before it would have. The work of these people often remains hidden because it doesn’t fit the corporate narrative that everything takes care of itself with AI. For ChatGPT, for example, three dozen workers in Kenya created pre-training filters for an hourly wage of between 1.32 and 2 US Dollars. In fact, they are not even paid per hour but on a piecework system (in Eastern Europe, Latin America and Asia, you get at most one Dollar per processed data record, text passage, etc.). [7]

It is difficult to find out how many clickworkers are working on AIs, and even more difficult to estimate the volume of work. Providers such as Applause or Clickworker claim to have several million clickworkers each – Clickworker alone around 4.5 million. They do not say how much labour time is required to train bots, etc.; OpenAI, Google, Microsoft and Amazon say nothing about it, and there are no serious independent studies.

Milagros Miceli, who researches the work behind AI systems at the Weizenbaum Institute in Berlin, also speaks only of “millions”: “There are millions of people behind the applications, moderating content and labelling training data. They also help to generate the data in the first place by uploading images and speaking words. There are even employees who pretend to be AI to users.” One such case became known in Madagascar: 35 people lived in a house with only one toilet; they had to constantly monitor cameras and raise the alarm if something happened. A Parisian start-up had previously sold the system to large French supermarkets for a lot of money as “AI-controlled camera surveillance against shoplifting”. In another case, refugees from the Middle East, working from Bulgaria, monitored hospital patients via camera and had to trigger an alarm if, for example, someone fell out of bed or needed help. Their hourly wages were around half a US Dollar. Some also worked directly from Syria.

Miceli estimates that 80 per cent of the costs for an AI go to the computing power required, 20 per cent to the manpower required, of which 90 per cent is likely to go to the engineers in the USA.

“The workers gather a lot of expertise. They are the experts in dealing directly with data because they have to deal with it on a daily basis. Nobody has learnt the trade better, not even the engineers. Some resist the miserable working conditions. It helps the workers most if they organise themselves. Our conversations with them have also shown this.” (Milagros Miceli) [8]

In a petition to the German parliament at the end of June, hundreds of content moderators for online networks such as Facebook and TikTok demanded better working conditions. Previously, employees of the Kenyan Meta subcontractor Sama had sued their employer for illegal dismissals. Meta did not wish to comment on this issue.

Digitalisation is not the same as increased productivity

“The degradation of workers is not caused by systems that are actually capable of replacing them. Rather, the effects already set in when people are led to believe that such systems can replace workers.” (Meredith Whittaker at re:publica 2023)

“From 2035, there will no longer be a job that has nothing to do with artificial intelligence” (Federal Labour Minister Hubertus Heil)

AI helps to circumvent labour laws and prescribed rest periods. In 2015, for example, a new staff scheduling software programme at Starbucks made headlines by scheduling shifts for employees in an extremely chaotic and short-term manner. Based on a database of customer flows in real time, the programme only ever called as many workers into the shift as necessary – and always assigned them less than 30 hours per week so that Starbucks did not have to pay statutory health insurance.
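What such scheduling software boils down to can be sketched in a few lines. The staffing ratio, the 29-hour cap and the demand figures below are invented for illustration; this is not Starbucks’ actual system.

```python
# Crude sketch of demand-driven shift scheduling as described above
# (demand figures, staffing ratio and the hour cap are invented for illustration).
CUSTOMERS_PER_WORKER = 25      # one worker per 25 forecast customers per hour
WEEKLY_HOURS_CAP = 29          # keep everyone just below the 30-hour threshold

def schedule_day(hourly_forecast, workers, hours_worked):
    """Assign each hour only as many workers as the forecast 'requires'."""
    plan = []
    for hour, customers in enumerate(hourly_forecast):
        needed = -(-customers // CUSTOMERS_PER_WORKER)      # ceiling division
        # prefer workers who still have 'room' under the weekly cap
        available = sorted(workers, key=lambda w: hours_worked[w])
        on_shift = [w for w in available if hours_worked[w] < WEEKLY_HOURS_CAP][:needed]
        for w in on_shift:
            hours_worked[w] += 1
        plan.append((hour, on_shift))
    return plan

workers = ["ana", "ben", "chris", "dee"]
hours_worked = {w: 0 for w in workers}
forecast = [10, 30, 80, 120, 90, 40, 20]     # forecast customers per opening hour (invented)
for hour, staff in schedule_day(forecast, workers, hours_worked):
    print(f"hour {hour}: {len(staff)} on shift -> {staff}")
```

The ‘intelligence’ here is trivial; what matters is that the optimisation target is the wage bill, not the workers’ ability to plan their lives.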

The threat to translators, journalists, actors, authors, etc. is of a different kind. AI-supported journalism would cut many jobs; translators who ‘only’ correct a DeepL translation earn much less; etc. It is therefore no wonder that authors and actors in the USA have started the first strikes against AI (see below).

But does AI also help to increase productivity?

An increase in productivity (“increase in the productive power of labour”) occurs when the working time required to produce a commodity is reduced. A smaller amount of labour creates a constant or even larger amount of use value. Social progress would be achieved if this increase in productivity meant that people had to work less and the standard of living (living space, mobility, good food) remained at least the same or even increased (whereby more freely available time in itself increases the quality of life). Historically, this has usually led to longer life expectancy.
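Put schematically, in our own notation rather than a formula from the text: if $t$ is the labour time required per unit of use value, productivity can be written as $p = 1/t$; an increase in productivity means $t$ falls, so the same total labour $L$ yields more units $L/t$, or the same output can be produced with less labour.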

This has been true for a large part of humanity over the last 200 years. The average working time for a worker fell from over 3,000 hours per year in 1870 to around 1,500 in 2017. General life expectancy rose from around 30-40 to 70-80 years.

Today, however, it is more important than in the past which income bracket you belong to and where you live, i.e. how much access you have to the ‘productive forces’ in the areas of medicine, infrastructure, etc. Depending on their country and sex, people in the highest income brackets live five to 15 years longer than those in the lowest. Never before has there been such a large difference in wealth and income between the rich and the poor. Consequently, there has never been such a difference in life expectancy between rich and poor. While it is now possible to live healthier and longer, the life expectancy of the poor is falling.

Progress and productive forces

In the 19th and 20th centuries, employers ultimately responded to labour struggles by increasing productivity. A turning point was the introduction of computerised just-in-time production without warehousing (lean production) in response to the ‘crisis of Fordism’. What was propagated in the West as ‘Toyotism’ was not a change in the labour process to increase productivity, as could be seen in the transition from water to steam power or in the development of the assembly line. It was a shift to Asia and subcontracting. The same assembly line factories were built in China as in the West – only the labour costs were much lower.

On a political level, it worked: the huge factories with tens of thousands of workers were dismantled, and with them the fighting power of the working class was broken. But since then, rates of productivity growth have been falling and are nowhere near those of the pre-1970s period. Car companies have spent the last 20 years making profits not through ingenious production processes, but through financial transactions, price increases, emissions-promoting sales regulations and cheating their suppliers. Growth is based on infrastructure wear and tear, withheld investment and credit expansion.

The ‘smart factory’ is a reaction to falling sales and the implosion of the just-in-time system.

Against a backdrop of stagnation in the tech and automotive industries, the two have joined forces to propagate a new business model. Production facilities and logistics are to ‘organise themselves’ through consistent automation and digitalisation, with goods production from order to delivery functioning without people, because the smart factory networks ‘everything with everything’. At the Hannover Messe trade fair in April 2023, there was talk of ten million factories worldwide that “are waiting to be digitalised”; the market for smart factory components is already worth 86 billion Dollars a year. For automation companies such as Siemens, SAP, ABB, General Electric, etc., ‘digitalisation’ is indeed the big revenue driver – the market for AI alone accounted for almost 400 billion euros in 2021 (the total turnover of the automotive industry is just under two trillion euros). Now they are starting to introduce AI applications at factory level.

However, compared to the pre-coronavirus phase, many employers have toned down their fantasies about how many factory workers could be replaced by AI. They tend to promote the smart factory as a means of saving resources, improving the eco-balance and parts quality, and monitoring supply chains. Mass sales can no longer be expanded – which is why many are switching to luxury production. The profitable production of small batch sizes is crucial – hence the talk of ‘individual customer requirements’ and ‘batch size 1’.

Mercedes equips machines and parts with chips that collect all kinds of data for the cloud. The AI is supposed to derive useful measures from this, which the 800 colleagues per shift in ‘Factory 56’, the digitalised model factory in Sindelfingen, are then supposed to ‘implement’. Management calls this ‘data democratisation’. Mercedes is working with Siemens and Microsoft to achieve this, with Microsoft providing the AI and the cloud.

At BMW, 15,000 employees work in Omniverse, a real-time graphic collaboration platform that virtually maps a factory. The aim is to get plants up and running faster and more smoothly and to optimise them continuously. This would reduce development and maintenance costs. Computers with the RTX graphics card from Nvidia, which costs several hundred euros, are required. Nvidia is also the provider and owner of the Omniverse, in which 700 companies are currently working.

Further development of monitoring

BMW – like other companies – has been working on AI-based personnel planning since 2022 and is currently negotiating with the trade union IG Metall about it. Jens Rauschenbach, ‘Head of Standards/Methods of Value Added Production System and Industrial Engineering’ at BMW, sees the opportunity to finally monitor all factories live and centrally and compare them with each other: “Until now, it was almost impossible to compare personnel scheduling between two plants, but in future we will have access to standardised data for all functional levels, which will be available at the touch of a button.” [9] In the immediate sphere of production, AI is supposed to find optimisation potential by, for example, looking for correlations between rework, rejects, frequent cycle changes and tool changes. (A lot of this workers themselves already know, but do not disclose even when offered a bonus through ‘continuous improvement’ schemes.) The data collected can help make the production process more accurate and reduce wear and tear, but at the other end the massive accumulation of data creates new costs. From their offices and meeting rooms, managers imagine using computers and sensors to track workers’ movements and turn the breathing spaces of ‘free time’ they create for themselves into productive labour time. But they may soon find out that digitalisation and increased productivity are two different things. At the end of August, Toyota ran out of server memory during maintenance and all 14 Japanese plants were shut down for a day. At the end of September, a faulty computing process on a server at VW’s main plant in Wolfsburg multiplied to such an extent that almost the entire global production network went down. [10]
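The ‘search for correlations’ described above is, at its core, ordinary statistics. A sketch on invented shift data (the column names and figures are hypothetical, not BMW’s actual data model):

```python
import pandas as pd

# Sketch of the correlation hunt described above, on invented shift data
# (column names and numbers are hypothetical).
shifts = pd.DataFrame({
    "tool_changes":  [2, 5, 1, 7, 3, 6, 2, 8],
    "cycle_changes": [1, 4, 0, 6, 2, 5, 1, 7],
    "rework_hours":  [3, 9, 2, 14, 5, 11, 4, 16],
    "rejects":       [1, 4, 1, 6, 2, 5, 1, 7],
})

# Pearson correlation between every pair of columns; values near 1 suggest that
# shifts with many tool changes also tend to have more rework and rejects.
print(shifts.corr().round(2))
```

Which is to say: the machine discovers, at great expense, regularities that the workers on the line could have named for free.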

The first strike against AI

Since the beginning of May 2023, more than 11,000 screenwriters organised in the Writers Guild of America have been on strike in the USA. In mid-July, the actors also went on strike. Their union, the Screen Actors Guild, has around 160,000 members. According to union president Fran Drescher, 86 per cent of them earn less than 26,000 Dollars a year. Both unions are demanding higher wages and regulation of the use of AI.

After almost five months of strike action, the screenwriters’ union reported success. In addition to higher allowances for retirement and health care and wage increases of five per cent this year, four per cent next year and 3.5 per cent in 2025, it was agreed that an AI may not replace a human writer or take their pay. The studios themselves are not allowed to use AI to write scripts or develop ideas, but the authors may use it. In future, streaming services will have to disclose to authors how often their series are watched and pay them accordingly. On the 9th of October, union members approved the collective agreement, which runs until May 2026.

The strike by the actors’ union continues. Negotiations were held at the beginning of October for the first time since the strike began, but they failed and were broken off. Even more than with the writers, the future of the industry is being negotiated here: generative AI that builds scenes and sequences for films is the wet dream of second-tier producers and directors who supply the entertainment industry’s daily bread of TV series and B-movies. The boom in the streaming industries has put even more pressure on these producers of run-of-the-mill picture goods. Eliminating the insecurities and costs associated with the creative proletariat of actors is supposed to save the arse of their business model. There have already been attempts with digital tools: the murdered rapper Tupac made a virtual appearance via hologram at the Coachella festival in 2012; Scorsese had the main actors in his film The Irishman digitally rejuvenated; etc. However, these were digital methods for individual details or shots that do not make the work of actors superfluous. With the use of generative AIs, that work would become largely obsolete: generative AIs could create any number of sequences from recordings of actors in various situations and emotional states. [11]

The US actors’ union SAG-AFTRA does not want to rule out the use of AIs in film production completely. However, its demand for appropriate compensation for actors has already met with firm resistance from the Alliance of Motion Picture and Television Producers. The union decided to target both the production and the marketing of all TV, film and streaming productions with its strike, and this was binding for all union members. The union is demanding mandatory approval and appropriate remuneration whenever AIs are used to change scenes or generate new ones. The companies only want to pay for half a working day of recordings, which could then be used to generate further scenes at will – without the consent of the actors and without further payment. According to the bosses’ will, it should also be possible to use these recordings to train AIs without consent.

The production power of screenwriters and actors was enough to bring the industry to a screeching halt. But this shows the corporations all the more why they are relying on generative AI – it’s about breaking this power. The outcome of the strike was still unclear at the time of going to press with this issue of Wildcat.

At the end of July, writers in the USA also made their voices heard. In an open letter from the Authors Guild, more than 9,000 of them protested against the free use of their works in the development of AI. “Our writings are food for a machine that eats incessantly without paying for it,” it says. The president of the writers’ union, Maya Shanbhag Lang, said in an interview that the results of AI will always be “derivative” and that the technology can only “regurgitate” what has been fed to it by humans. The petition also points out that the average income of professional authors has already fallen by up to forty per cent over the past decade. This coincided with the news that the largest book company in the US, Penguin Random House, is laying off more people. The New York Times quoted a letter from the CEO to his employees: “I’m sad to share the news that yesterday some of our colleagues across the company were informed that their roles will be eliminated.” That sounds like an AI horror. Dietmar Dath pointed out in an FAZ article: “It’s not AI that’s the problem, but the mindset of Bill Gates, for example, who recently said at a Goldman Sachs event that personal AI assistants would soon make Amazon redundant because they could ‘read the stuff you don’t have time to read’. The fact that Gates obviously doesn’t know what reading and writing are good for, apart from making money, could have been seen from the functional design of his company Microsoft’s products.” [12]

“Do not expect to be downloaded in an android body-form any time soon.”

(This was the advice of the Fugs, in their song Advice from the Fugs back in 2003).

Very often, new technologies in capitalism were brought into the world with fantasies of redemption. Werner von Siemens wanted to raise people to a “higher level of existence” in the “scientific age”. Emil Rathenau believed that electricity would enable a “civilisation that made people happy”. Henry Ford promised that his “motor car” would bring “paradise on earth”, and James Martin, the computer pioneer of the 1960s, had a vision of a wired society in which computers would create more democracy, more leisure time, more security, more clean air and more peace.

Internet companies sell us expropriation (data theft) and manipulation (romance scammers) as progress. Complex problems, we are told, can be solved by reducing them to technical problems, and it is “society’s task” to “better adapt” to the new technologies, as Musk and Co. stated in their open letter in March.

Umberto Eco saw the cult of technology as a hallmark of fascism. And indeed, Henry Ford was not only crazy, but also a great admirer of National Socialism. The state of mind and political orientation of today’s Silicon Valley celebrities is very similar. When it comes to their ‘visions’, it is often difficult to separate marketing from megalomania. Do they possibly believe in it themselves? With AI, “we can make the world and people’s lives wonderful. We can cure diseases and increase material prosperity. We can help people live happier and more fulfilling lives,” said the head of OpenAI Sam Altman – in the same month that he warned of the extinction of humanity through AI!

Will AI bring salvation, or the extinction of humanity? Tremendous happiness, or the end of the world? This stark juxtaposition emotionalises and narrows the debate. There is no more room for criticism and questions – including the extent to which large parts of the hype are instigated for publicity reasons, and the promises are just hot air and staged deception. This strategy is typical of ‘long-termism’.

Twitter is now called X

‘Long-termism’ sees humanity’s primary moral obligation as securing the conditions for the well-being of trillions and trillions of sentient beings in the distant future. To do this, however, humanity must survive, which is why all moral questions are reduced to ‘existential risk’ (the so-called ‘xrisk’).

A central thought experiment of Long-termism is the claim that, in the long run, peace and a nuclear war that kills 99 per cent of the world’s population could have more in common with each other than such a war has with an event that wipes out the entire human race. Such predictions rest on assumptions like interstellar space colonisation, mind uploading and the digital replicability of consciousness – although these are likely to take a few more weeks (see Human Brain Project)! At the same time, they dismiss current problems, such as the consequences of global heating or social inequality, as “morally negligible”. After all, compared to the trillions and trillions of happy beings in the distant future, the few billion people of today are just a rounding error, and their problems are negligible.

The distinguishing feature of Long-termism is the x (xrisk). Musk attested that Long-termism has a “close alignment with my own philosophy”. He named a son ‘X Æ A-12’ and his space tourism company ‘SpaceX’. Twitter is now called X – and is “a logical next step towards superintelligence”, in the words of digital marketing expert Helén Orgis on LinkedIn. This is because X provides an enormous amount of data (communication, financial movements, purchasing behaviour), which Musk can in turn use to feed his AI company X.AI and his brain implant company Neuralink. In general, Musk is conducting ‘research’ in all the fields specified by Long-termism.

The shell of Long-termism is “Effective Altruism” (there is no morality; good is what benefits the most people; make as much money as possible in order to do as much good as possible). This sees itself as a further development of Ayn Rand’s ‘Objectivism’. In addition to Altman and Musk, other prominent supporters include Peter Thiel, representatives of the crypto scene (for example Sam Bankman-Fried) and the founder of the website Our World in Data. The UN report Our Common Agenda, published in 2021, is said to have adopted key concepts and approaches of long-termism.

An entrepreneurial philosophy straight out of a book

Timnit Gebru also wrote in November 2022 that in her two decades in Silicon Valley, she had witnessed how “the Effective Altruism movement has gained a disturbing level of influence” and is increasingly dominating AI research. Thiel and Musk, for example, were speakers at the Effective Altruist conferences in 2013 and 2015 respectively. [13]

Like Long-termism, Effective Altruism also works with apocalyptic threats. The biggest threat is that a general artificial intelligence will wipe out humanity. The only way to prevent this would be to create a good AI as quickly as possible. To this end, Elon Musk and Peter Thiel founded the company OpenAI in 2015 to “ensure that artificial intelligence benefits all of humanity”, as their website states. Unfortunately, things turned out differently: OpenAI released ChatGPT at the end of 2022. Four years earlier, Musk had withdrawn from the company because he was unable to take sole control; Microsoft has been the main shareholder since 2019. Musk and Thiel have invested heavily in similar companies to develop “good AI”, such as DeepMind and MIRI.

Musk likes to play with codes of the Qanon movement. Thiel is also a fascist technocrat who wants to hand over power to a “superior ruling class”; a “single individual” could change people’s fate for the better, he wrote. In 2009, he declared that freedom and democracy were not “compatible”. During the 2016 election campaign, he publicly sided with Trump and donated millions to his campaign. Since then, he has financed ultra-right-wing Republicans. His company Palantir is intertwined with secret services and the military industry.

Only an elite can save humanity; selfishness, ingenuity and efficiency are the highest virtues; self-interested big industrialists are the “engine of the world”; stopping this engine leads to the end of civilisation; therefore all state intervention is immoral. Ayn Rand turned this kind of stuff into bestselling literature in the 1950s. In the USA, she is still one of the most influential and most widely read political authors. Her novel Atlas Shrugged from 1957 has been repeatedly translated into German under new titles, most recently in 2021 as Der freie Mensch. Rand saw Kant, the philosopher of the Enlightenment, as “the most evil man in the history of mankind”. Alan Greenspan, the former Chairman of the Federal Reserve, was a close friend of Rand and adopted her political-economic ideas. So did the Tea Party movement. The Ayn Rand Institute, which propagates Effective Altruism, also played an important role in the protests against Barack Obama’s healthcare reform.

“We must free ourselves from the alternative that [Effective Altruism] sells us: either to be subjugated by an AI or to be saved by an increasingly elusive techno-utopia promised to us by the Silicon Valley elites.” (Gebru, ibid.)

Technology is not a driver of social progress

A data leak at Tesla revealed in June that its autopilot had already caused more than 1,000 accidents. It was technically the same as VW’s solution, which was already several years old, with one crucial difference: it did not switch itself off as a precaution. The nonchalance with which Musk and his companies ignore technical deficits and make further untenable promises is not a “quirk of an entrepreneurial genius” – it is his business principle. 

Using the startup method of the Minimum Viable Product (launching a barely viable product onto the market), he shoots rockets into space, puts cars on the road and unleashes dangerous software on us. Ultimately, he doesn’t care about human lives if they get in the way of his long-term mission.

The tech billionaires have become rich with big data, share deals and company sales. They are anything but progressive. They use their wealth to finance reactionary forces (Thiel). They are pushing the development of AI in the direction that suits them (black boxes, exclusion of responsibility, exploitation of social needs as with the app Replika). AI is indeed accelerating some social developments – but not in a progressive direction.

If, for example, ‘learning’ no longer means understanding something, but rather passing a test via multiple choice, sooner or later the hour of generative AI will strike.

If think tanks can replace political debate and manipulate governments, as we described in the editorial, then for big business, AI is “an even more ingenious instrument of indirect political-economic trickery … than the popular ‘foundations’.” (Dietmar Dath, Finde ein Kürzel…)

Ford’s ‘motor car’ led, among other things, to the dismantling of functioning public transport systems (electric buses, rail networks, pneumatic tube systems). But the car has been a very successful business model for more than a century. This is not yet the case with generative AI. ChatGPT costs around 700,000 Dollars a day to operate. It is too expensive by a factor of 90 to be financed through advertising like the Google search engine, and Twitter’s monthly US advertising revenue has halved since Musk’s takeover in October 2022. So subscription models are needed. Microsoft charges 10 Dollars a month for the Copilot on its developer platform GitHub – and apparently makes a loss of 20 Dollars per user, or even 80 Dollars a month for power users. With a few million developers, Microsoft can certainly cope with the deficit. But it cannot make its AI helpers available like that to the hundreds of millions of users of its Office programmes or its operating systems. For business customers, Microsoft and Google have now both settled on 30 Dollars per user – most private users will certainly not pay that for additional AI tools. The big five tech companies are still expanding their power by cross-subsidising AI. But at some point, profits will have to be generated. Otherwise Dietmar Dath is right and AI was the abbreviation for “awaiting insolvency”.
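A back-of-the-envelope check using only the figures quoted above: if a 10-Dollar subscription loses 20 Dollars per user, the implied cost is

$$\text{cost per user} \approx \text{price} + \text{loss} = 10 + 20 = 30\ \text{Dollars per month},$$

which happens to be exactly what Microsoft and Google now charge business customers.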

Customers decide on the success of the business (nobody has to stay with X). The people in Brandenburg, Magdeburg and Saxony also decide how the story continues: will they continue to have their water taken away? And Musk’s workers, who are only paid irregularly and have a high risk of accidents – will they suck it up forever? For a year now, there have been sickness rates of up to 30 per cent in the car plant in Grünheide. Yesterday (on the 9th of October), the first major action was taken against extreme workloads, excessive production targets and a lack of occupational safety.

Footnotes

[1] M. Park, E. Leahey, R.J. Funk: Papers and patents are becoming less disruptive over time. Nature 613, 138-144 (2023).
Interesting commentary on this by Florian Rötzer on telepolis: “Fewer breakthroughs, sluggish progress”.

[2] “The cooking ties smell, flavor and language together in a way seldom recognised: the smell and flavors of cooking were likely a prime factor in the development of language.” Gordon M. Shepherd: Neurogastronomy, How the Brain Creates Flavour and Why It Matters

[3] In 2013, the EU provided brain researcher Henry Markram with 600 million euros to set up the Human Brain Project, the largest brain research project in the world. Markram had promised to simulate the entire human brain one-to-one in a computer model and develop therapies for everything from Alzheimer’s to schizophrenia. It ended in October after ten years. It was not even close to being able to recreate the human brain. Neither schizophrenia nor Alzheimer’s have been defeated. Neuroscience has no clear theory at all; there is not even agreement on central concepts such as memory, cognition or even consciousness. It plugs this hole with computer metaphors. This generates research funding – but does not advance science.

[4] Brian Bailey: AI Power Consumption Exploding. semiengineering.com, 15th of August 2022

[5] The head of the Saxon State Chancellery, Oliver Schenk (CDU), who is responsible for the billions of euros in subsidies to the chip industry, celebrated the announced investment by TSMC: “TSMC is one of the most important companies in the world. … These companies tie their investment decisions to three conditions: Public funding is essential in competition with other countries, sufficient staff must be available and the water supply must be secured.” German magazine ZEIT, 4th of October 2023: “Water shortage in Saxony: just tap into the Elbe”

[6] Matt O’Brien, Hannah Fingerhut: AI technology behind ChatGPT carries hefty costs. Taipei Times, 14th of September 2023

[7] Billy Perrigo: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. time.com, 18th of January 2023

[8] Interview with Milagros Miceli: How millions of people work for AI. netzpolitik.org, 17th of March 2023

[9] With AI to the optimal shift plan, AutomotiveIT, 17th of October 2022

[10] To read more: Industry 4.0 in Wildcat 104 – Sabine Pfeiffer: Digitalisation as a distributive force; Adrian Mengay: Production system criticism

[11] The union’s strike resolution is available online at: https://www.sagaftrastrike.org/post/sag-aftra-strike-order-for-tv-theatrical-streaming-contracts – The Alliance of Motion Picture and Television Producers includes Amazon/MGM, Apple, Disney/ABC/Fox, NBCUniversal, Netflix, Paramount/CBS, Sony, Warner Bros. Discovery/HBO and others.

[12] Dietmar Dath: FAZ, 1st of June 2023

[13] Timnit Gebru: “Effective Altruism Is Pushing a Dangerous Brand of ‘AI Safety’” in Wired, 30 November 2022.
