From Humans Managing Machines To Machines Managing Humans

When the New York Times sued OpenAI in December 2023, alleging that the AI giant had used vast amounts of NYT content to train its models without any compensation, it became clear that what is emerging now is something of a land grab for data.

In NYT v. OpenAI, what is in dispute are the ownership rights to newspaper articles. What the case does not address is the far larger body of data freely generated by individual users that AI builders also use to train models. Whose data is this, and who gets compensated for its use?

It typically takes massive amounts of data to train LLMs, and there aren’t that many sources of publicly available data. So those data sources have become contested ground, with different parties making competing claims about their right to use them. This land grab might remind us of another land grab, one that presaged the rise of modern agribusiness, followed closely by the Industrial Revolution: the English Enclosure of the Commons.

For the unfamiliar, the Enclosures comprised hundreds of acts of Parliament, passed from the seventeenth century into the nineteenth, legislating the fencing off of any “unused” or “waste” land. They made it illegal for peasants to keep living or farming on unrented land (even if their families had done so for centuries), to access forests for fuel or hunting, or indeed to live in any way other than working for a wage. Untold numbers of poor peasants were forcibly displaced from lands, held in common for generations, that they had used for simple subsistence farming and livestock grazing.

The human carnage that followed the enclosures has been well documented over the years, and anyone interested will find hundreds of books, articles, and even a few films on the subject (I reference many below as a start). The vast majority of the rural agricultural poor in England lost access to their only means of subsistence, forcing them to rent land to grow food for themselves and their families, and to work on someone else’s farm for a wage in order to pay that rent.

It also led to rapid migrations of labor from the countryside to the cities in search of work, across the oceans to America or Australia (both free and indentured), or, in a good number of cases, to the poorhouse or even the gallows (for the hundreds of petty food-related crimes that were put on the books around the same time).

All this was justified at the time by an emerging middle class of intellectuals, entrepreneurs, and large landowners, many of whom occupied seats in Parliament and so could pass laws that served their interests (the House of Commons included no actual commoners). Among the most vocal supporters of enclosure was Adam Smith, whose Wealth of Nations is considered the founding text of modern economics. His scholarship, incidentally, rested not on field research and scientific validation but on conversations with wealthy merchants in Glasgow and Edinburgh whom he had befriended during his years as a lecturer.

Smith asserted, as did others who followed him like Ricardo and Malthus, that any unused land left in common hands (rather than held in private hands) was doomed to remain fallow, unproductive, and essentially a drain on the commonwealth. It was only through private ownership, they said, that assets like land could be “improved” and therefore grow in value. This idea that resources not held in private hands necessarily go to waste was most successfully popularized by Garrett Hardin’s 1968 essay, “The Tragedy of the Commons.” Elites at the time embraced Hardin’s work, despite its lack of evidence and shoddy research methodology.

Today, Hardin’s Tragedy has been thoroughly discredited by numerous critics, many citing detailed research on pre-Enclosure agricultural and pastoral communities in England. What that research clearly shows is that, prior to the enclosures, far from letting shared land slide into ruin, common folk stewarding open lands under England’s late feudal system frequently worked collaboratively to manage land use equitably and to protect it from over-use, as they had for centuries.

Many of the innovations that early landlords claimed were “improving” the land, such as four-field rotation or the use of cover crops with grazing animals, had already been in wide use among peasant farmers for generations. Further, claims that agricultural productivity increased after enclosure, leading to the population boom associated with the Industrial Revolution, often lack hard evidence and are now the subject of vigorous debate among economic historians.

This calls into question the knee-jerk reaction of some in our culture to claim that rights held in common, rather than privately, automatically lead to tragic results.

So what about our data held in common? Each one of us produces vast quantities of textual, pictographic, and video data every year. These data are being consumed by private companies to train AI models. In many cases, those AI models are then sold back to us as a service. Our digital commons are thus being enclosed before our eyes. What’s more, these AI models are starting to replace knowledge workers, which will drive down the bargaining power of those who do knowledge work (that is, produce data) for a living.

Someone recently joked on social media that the robots were supposed to do the dishes and the laundry for us, while we enjoy writing and making art. But unfortunately, we ended up with the opposite situation. Too true.

I published an article for GE in 2017 (I made a copy of it on LinkedIn) claiming that soon the only tech jobs would be in design or data, as the application development layers in the middle were being rapidly automated (I now think that design might also be on the chopping block).

That article received a great deal of pushback at the time from incensed engineers who claimed that I was over-simplifying their work. But I am not speculating about this shift from a place of technical ignorance or naiveté. I have built many complex software systems, and I can readily see what LLMs can already do and how fast they have improved in just the last 12 months. Something big is shifting, and putting one’s head in the sand is not going to slow it down.

I recently watched a demo of an AI software developer (not Devin, but that doesn’t matter; soon there will be many). It was quite impressive, and seeing the AI agents piece together an application before my eyes was exciting and a little dizzying.

I felt a certain thrill watching just how quickly this machine was able to step through the many application building phases that felt so familiar to me, from generating a user story specification from one sentence of input, through to generating deployable code (including the automated tests).
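For readers who haven’t seen one of these tools in action, a minimal sketch of the kind of pipeline the demo stepped through might look something like the following. The complete() helper, the prompts, and the phase boundaries are illustrative assumptions on my part, not the vendor’s actual design.

```python
# A rough, illustrative pipeline in the spirit of the demo: one sentence in,
# a spec, code, and tests out. The complete() helper is a stand-in for
# whatever LLM API such a tool actually calls; nothing here is the vendor's code.

def complete(prompt: str) -> str:
    """Placeholder for a call to a hosted LLM; wire this up to a real client."""
    raise NotImplementedError("swap in your LLM provider of choice")


def build_application(one_sentence_idea: str) -> dict:
    # Phase 1: expand a single sentence into a user story specification.
    spec = complete(
        f"Write user stories with acceptance criteria for: {one_sentence_idea}"
    )

    # Phase 2: generate application code that satisfies the specification.
    code = complete(f"Implement this specification as deployable code:\n{spec}")

    # Phase 3: generate automated tests against the same specification.
    tests = complete(f"Write automated tests for this spec and code:\n{spec}\n{code}")

    return {"spec": spec, "code": code, "tests": tests}
```

Real products are far more elaborate than this crude outline, but the phases it names are the ones I watched the demo fly through.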

There was something deeply satisfying in watching the machine work. But then it gave me pause.

“Well, that’s it for software engineers,” I remember thinking immediately afterwards, as I washed the dishes between loads of laundry. Perhaps not literally, but the day is coming when that job as a mass career category is going to shrink and then quite possibly disappear entirely. And that day is closer than we think.

This is not intended as a “doom and gloom” piece on the AI “apocalypse” (never mind that the ancient Greek word actually means an unveiling, not destruction or ending). I am a pathological optimist for the human race in the long run, and always have been.

But it is important for those of us who work at the bleeding edge of technology, and who are excited to hasten these world-changing trends, to take regular stock of the social and human impacts of our work and to decide whether certain course corrections might be in order.

It bears remembering that from at least one perspective—that of the gods—Prometheus earned his punishment fair and square.


Some see the rise of AI as similar to previous eras of technological disruption and downplay the risk of major job losses. They can simultaneously extol the world-shaking virtues of the technology while waving away any popular concerns about its impact on the economy and the social fabric. The comparison with previous eras of techno-disruption in industry warrants closer examination.

When manufacturing industries in the US, starting with automotive, began to be gradually robotized in the mid-20th century, the US economy was in a very different place than it is today. For one thing, workers benefited from the support of a still very powerful labor movement that was able to bargain with employers for job training and re-skilling programs, and to blunt some of the worst economic impacts on families.

The political climate of the mid-20th century was also still firmly rooted in New Deal and Keynesian thinking, attitudes that supported government stewardship of the economy and a focus on strengthening social safety nets (see President Johnson’s Great Society programs of the 1960s).

Further still, the wave of automation during the 20th century was gradual, unfolding over decades rather than years, which gave the workforce time to adapt. Sure, many older workers never re-entered the workforce, but the next generation of workers assumed more highly skilled technical jobs. That will not be as easy this time, as AI races forward with increasing speed and intensity each year.


As an undergraduate economics student in the mid-90s, I remember economists already raising significant alarms about the growth of so-called contingent work: the replacement of the long-term employment typical of the middle class in the first post-war decades with short-term temporary work. Temp agency giant Manpower was often cited at the time as one of the largest employers in the country, signaling a significant shift away from full-time salaried work with benefits. That trend toward temp work did not reverse. Nearly two decades later, researchers raised concerns again, this time about the rise of gig workers as a replacement for even more full-time employees.

Despite all of the marketing spend from ride-share companies promoting the idealized picture of drivers as savvy, independent small businesses in their own right, their livelihoods ultimately depend on the decisions dictated by an algorithm. When you talk to ride-share workers, one common frame is immediately noticeable: the app is the boss.

Here’s another example. I recently ordered something from Amazon. Delivery at my house is complicated by the weird layout of the property. Most postal workers and local delivery people have adapted to the strange layout, which is easily overcome with human eyeballs and intuition. But the Amazon drivers kept leaving packages in an awkward spot that is hard to reach from inside the fence. Finally, I confronted one of the drivers about this (in a friendly way) and asked them to bring the package around the way every other driver had figured out.

Saddened, and a bit embarrassed, the driver explained that the device they carry provides exact point-to-point routing instructions. No matter how obvious an alternative might be to a human standing on the property, they cannot deviate from the dictated route without risking their job. The only option was for me to submit feedback through my app with the proper coordinates and delivery instructions.


The real scandal in the recent Boeing aircraft failures is not simply that the company “didn’t listen to the engineers anymore,” although there is strong evidence that this failure damaged safety. The deeper issue is that leadership’s motivations were driven entirely by Wall Street, over and above questions of quality and safety for their customers.

Indeed, the ever-tighter connection between Wall Street and Silicon Valley has driven the same outcomes in the digital world. Google, Apple, Amazon, and others are all being sued over antitrust, anti-competitive, and anti-worker violations. At the same time, they feverishly aggregate the data we all produce so they can train AI models that will ultimately come to replace many of the knowledge workers who produced that training data in the first place, a bitter irony.

Shouldn’t we as technology leaders push for increased productivity through investment in automation?

Isn’t automation synonymous with some form of “continuous improvement”?

Isn’t AI simply automation of cognition?

As in many aspects of life, the answer is more complicated than a simple yes or no. It is instructive to look at the attitudes toward automation held by our forebears, heroes of quality engineering and continuous improvement like Taiichi Ohno, W. Edwards Deming, and Eli Goldratt.

If you haven’t read Ohno’s biographical account of developing the venerated Toyota Production System, I urge you to do so. In it, Ohno specifically calls out Toyota’s embrace of autonomation, which he defines as “automation with a human touch.” That nuance reflects the deep importance to Toyota of the principle of Respect for People, a principle that hundreds of copycat corporations have glossed over, and in doing so failed to realize the benefits of The Toyota Way.

Similarly, Deming elevated human intervention even over the statistical process controls he himself championed for decades. And Goldratt’s treatment of robots in his business novel, The Goal, vividly illustrated the cost of investing in automation without considering flow and throughput.
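To bring Ohno’s distinction into our own domain, here is a minimal sketch of what “automation with a human touch” might look like in a software pipeline: the routine work is automated, but the line stops and asks for human judgment the moment something looks abnormal. The record structure, the anomaly rule, and the review prompt are my own illustrative assumptions, not anything prescribed by Ohno or Toyota.

```python
# A rough software analogue of autonomation (jidoka): automate the repetitive
# work, but stop the line and summon a human when something looks wrong.
# Everything here (the Record type, the anomaly rule, the review prompt) is
# an illustrative assumption, not a prescription.

from dataclasses import dataclass


@dataclass
class Record:
    id: int
    value: float


def looks_abnormal(record: Record) -> bool:
    # Stand-in anomaly check; in practice this encodes the team's own
    # definition of "something is off" (validation rules, control limits, etc.).
    return record.value < 0 or record.value > 1_000_000


def ask_human(record: Record) -> bool:
    # The "human touch": the line stops and a person decides what happens next.
    answer = input(f"Record {record.id} looks abnormal ({record.value}). Process anyway? [y/N] ")
    return answer.strip().lower() == "y"


def process(record: Record) -> None:
    print(f"processed record {record.id}")


def run_line(records: list[Record]) -> None:
    for record in records:
        if looks_abnormal(record) and not ask_human(record):
            print(f"line stopped at record {record.id}; investigate before resuming")
            break
        process(record)


if __name__ == "__main__":
    run_line([Record(1, 42.0), Record(2, -5.0), Record(3, 7.0)])
```

The specifics matter far less than the posture: the machine does the repetitive work, and a human is deliberately kept in the loop at exactly the moments that call for judgment.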


So, what lessons can we draw from history to help us guide and shape our collective future in a way that generates the maximum benefit for the greatest number of people?

There were many individual heroes and villains in the story of the English Enclosure, as common folk fought back and struggled to retain their livelihoods. But Enclosure was not perpetrated by any single individual, or even by a set of individuals. It was instead a systemic shift that emerged from the many small business decisions that became increasingly common in the aftermath of Henry VIII’s seizure of monastic lands. Once those economic forces producing agricultural products for foreign and domestic markets had been unleashed, they became impossible to contain. The same will likely be true of this new wave of AI technologies.

For better or worse, the Enclosure led to the birth of the modern world, catalyzing the Industrial Revolution and influencing both the American and French Revolutions, thus establishing the modern liberal democratic nation-state. Further, the concentration of workers in the factories of Europe’s and North America’s cities led to mass organization and social movements that eventually won concessions in labor law and environmental protection that we all benefit from today, from workers’ comp and unemployment insurance to Yosemite and the Clean Air Act.

If we leave all the decision-making on AI development solely in the hands of those with financial empires at their disposal, we are at risk of repeating the long and bloody history of the Enclosure. The future of humanity is too important to leave to individuals like Musk, Bezos, and Zuckerberg alone, or to the so-called wisdom of the market. The public benefits ultimately won by working people in the aftermath of the Enclosure were achieved by collective action, and it will take collective action again to mitigate the sweeping changes we’ll experience over the coming decades.

More and more, the leaders we talk to about building great organizations capable of addressing the world’s multiple major crises are motivated by a deep belief in the power of collective action working toward a common, positive vision of the future. At Startup Patterns, we are committed to helping these leaders navigate the challenges of today’s technology and market environment without sacrificing that future to short-term gains or quarterly earnings calls.

We invite you to join us in making that vision of the future a reality. Another world is not just possible, it is inevitable.
