From physical toil to cognitive burnout: modern labor and the ethics of software design
May 30, 2025
The Common Intelligence Runtime project explores the future of software engineering in the post-AI era, blending philosophical inquiry with technical design to develop a shared runtime for intelligence, biological and artificial.
- From physical toil to cognitive burnout: modern labor and the ethics of software design
- Internal states, external worlds: concurrency, consensus, and sociotechnical challenges in computing
- Engineering speak: prolegomena to ideal technical discourses
- Cognitive mirror: how LLMs compress language and challenge our sense of self
In traditional Marxist analysis, society is cleaved into classes with opposing interests—notably laborers versus capital owners—often resulting in conflict. Software development, however, offers a modern twist on this narrative: engineers (producers) and users (consumers) can be seen as partners co-invested in a common cause. That cause is the elimination of unnecessary human toil and pain through automation. In essence, both the creators of software and the beneficiaries of software share a stake in using technology to reduce drudgery, inefficiency, and “blood, sweat, and tears” in work. Rather than a zero-sum struggle, their relationship can be framed as a positive-sum collaboration aimed at reducing collective human suffering.
Marx envisioned that the advancement of technology under capitalism, while initially used for profit, also reduces the necessary labor time needed for tasks—a change that could ultimately liberate humans from drudgery. This moral vision aligns with a utilitarian ethos: the right use of technology is that which “promote[s] well-being and reduce[s] suffering” for the greatest number of people. In the realm of software, these ideas translate into a guiding principle: if a piece of code can spare people (whether end-users or developers) from tedious, painful labor, then deploying it is not just economically sound but ethically desirable. In the ideal case, technology should reduce total suffering rather than merely shifting it from one group to another.
Marx’s critique of capitalism centered on how human labor is exploited and how workers and capitalists have inherently conflicting interests. Yet Marx also saw a revolutionary potential in automation. In his Grundrisse (notably the “Fragment on Machines”), Marx observed that capitalism’s drive for efficiency forces it to “progressively replace labour—animal and human, physical and cognitive—with machines.” This tendency, he noted, unintentionally “reduces human labour…to a minimum”, which “will redound to the benefit of emancipated labour, and is the condition of its emancipation.” In other words, the very progress that allows capitalists to produce more with less labor time could free workers from necessary toil in a future post-capitalist society. Marx deemed this reduction of necessary work to be a “force of potential liberation”, even if under capitalism it also produced new contradictions.
A similar vision animates the “post-work” political theory advanced by modern thinkers (sometimes termed fully automated luxury communism or left futurism). They envision a world where advanced automation allows work hours to be drastically reduced and material needs are met with minimal human labor. Marx himself could be viewed as an early proponent of this idea; as one commentator wryly put it, “Marx was a fully automated luxury communist” in that he foresaw an “opulent post-work society” enabled by technology. The post-work ethic holds that technology should be deployed to abolish unnecessary suffering and toil, not to entrench existing power imbalances. When producers and consumers unite around that goal, software development becomes a vehicle for social progress: it isn’t just about new apps or profits, but about optimizing human freedom. In summary, the Marxist moral framework—fortified by utilitarian concern for well-being—suggests that the highest purpose of technology is to decrease the total amount of unpleasant labor humans must endure.
Post-work labor relations
In the context of software, who are the “producers” and “consumers”? Producers include the engineers and firms that design, develop, and deploy software systems. Consumers include the users or customers who utilize those systems to accomplish tasks or solve problems. Under a simplistic market view, one might think producers and consumers have divergent interests: producers seek profit or productivity, while consumers seek low cost and high utility. However, when we frame the objective as eliminating toil and pain, a remarkable alignment emerges. Producers only succeed if their software truly eases the pain points of users; and users benefit most when producers are empowered to create effective, labor-saving solutions. In other words, both sides gain from the same outcome: the replacement of a painful status quo with a smoother, automated alternative.
This alignment can be illustrated by the importance of solving “pain points” in product design. In user experience (UX) terms, “solving [user] pain points…creates business value while enhancing a product’s usability and desirability.” A product that substantially reduces a user’s effort or frustration is more likely to be adopted and paid for, directly benefiting the producing firm. A mundane example is a rideshare app: it solved the user pain of hailing a taxi on the street (which could be time-consuming and uncertain) by providing an automated dispatch and payment system. Users got convenience; the company (as producer) profited by taking a cut of fares; and engineers took pride (presumably) in building a system that eliminated an old friction. Rather than being at odds, the interests of the user and the provider were unified by the goal of removing an inefficient manual process.
What was once a political and collective struggle for liberation from labor—the core of the Marxist project—has gradually been reinterpreted through the lens of product design and usability. The aim of abolishing toil and alienation has been reframed, not as a transformation of the social order, but as a series of incremental improvements in user experience. In this shift, the revolutionary figure of the past has given way to the modern UX designer or product manager, whose role is not to challenge the conditions of labor, but to reduce friction within them.
Where emancipation was once imagined as freedom from imposed necessity, it is now pursued as convenience within necessity. Software is built not to abolish work, but to streamline it. The language of “pain points” typifies this ideological softening. Rather than confronting structural exploitation, we now locate discomfort in micro-interactions: too many clicks, a slow loading page, a confusing form field. The solution is not collective resistance but a redesign sprint.
To be clear, there is real value in making systems more humane and responsive. When designed thoughtfully, software can reduce frustration, restore agency, and even open access for those previously excluded. But this kind of problem-solving operates within a narrow frame. It addresses symptoms, not causes. It treats suffering as a usability defect rather than a structural condition. In doing so, it risks displacing the political with the procedural.
The deeper issue is not that pain points are being addressed, but that they have become the only acceptable site of intervention. The pursuit of liberation is no longer revolutionary—it is managerial. The design process has absorbed the emancipatory impulse and repurposed it into metrics: reducing drop-off, increasing engagement, smoothing workflows. What once aimed to eliminate drudgery altogether is now content to optimize it.
This shift marks a quiet transformation in the political imagination of our time. The future is no longer envisioned as a break from the past, but as a marginal improvement on the present. A/B testing has replaced dialectics. User onboarding has supplanted organizing. Time-on-task is the new measure of success. And in this world, the dream of collective autonomy is reinterpreted as personalized settings and seamless interfaces.
There is nothing inherently wrong with improving user experience. But we must be honest about what it displaces. When the language of liberation is domesticated into product language—when freedom becomes ease, and struggle becomes friction—we lose sight of a more radical horizon. Real freedom does not merely mean reducing clicks. It means being able to choose whether to participate at all.
We can generalize this: whenever a manual workflow or human labor process is rife with friction, cost, or “toil,” there is an opportunity for producers and consumers to collaborate (even if implicitly via market forces) on an automated solution. It is often said in software engineering that good developers are “lazy” in a productive way—they hate doing repetitive tasks and will spend effort to automate them rather than do them over and over. This adage (sometimes attributed to Bill Gates: “I will always choose a lazy person to do a difficult job, because a lazy person will find an easy way to do it.”) reflects a truth: engineers (producers) feel the pain of inefficient processes, too, and have a natural incentive to eliminate tedious work by writing code. When they do so successfully, the end-users of that code benefit by having their tasks simplified.
In a sense, software collapses the producer-consumer divide: a well-designed tool essentially transfers the burden of work from the human user to the machine. The user’s role becomes easier (less manual effort), and the machine—which is a crystallization of the producer’s labor—shoulders that burden. For example, consider how search engines replaced the labor of manually sifting through library catalogues or archives. Users now retrieve information in seconds (massively reducing their toil), while engineers and companies invested resources to build and maintain the search software. The cost to the user dropped; the value to the producers came from monetizing the service or deriving data. Crucially, both sides benefited: one got convenience, the other got a viable business model—all by removing a labor-intensive activity from humanity’s collective to-do list. In a classical Marxist sense, one might say the use-value of software lies in its ability to save human labor, and here the use-value directly aligns with exchange-value (producers make money by saving users time).
Even within organizations, the distinction between producer and consumer can blur when it comes to reducing toil. Modern devops practices encourage treating internal developer teams as customers of each other’s tools. For instance, a platform engineering team will build self-service tools to eliminate repetitive setup work for application developers—the platform team “produces” automation that other engineers “consume”, with both groups aligned in wanting to remove manual toil. Google’s Site Reliability Engineering (SRE) discipline explicitly treats toil reduction as a shared goal between those who maintain systems and those who use them. SRE guidelines note that “50% of each SRE’s time should be spent on engineering project work that will either reduce future toil or add service features”, because if left unchecked, “toil tends to expand… and can quickly fill 100% of everyone’s time”. This highlights that from the producer side (engineers), fighting toil is crucial to prevent burnout and to enable progress—a theme we’ll explore more below. From the consumer side (users), reducing toil translates to more reliable and responsive services (since engineers who aren’t firefighting toil can deliver improvements). Thus, both sides invest in toil reduction and both reap rewards.
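The 50% guideline quoted above reduces to simple arithmetic, which makes it easy to operationalize. The following sketch is illustrative only—the function name and report fields are my own, not part of Google's actual SRE tooling:

```python
def toil_budget_report(toil_hours: float, total_hours: float,
                       budget: float = 0.5) -> dict:
    """Check a team's toil fraction against a toil budget
    (0.5 by default, per the SRE guideline quoted above)."""
    fraction = toil_hours / total_hours
    return {
        "toil_fraction": round(fraction, 2),
        "over_budget": fraction > budget,
        # Hours of toil that must be automated away to get
        # back under the budget for engineering project work.
        "hours_to_reclaim": max(0.0, toil_hours - budget * total_hours),
    }
```

For example, an engineer spending 25 of 40 weekly hours on toil is at 62% and needs to reclaim 5 hours; the point of the guideline is that this surplus is invested in automation that shrinks future toil, rather than being absorbed indefinitely.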
By reframing producers and consumers as allies, we see that every painful manual process is a potential win-win scenario. The firm or engineer who automates it gains a competitive or productivity advantage, and the user or worker freed from that task gains time and relief. This is a fundamentally cooperative view, one that Marx might approve of in the sense that it seeks to overcome alienation: rather than technology being used against the worker or user, it is deployed on their behalf. When guided by the right ethos, software development becomes a joint venture where the success is measured in human toil eliminated. Both classes—those who make the software and those who use it—find common ground in the pursuit of less pain and more freedom.
When automation reduces toil but increases strain
One might assume that advanced software and automation would alleviate these pressures by taking over drudge work and freeing humans for more fulfilling tasks. Indeed, automation has eliminated much routine toil—from factories run by robots to scripts that handle repetitive office tasks. But paradoxically, automating work can intensify the demands on human workers’ cognition and nerves. This is a classic problem engineers call the “Ironies of Automation.” As early as 1983, the psychologist Lisanne Bainbridge observed that when you automate the easy parts of a job, you often leave humans with the hardest parts—the exceptions, emergencies, and oversight of complex systems. The human operator becomes a stressed monitor, forced to remain vigilant while bored, then suddenly expected to intervene in a crisis with sharp skills that, ironically, atrophy during long stretches of automated routine. In short, automation tends to remove regular practice but not responsibility. We see this in many domains: airline pilots rely on autopilot but must be alert for rare failures; warehouse pickers follow algorithmic schedules that set a relentless pace; content moderators use AI filters but then must personally review the most gruesome, traumatizing material the AI flags. Far from a leisure utopia, automation often creates new mental and emotional burdens on workers.
In creative and knowledge work, the effect is more subtle but still present. Software automation can accelerate the pace of output and raise expectations, which may lead to work intensification rather than relaxation. A clear example is the software development pipeline. Modern dev teams use automation at every step—automated testing, continuous integration and deployment, AI-assisted code suggestions—ostensibly to ease developers’ workload. Yet the outcome is that code can be shipped faster and in greater volume, which often means programmers are expected to juggle even more tasks in parallel. They must manage automated test failures, continuous streams of feedback, security alerts, and rapid release cycles. In effect, automation removes certain manual tasks but floods the cognitive bandwidth with overseeing automated processes and responding to their output. There is a constant “algorithmic drip” of notifications and status checks. The coder’s role shifts toward orchestration and supervision of tools—which, while less physically tiring than manual coding or testing, can be mentally exhausting in its own way. It often entails high levels of context-switching (e.g. moving between code, CI dashboard, chat, project tracker) and a feeling of never being caught up because the automated pipeline runs 24/7.

Beyond coding, consider the gig economy or “platform labor” facilitated by algorithms: Uber’s software automates dispatching and pricing, but drivers now must contend with an app that nudges them to work longer (through gamified incentives), tracks their every move via GPS, and can deactivate them if metrics fall short. The driving itself may be no more taxing than it was at an old-fashioned taxi company, but the affective and cognitive load—monitoring the app, maintaining ratings, strategizing to meet opaque algorithmic criteria—is significant.
Similarly, Amazon warehouse workers get algorithmically optimized routes and automated scanners; the result is a notorious speed-up with some workers literally unable to take restroom breaks. These examples show how digital automation can become a form of “algorithmic management” that pushes humans to perform like cogs in a machine, sometimes amplifying stress even as it saves physical effort.
Even in white-collar settings, software tends to fill every freed minute with new tasks. If email sorting is automated, you simply end up processing more email faster; if scheduling meetings is automated (via Calendly etc.), you often get more meetings. The void is always filled in a competitive work culture. In theory, automation could give us leisure; in practice, under capitalism it often gives us pressure to do even more. That pressure is both quantitative (more work units per hour) and qualitative (the remaining work demands more concentration or emotional input). As one work design scholar notes, automating simpler tasks can increase the cognitive load on workers by shifting them to monitoring roles and creating “automation surprises” that require intense human intervention at a moment’s notice. The workload also becomes more erratic—long periods of low engagement punctuated by spikes of high-stakes decision-making—which is itself stressful. In creative fields, automation might handle routine production (say, a graphic designer’s layout or a writer’s grammar checks), but then clients simply raise their expectations for turnaround and volume. What the software saves, the business often recaptures as surplus output. This dynamic leaves many creative professionals feeling like hamsters on a wheel that only spins faster after each efficiency gain.
While software automation has undoubtedly removed much manual drudgery, it has not removed—and in some cases has worsened—the affective and cognitive strains of work. The grind has shifted from the factory floor to the mind. As Berardi writes, today’s knowledge worker sits in front of a screen, and “our bodies disappear” while the brain and nerves are continuously engaged. The pain points of work—once sore muscles—are now burnout, anxiety, attention fatigue, and tech-induced stress injuries (from eye strain to “Zoom fatigue”). Automation hasn’t yet delivered a humane workload; instead, it often augments managerial control and intensifies the tempo of labor. Recognizing this is key to designing a better future of work.
Humane software design and the New Age ethics of productivity?
The above dynamics carry significant ethical and political implications. If modern software and platforms contribute to fracturing workers’ time and sapping their emotional well-being, how might we design systems that truly reduce human suffering rather than compound it? This is not just a technical question but a moral and political one. It forces us to ask: What is the ultimate goal of our productivity tools and work platforms? If the goal is solely efficiency and profit, we get the status quo—workers as optimized inputs, their “souls” at work until they collapse. A more ethical vision would aim for human flourishing, dignity, and sustainability in labor. That means rethinking design priorities and power relations in the workplace.
First, at the level of software and UX design, there is growing recognition that tools must balance productivity with well-being. For instance, even Slack’s own research found that employees who unplug after hours are more productive (20% higher productivity) than those who routinely work late. Those who felt obligated to stay connected experienced twice the burnout and significantly more stress. The takeaway is clear: it benefits no one in the long run to drive workers into the ground. Humane software design could take cues from this by encouraging reasonable boundaries. Concretely, this could mean defaults that discourage out-of-hours messaging (e.g. reminding senders that a recipient is offline, or scheduling messages to be delivered in the morning by default). Some tools have introduced “do not disturb” modes or smarter notification batching—features aimed at giving workers back control of their attention. But tech alone can’t solve it; organizational culture and norms must change in tandem. Software could support a culture of asynchronous communication and deep focus time. For example, instead of glorifying instant responsiveness, project platforms might reward thoughtful completion of work and respect for focus. This might look like asynchronous project management, where updates happen in documented threads (as in Basecamp or in Jira tickets) rather than interruptive chat, and where teammates don’t expect each other to answer immediately unless it’s truly urgent. Such practices protect chunks of “flow” time for creative workers to actually think and create—addressing the fragmented time problem at its core. The key is designing systems that complement human cognitive rhythms (which need periods of uninterrupted focus and genuine rest) rather than constantly violating them.
Secondly, on an ethical and political front, there is a call for worker agency and rights in how these platforms operate. One aspect is legislative or policy action—for example, the “Right to Disconnect” laws emerging in countries like France, which mandate that companies cannot retaliate against employees for ignoring work communications during off hours. Such laws essentially force a boundary that technology had eroded. They compel employers (and by extension, the software they deploy) to respect human limits. In the U.S., discussions of a right-to-disconnect are growing, and while no federal law exists yet, some forward-looking companies voluntarily adopt policies to discourage after-hours emailing and messaging. Politically, this reflects a broader pushback against the totalization of work life. It suggests that we, as a society, can decide to set humane limits—that being an employee or a freelancer does not mean signing away one’s nights and weekends to an infinite feed of pings.
Another political dimension is worker voice in the design of work platforms. Too often, software systems are imposed top-down, with workers having little say in tools that profoundly shape their day. Empowering workers—through unions, works councils, or even participatory design processes—can lead to platforms that serve human needs better. For instance, if gig workers or content moderators had a say, they might demand features that allow pausing tasks when overwhelmed, or transparency in algorithms so they can plan their work rather than be ambushed by sudden changes. In creative industries, employees are beginning to question the barrage of tracking and metrics. Some tech companies have seen staff push back on “productivity score” tools or excessive monitoring, on the grounds that these erode trust and mental health. Ethically designed software would treat workers not as abstract “users” to be nudged and optimized, but as partners whose welfare is a core design goal.
There is also a philosophical shift needed: reclaiming the idea that technology should serve human values, not undermine them. Early internet visionaries imagined digital tools might free us for more leisure, community, and creativity. In reality, we got the “burnout society.” But the future isn’t written in stone. Designers and engineers can draw on principles of calm technology (designing tools that inform without overwhelming) and slow tech movements that prioritize quality of life. Even simple changes—like emphasizing outcomes over hours—can help. For example, project software could be tweaked to focus less on minute-by-minute online presence and more on deliverables, giving workers flexibility in how and when they work, which is known to reduce stress. The rise of remote work during the pandemic opened many eyes to the possibility of organizing work more humanely: flexible schedules, outcome-based evaluation, and the elimination of many pointless meetings (or their replacement with async updates). However, it also showed the perils of working from home without boundaries—hence the critical need to establish norms and tools that protect personal space and rejuvenation time.
Finally, addressing the “deeper pain of fragmented time and emotional depletion” requires acknowledging that worker well-being is an end in itself, not just a means to productivity. This is a profoundly political stance because it challenges the profit-driven logic that has dominated software development for workplace use. It suggests that maybe the next wave of innovation in productivity tech should not be about squeezing out another 5% efficiency, but about radically improving the quality of work life. Imagine software dashboards that, alongside KPIs, show team well-being metrics (collected anonymously) and encourage leaders to adjust workloads accordingly. Or AI that’s purposed not just to assign more tasks, but to detect when a worker has been at it too long and needs a break—essentially an AI that defends the human from overwork, rather than an AI that surveils the human for slacking. These ideas flip the script on the usual power dynamics of workplace tech.
In conclusion, the evolution from physical to cognitive labor has brought us to a crossroads. The same digital tools that have liberated us from much back-breaking labor have, in many cases, chained our minds and hearts to a ceaseless work cycle. Scholars like Berardi, Fisher, Han, and Federici help us see that this is not just a personal problem or an inevitable fact—it is a result of choices in how we structure work and design technology under late capitalism. The good news is that what has been designed by humans can be redesigned. We can insist that software and platforms be evaluated not only on their efficiency gains, but on their impact on human happiness, community, and time. We can foster a culture that values rest, deep work, and emotional sustainability as much as productivity—recognizing that, in creative endeavors especially, a mind at ease and a life with balance produce the best, most original results. The challenge is both ethical and political: it requires technologists, workers, and policymakers to pull together and reimagine digital work environments that truly empower the worker, body and soul. Such a reimagining might finally deliver on the old dream that technology would reduce toil—not by simply extracting more output from each of us with less sweat, but by giving us back the time and energy to be fully human.
The materialist response?
Yes, cognitive labor is exhausting. Yes, Slack notifications fragment our time. And yes, many workers feel emotionally depleted by the overload of modern work tools. But the problem is not that we are asked to work; it is that we have forgotten why we are working in the first place.
The current conversation around burnout often treats exhaustion as an emotional or psychological failure, or as a culture problem to be fixed by mood-management: meditate more, set boundaries, turn off notifications. But these are surface-level treatments. They misdiagnose the condition. The real crisis is not one of “too many pings”—it is a loss of teleological clarity. In Marx’s language: we no longer understand the relation between labor, technology, and the productive transformation of society.

From a historical-materialist perspective, the only meaningful benchmark for software is whether it improves the productivity of labor—whether it allows a worker, a team, or an organization to accomplish more valuable output with fewer inputs. Good software doesn’t simply manage attention or set boundaries; it transforms the structure of tasks, compresses coordination time, automates redundant steps, and unlocks scale. If software instead fragments time, proliferates micro-tasks, or generates anxiety without measurable gains in output, then its failure is not moral but technical: it is inefficient, bloated, and unfit for the productive mode it claims to advance.
This is not a wellness crisis. It is a productivity crisis, masquerading as a psychological one.
Software was never meant to make us happy. Its purpose is not emotional equilibrium or mindfulness. Its purpose, like that of any tool, is to change the conditions of production—to help us do valuable work faster, with less waste. That is the measure of its goodness: not how it feels to use, but what it enables us to accomplish. From this vantage, Slack and Jira are not inherently oppressive or broken—they are only as good as the production systems they serve. If they contribute to the fragmentation of time without yielding proportionate gains in productive output, then the solution is not “work-life balance.” It is better software, embedded in clearer workflows, designed to reduce real toil, not simulate activity. Rather than designing tools to coddle overstimulated knowledge workers, we should focus on tools that clarify objectives and eliminate waste. The ethical imperative is not to mute Slack notifications—it is to render them unnecessary by building systems that reduce the need for constant reactive communication in the first place. Automation, data synchronization, shared source-of-truth models—these are not lifestyle upgrades; they are structural interventions that reduce toil. They are material contributions to the reorganization of labor.
This is not a dismissal of emotional pain or cognitive fatigue. It is an insistence that these symptoms must be read in light of material conditions, not cultural vibes. Burnout isn’t a feeling—it’s a signal. It tells us that the current structure of work is producing more friction than value. But this doesn’t mean we should retreat into therapeutic culture or recoil from productivity altogether. It means we must refocus on building systems—technological and organizational—that actually increase the rate of real productive work per unit of human effort. That’s the Marxian project, applied to code.
From this angle, many well-meaning proposals to mitigate burnout—right-to-disconnect policies, enforced quiet hours, or stress detection algorithms—miss the mark. They respond to the symptoms of structural inefficiency with behavioral patches. But no amount of “work-life balance” will fix systems where the unit economics of communication and coordination are broken. The problem is not too many messages; it is too many dependencies and too little clarity about what needs to be done, by whom, and when. A Slack ping at 11pm isn’t the problem. The problem is the unresolved decision it represents, still circulating days after it should have been closed by a clearer system. The solution isn’t fewer meetings or more wellness Slack channels. It’s to write better software. Not just more usable, but more efficient, more intelligent, more integrated—code that actually transforms workflows, replaces toil, and compounds human effort.
Burnout is real. But it is not a crisis of feelings. It is a crisis of misaligned systems. Workers are right to be tired. They are being asked to participate in platforms that simulate productivity without transforming the cost structure of labor. The way out is not to demand less work or slower work—but to return to the materialist ethic that understands progress as the reduction of socially necessary labor time, achieved through real improvements in the productive process. Not therapy. Not vibes. Better tools.
The political dimension here is not about empathy or balance—it is about control over the productive architecture of work. Workers burn out not simply because they’re overworked, but because their labor is poorly structured, continually interrupted, and misallocated by platforms that privilege activity over output. Giving workers agency does not mean giving them a “wellness stipend.” It means giving them systems that respect their time by removing unnecessary complexity and improving the density of productive effort. To do that, we need a different philosophy of design—one grounded in material efficiency. That means evaluating software not by how connected it makes us feel, but by how much friction it removes from workflows that matter. It means favoring asynchronous tools that reduce coordination overhead, favoring atomic systems that minimize dependencies, and treating each improvement in productivity not as a UX win, but as a historical contribution to the reconfiguration of labor.
What matters is not that software is humane in its tone, but that it materially improves the conditions of labor. That it transforms production modes in such a way that fewer hours are required to generate the same—or more—value. That is the original spirit of historical materialism: not that we feel good, but that we do less unnecessary work. Not that work disappears, but that toil is minimized. And not that we escape labor, but that we finally begin to reorganize it, scientifically and decisively, with code.
Let’s stop chasing emotional equilibrium in systems that aren’t designed to produce it. Instead, let’s build systems that work. Let’s measure our tools not by how they feel, but by what they achieve. That is the sober, historical-materialist response to burnout. Not tears. Not push notifications. Just better software.
Consider a cautionary example from the market itself. Startups come and mostly go, but Builder.ai is an unusual case.
Having raised $445 million from investors including Microsoft Corp. and SoftBank Group Corp., the British firm entered insolvency proceedings this week after a major creditor seized $37 million from its accounts, leaving $5 million in the company’s coffers. Builder.ai’s former staff told Bloomberg News that it had been inflating its sales figures to investors, forcing the company to lower its sales estimates in March. But that wasn’t the only thing it inflated: When I investigated the startup back in 2019, workers told me that its core technology of building apps with artificial intelligence was mostly being done by software developers in Ukraine and India.
The company denied this at the time. (It also later changed its name to Builder.ai from Engineer.ai.) But a spate of other companies has been rapped over the past year for secretly using humans in the place of “AI” thanks to crackdowns by the US Securities and Exchange Commission, Department of Justice and Federal Trade Commission. A less egregious approach is to exaggerate how cutting-edge their tech truly is. In both cases, customers and investors take the bait because they don’t do proper due diligence — and because the definition of “artificial intelligence” itself is so grey, its underpinnings so difficult for non-technical people to parse, that its sellers can get away with slapping the label on more basic software. Or, at least, they could.
A similar confusion runs through the competitive marketplace. In much of today’s discourse around software, value is measured in terms of user delight, speed of delivery, or the polish of the interface. Success is equated with keeping customers happy, shipping faster, or building more “seamless” experiences. But these are surface metrics. They obscure a deeper, more objective reality: what makes software truly valuable is not how it feels but how it functions, that is, how effectively it helps real work get done.
This is where a Marxian lens proves clarifying. In Marx’s historical materialism, the central driver of social change is the transformation of the productive forces—how labor is organized, how tools evolve, and how much time is required to meet material needs. From this perspective, software isn’t a lifestyle product or a UX canvas—it is part of the mode of production. Its quality should be judged not by user sentiment or aesthetics, but by whether it meaningfully reduces the labor time required to accomplish valuable tasks. Good software compresses time. It removes inefficiencies. It unlocks scale. In doing so, it shifts the boundary of what is economically and organizationally possible.
This reframing also challenges the fantasy of sudden, disruptive leaps. Social and technical progress rarely unfolds through revolutions; more often, it accumulates gradually, through small, iterative improvements. In this sense, the true historical movement of our time may not be found in sweeping declarations or moonshots, but in the steady advance of productivity—one startup at a time. Each company that replaces a manual process with automation, that resolves a coordination bottleneck, or that scales a once-fragile operation, contributes to a larger arc of historical development, even if imperceptibly.
This is the logic of historical materialism rendered in code: not grand ideological battles, but quiet, compounding gains in efficiency. Software, at its best, is not a consumer good. It is infrastructure for human labor. It changes what work is required, how it’s distributed, and how much effort it takes to produce real value. The significance of a product lies not in how it makes someone feel in the moment, but in how it reorganizes the workflow that underpins a business, a profession, or a society.
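How quickly do “quiet, compounding gains” add up? A minimal sketch, with assumed numbers chosen purely for illustration: a 2% reduction in required labor time per release cycle, sustained over 50 cycles.

```python
# Small, repeated efficiency gains compound like interest.
# Assumed numbers for illustration: each improvement shaves 2% off the
# labor time that remains, applied over 50 release cycles.

def remaining_labor(gain_per_cycle: float, cycles: int) -> float:
    """Fraction of the original labor time still required after `cycles`
    improvements, each removing `gain_per_cycle` of the remainder."""
    return (1.0 - gain_per_cycle) ** cycles

print(f"{remaining_labor(0.02, 50):.2f}")  # ~0.36 of the original labor time
```

No single 2% improvement is a revolution, yet fifty of them cut socially necessary labor time by nearly two thirds, which is the gradualist arc the passage describes.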
This is not to dismiss usability or responsiveness. Those matter—but only insofar as they serve a larger goal: to make work more effective, and to do so with less waste. A faster load time matters if it increases throughput. A better onboarding flow matters if it accelerates time-to-competence. What distinguishes great software is not how fast it is, or how pretty it looks, but how deeply it embeds itself into a process—and makes that process better, cheaper, faster, or more scalable. In other words, it changes the cost structure of work itself.
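The point that speed matters only insofar as it changes throughput has a classical form in Amdahl’s law: a speedup changes the cost structure only in proportion to the share of the workflow it touches. A short sketch, with illustrative numbers of my own choosing:

```python
# Amdahl-style check: accelerating one step helps the whole workflow
# only in proportion to that step's share of total time.

def overall_speedup(fraction: float, step_speedup: float) -> float:
    """Whole-workflow speedup when `fraction` of total time is
    accelerated by a factor of `step_speedup` (Amdahl's law)."""
    return 1.0 / ((1.0 - fraction) + fraction / step_speedup)

# A 10x faster page load that occupies 5% of a worker's day:
print(f"{overall_speedup(0.05, 10):.3f}x")  # ~1.047x overall
# Automating a step that consumes 60% of the day, even at only 3x:
print(f"{overall_speedup(0.60, 3):.3f}x")   # ~1.667x overall
```

The polish win barely registers; the structural win reshapes the day. That is the difference between smoothing the surface of work and changing its geometry.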
From this angle, the moral imperative of software is not to generate delight, but to generate leverage. Its ethical value lies in its ability to reduce human toil—not through superficial optimization, but through genuine increases in labor efficiency. It doesn’t merely smooth the surface of work; it reshapes its underlying geometry.
In this light, we might understand the evolution of software not as a series of consumer innovations, but as the technocratic continuation of historical materialism. Progress happens not with slogans, but with systems. Not with revolutions, but with refactors. It is not declared; it is committed—one pull request at a time.