Category Archives: modern economics

What David Graeber and David Wengrow’s “The Dawn of Everything” has in common with sci-fi economics

A few weeks ago, I finished David Graeber and David Wengrow’s The Dawn of Everything. It is not science fiction, nor is it economics. And yet, I propose it has to do with both. The book begins by stating that inequality has risen to the top of the agenda for public debate. Scholars, politicians and business leaders are calling attention to it. Where, they ask, does inequality come from? How can we reduce it to tolerable levels?

The authors then proceed to question these questions. A lot of people believe there is this thing called “inequality”, and it comes from somewhere – it was not always there. This belief is cultural, and we can investigate it. So, Graeber and Wengrow’s research question is not “where does inequality come from?”. It is “where does the question about where inequality comes from come from?”

It is a move typical of anthropology. In fact, Graeber does the exact same thing in the opening chapter of Debt: The First 5,000 Years. He finds himself at a party, and the conversation turns to the Greek debt crisis. A guest remarks that yes, the Greek people are suffering, but “debts must be paid”! And Graeber wonders: why do people think that? Where does this belief come from? And off he goes.

I am a big fan of Debt. I read it twice, back to back, and went back to it several times to re-read inspiring digressions from the main theme. While worth reading, Dawn is not as good as Debt, in my humble opinion. Its arguments are not as thorough, and it tends to treat absence of evidence as evidence of absence. But it does make an important contribution: it replaces the question on the origin of inequality with a better question. That better question is: how did we get stuck?

It works like this. The question about the origins of inequality implies a linear social process. Back in humanity’s hunting-gathering days, the story goes, all men and women were equals. But then, about 10,000 years ago, humanity shifted to agriculture. This created regimes of private property, cities, complex societies, wars over resources, and élites that appropriated the surplus. So, we got inequality, and we are stuck with it, because it is the price to pay for having a complex society.

The problem with this story is that it is not borne out by what we know. Graeber and Wengrow’s data consist of archaeological findings and ethnographies of indigenous societies encountered by European settlers in colonial times. And these, the authors tell us, agree: there is no linear process from equality to inequality. Ancient societies appear to have experimented with many models. Foragers experimented with farming, then let it go. Large urban settlements arose in the absence of agriculture. Farming societies remained egalitarian for centuries. Cities dominated by élites abandoned the construction of pyramids and temples to embark on large-scale social housing projects. There are even documented societies that lived in towns, and farmed, during the winter, and in small bands of hunter-gatherers during the summer.

The authors insist that all this happened because the people who made up those societies wanted it to. They were politically sophisticated and reflective. They knew that they could shape their institutions in ways that preserved their freedom and well-being. Part of getting this right was making sure people did not have too much power over one another: in our modern terms, that people would be equals.

Ancient societies were not stuck. Their members were free to roam, and were subject to very little coercion. But entire societies were also free in another sense, that of shaping arrangements that made people’s lives better. We moderns, instead, are very stuck.

And that brings me to the Sci-Fi Economics Lab. When my partners and I dreamed up the Lab, in 2019, we had no idea what Graeber and Wengrow were up to. Yet, like them, we felt stuck. We felt crippled by our inability to imagine living under any system other than late-stage capitalism. To heal, we turned to the imagined futures of science fiction, and to economics as an angle of attack. It was a good choice: we have come some way towards freeing our imagination. Today, sci-fi stories set in the world of Witness give us a glimpse of everyday life in post-capitalist systems. Graeber and Wengrow appear to have taken the opposite route, into humanity’s distant past. But we share the same curiosity, and the same conditional optimism, and I know them for kindred spirits.

Sociopathic innovation: how we are investing most in the most evil technologies (LONG)

TL;DR

Artificial intelligence and the blockchain are the two main technological hypes of the past fifteen years. Both were hailed as technologies with the potential to solve many problems and change the world for the better. It now looks like their impact is overwhelmingly negative. Though they could be used for the common good, it turns out they are not very good at that. They are better, far better, at harming humans than at helping them. They encode dystopian, sociopathic world views, and tend to attract developers, investors and entrepreneurs who share those world views. So, once deployed, they tend to bring the world closer to those views. They are sociopathic tech. This is disturbing, because almost everyone fell for them: investors, developers, entrepreneurs, academics, government officials. I call for a re-examination of the achievements of these technologies and of the impact they are having on our lives and our societies. I would like support for innovation to depend on how new technologies improve the well-being of humans and of the planet, and only on that. In what follows, I review some of the facts as a discussion starter.

Of how Artificial Intelligence excels at everything, except solving problems that matter

I recently had the opportunity to be exposed to the work of Juan Mateos-Garcia, a leading data scientist. Juan and his team had been looking at a large dataset of science papers published on the topic of Artificial Intelligence (AI). Their results look like this:

  1. AI has been undergoing a revolution since about 2012, when deep learning started to systematically outperform established techniques.
  2. Scientific production (papers) is booming. AI is shaping to be a general-purpose technology, like electricity or computing itself.
  3. Industry interest is evident. Many top scientists have been recruited from academia into industry. Venture capitalists have moved to invest in AI startups. Major governments are underwriting large public investments. There is talk of an “AI arms race” between China, the USA and the EU.
  4. AI is dominated by a small number of organisations and geographic clusters. Diversity of its workforce has stagnated.
  5. AI has had no impact on the effort to beat back the COVID-19 pandemic. In fact, all other things being equal, a paper on COVID is more likely to be cited by other papers if it is not about AI.

This final point gave me pause. Something was off. Why would AI not make a valid contribution to fighting the COVID plague? The conditions all seemed to be in place: there was, and still is, plenty of funding for research on COVID. There is a large, inelastic demand for the applications of that research, like vaccines. There is plenty of training data being generated by health care institutions the world over. And, if AI is a general-purpose technology, it should apply to any problem, including COVID. The most exciting technology of the moment somehow failed to contribute to solving the most pressing problem of the moment. Why is that?

I can imagine a world where AI is deployed to help in the fight against a pandemic. We would use it to engineer a more targeted response to the risks of contagion. Granular risk scores could be associated with individual people and different situations, allowing society to protect the most vulnerable people from the riskiest situations, while leaving low-risk individuals in low-risk contexts free to get on with their lives.

Sounds good, but that world is not the one we live in. In our world, AI-powered, individually customized COVID restrictions would run into intractable problems. First, the algos would seize on the high correlation between different socio-demographic variables, and decide that poor people, people of color and (in America) Trumpists are more prone to the contagion, and should stay at home more than white, affluent liberals. The discriminated-against groups would fight back, challenging the algos as biased, starting litigation and calling for civil disobedience, as is happening time and time again. Second, even if there were no conflict and everybody trusted the algos, it is not clear how we would use the predictions they make for us effectively. To begin with, there is the cognitive challenge of understanding them. You could tell someone something like this: “the risk of catching COVID on public transport for someone with your demographic profile went up 20% today, avoid the bus if you can”. But that is unlikely to work, because

  • Most people do not understand risk. For example, they are more scared of terrorist attacks than they are of car crashes, though the latter are far more frequent (hence more dangerous) than the former.
  • AI is Bayesian statistics, and as such it makes predictions not about you, but about somebody who is like you in a quantifiable way. It leaves out everything that makes you unique, putting it in the error term. For example, imagine you are a 45-year-old living in the Netherlands who is also an ultramarathoner. The algo computing your risk factor processes your age and the country you live in, because it has thick enough data in those dimensions. Your ultramarathons stay in the error term, because there are not enough people running ultramarathons for that activity to be tracked in its own variable. And yet, when looking at the overall resilience of your organism, this is clearly an important piece of information. The sketch below illustrates the mechanism.
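To make the error-term point concrete, here is a minimal sketch. All numbers, features and the model choice are mine, purely for illustration: a model trained only on coarse demographic columns has no way to see the rare trait, so the ultramarathoner gets the same score as everyone else in their cohort.

```python
# A minimal sketch (all numbers invented): a risk model trained only on
# coarse demographics cannot see traits it has no column for.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
age = rng.integers(20, 80, n)
country = rng.integers(0, 5, n)     # 5 hypothetical countries, coded 0..4
ultra = rng.random(n) < 0.001       # rare trait: ultramarathoners

# "True" risk depends on age, country AND fitness, but the training data
# only records age and country; fitness ends up in the error term.
logit = -4 + 0.04 * age + 0.2 * country - 2.0 * ultra
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([age, country]), y)

# A 45-year-old ultramarathoner gets exactly the same score as any other
# 45-year-old in the same country: the model cannot tell them apart.
print(model.predict_proba([[45, 2]])[0, 1])
```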

Given this situation, I suspect most people would end up following their own belief system rather than the algo’s recommendations. People who fancy themselves strong and resilient might say “yes, this gizmo is predicting high risk, but it is not talking about me, I am healthier and stronger than most!”. Or, vice versa, “yes, a low risk is predicted for outdoor mingling, but with my history of allergies I still don’t feel safe”. This is de facto happening right now with the way people process scientific findings about COVID-19. Some people prefer to trust their own immune systems over the pharma-punditry complex. Others made COVID restrictions into some kind of weird religion, following them “above and beyond” even when science is calling for their relaxation. Even if a good AI-powered risk prediction system were in place, many humans are way too irrational to take full advantage of it. They prefer simple rules, applicable to all: “1.5 meters”, “wash your hands” and such. The promise of AI, providing personalized recommendations to each and every one of us, clashes with the human need for stability and security. In conclusion, AI had no grip on COVID, and is unlikely to have any grip on any similar high-stakes problem. So, what is AI good for? We can start with the applications already being developed:

With the exception of machine translation, these applications are all detrimental to human well-being, for world-eating values of “detrimental”. We are seeing yet another example of Kranzberg’s First Law in action: AI is not good, nor is it evil; nor is it neutral. It could be used for good, though I am unconvinced it would work very well; but it is when you use it for evil, dehumanizing purposes that it really shines. That such a potentially toxic technology is attracting so much attention, public funding and private investment is a spectacular societal and policy failure. And that brings me to the blockchain.

Of the blockchain and its discontents

The blockchain, as by now everyone has had to learn, is the name of a family of protocols that store data not in a single repository, but in many. Using cryptography, the different computers that adhere to the same protocol validate each other’s copy of the database. This prevents a “rogue” participant from altering the records, as the alteration would only be present in a single computer and would not be validated by the others. The system was first proposed to solve a problem called double spending when no trusted, centralized authority is present. The sketch below illustrates the core idea.
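For readers who like code, here is a toy sketch of that core idea, not any real blockchain protocol: each block stores the hash of the previous one, so a unilateral edit anywhere makes the tampered copy fail validation by the other computers.

```python
# Toy sketch of the core blockchain idea (not a real protocol): each block
# stores the hash of the previous block, so altering any record invalidates
# every block that follows it.
import hashlib

def block_hash(prev_hash: str, payload: str) -> str:
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def build_chain(payloads):
    chain, prev = [], "0" * 64                  # genesis placeholder
    for p in payloads:
        h = block_hash(prev, p)
        chain.append({"payload": p, "prev": prev, "hash": h})
        prev = h
    return chain

def validate(chain) -> bool:
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or block_hash(prev, b["payload"]) != b["hash"]:
            return False
        prev = b["hash"]
    return True

chain = build_chain(["Alice pays Bob 5", "Bob pays Carol 2"])
print(validate(chain))                          # True
chain[0]["payload"] = "Alice pays Bob 500"      # a "rogue" participant edits
print(validate(chain))                          # False: peers reject the copy
```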

That was in 2008. In the 13 years since, blockchain solutions have been proposed for many, many problems. To my knowledge, none worked, or at least none worked any better than competing solutions that used a more conventional database architecture. This makes sense, because blockchains are self-contained systems. They use cryptography to certify that in-database operations took place, but they cannot certify anything that exists outside the database. Any system based on a blockchain relies on external sources of information, known as “oracles”. For example, if you were to build an identity system based on the blockchain, you would have to start by associating your name, date of birth and so on to a long string of digits. Once stored on the blockchain, the association is preserved, but some external “oracle” has to certify it before it gets stored. In the absence of a credible external certification, the system could work technically, but it would produce no impact. I could create my own identity system, but no one would use it, because I am not trustable enough when I issue a digital ID in your name. There are entities with the trustability to start such a system, for example major governments. But, because they are trustable, they do not need the blockchain at all. I have lost count of the technologists who told me:

Any technology which is not an (alleged) currency and which incorporates blockchain anyway would always work better without it. (source)

But the blockchain is not just another clever technical solution in search of a problem to solve. I argue it is a major source of problems in itself. Consider this:

  • The distribution of Bitcoins is extremely unequal, with a Gini coefficient estimated at 0.95 in 2018 (theoretical maximum: 1; Lesotho, the most unequal country on the planet for which we have data: 0.65; see the sketch after this list for how such a coefficient is computed). In fact, inequality seems to be a feature of blockchains, not just of Bitcoin – for example, it is estimated that the bulk of the monetary value conjured by Ethereum-based non-fungible tokens (NFTs) is appropriated by “already big-name artists and designers”.
  • Blockchains use a lot of power. Every update anywhere in the system needs to be validated by network consensus, which involves a lot of computers exchanging data. Bitcoin alone consumes about 150 terawatt-hours per year, more than Argentina. Providing computing power to the Bitcoin network is rewarded in Bitcoins, through a process known as “mining”: this provides the incentive to underwrite all this computation. In a bid to make what they see as easy money, Bitcoin miners have resorted to malware that infects people’s computers and gets them to compute SHA-256 hashes, and to mining code smuggled into the builds of open-source software projects; they have resurrected mothballed power stations that burn super-dirty waste coal; installed mining operations in Iranian mosques (which get electricity for free); and engaged in plain stealing. Their carbon footprint is enormous: one Bitcoin transaction generates the same amount of CO2 as 706,605 swipes of a Visa credit card. Some blockchains have less computationally expensive systems of verification, but they are still more energy- and CO2-intensive than traditional databases.
  • Fraud – especially to the detriment of less experienced investors – is rampant in crypto.
  • Crypto has provided a monetization channel for ransomware attacks. Ransoms are demanded and paid in Bitcoin, untraceable by Interpol. Some observers go so far as to claim that the price of Bitcoin is tied to the volume of ransomware attacks. Hospitals and other health care institutions are among the main targets of these attacks: not only do they have to pay money, but their IT systems shut down, threatening the lives of patients.
  • In 2021, tech companies that used to donate CPU power to legitimate projects had to stop doing so, citing constant abuse from crypto miners. It is worth quoting the words of Drew DeVault:

Cryptocurrency has invented an entirely new category of internet abuse. CI services like mine are not alone in this struggle: JavaScript miners, botnets, and all kinds of other illicit cycles are being spent solving pointless math problems to make money for bad actors. […] Someone found a way of monetizing stolen CPU cycles directly, so everyone who offered free CPU cycles for legitimate use-cases is now unable to provide those services. If not for cryptocurrency, these services would still be available. (source)
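As promised in the first bullet above, here is a minimal sketch of how a Gini coefficient such as that 0.95 estimate is computed. The toy balances are invented, not real Bitcoin data.

```python
# Minimal sketch: computing a Gini coefficient from a list of balances
# (toy numbers, not real Bitcoin data). 0 = perfect equality; values near
# the maximum mean one holder owns almost everything.
import numpy as np

def gini(balances):
    x = np.sort(np.asarray(balances, dtype=float))
    n = x.size
    # Standard formula on the ascending-sorted values.
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

print(gini([10, 11, 9, 10, 10]))   # ~0.03: near-equal balances
print(gini([1, 1, 1, 1, 996]))     # ~0.80: one "whale" holds nearly all
```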

In return for this list of societal bads, so far, all the blockchain has to offer is a plethora of speculative financial assets: a casino. Which is also a societal bad, if you, like top innovation economist Mariana Mazzucato, believe that the economy is overfinancialized, and that policies should be put in place to roll financialization way back.

The blockchain is, overall, a net societal bad: it consumes resources to deliver a casino. Humanity would be better off without it. The picture gets even grimmer when you consider the opportunity costs: blockchain startups gobbled up an estimated 22 billion USD in venture capital funding from 2016 to 2021, very likely matched by various forms of government support, and that money could have been used in more benign ways. So, what’s going on here? Kranzberg’s First Law, yet again.

The original group of developers that rallied around Satoshi Nakamoto’s white paper had a libertarian ideology: they dreamed of a trustless society, where human contact is reduced to a minimum and anonymised, and they were obsessed with property rights. So, they built a technology that encodes those values, which in turn attracted more people who believe in those values. Code is law, they said. If someone can technically do something, that something is allowed, even moral, under some kind of tech version of social Darwinism. When the DAO was hacked in 2016, by exploiting a vulnerability in its code on the Ethereum blockchain, the perpetrator bragged about it: if I stole your money, it’s your own fault, because code is law. I am just smarter than you, and I deserve to walk away with your money.

Trustless societies do exist – the mob is one of them. But they are not a good place to live. Economists and social scientists think of trust as social capital, and seek ways to build it up, via accountability and transparency. Again, the blockchain could conceivably be used for something good, but in practice almost all of its uses contribute to making the world a worse place, while making money for the top 0.1% of crypto holders. This is because the tech itself embodies evil values, and because the social coalition behind it upholds these values. Don’t take it from me, take it from open source developer Drew DeVault:

Cryptocurrency is one of the worst inventions of the 21st century. I am ashamed to share an industry with this exploitative grift. It has failed to be a useful currency, invented a new class of internet abuse, further enriched the rich, wasted staggering amounts of electricity, hastened climate change, ruined hundreds of otherwise promising projects, provided a climate for hundreds of scams to flourish, created shortages and price hikes for consumer hardware, and injected perverse incentives into technology everywhere. (source)

Or writer and designer Rachel Hawley:

NFTs seem like an on-the-nose invention of an anticapitalist morality play: a technology that delivers exponential gains to those already at the top by convincing everyone to collectively imagine that free, widely distributed artwork is actually a scarce commodity, all while destroying the _actual_ scarce resources of our planet. (source)

Or economist Nouriel Roubini, testifying to the U.S. Senate:

Until now, Bitcoin’s only real use has been to facilitate illegal activities such as drug transactions, tax evasion, avoidance of capital controls, or money laundering. (source)

Of how and why we are bad at supporting the right innovation

Why are the two most hyped technical innovations of the past 20 years, the blockchain and artificial intelligence, diminishing human well-being instead of enhancing it? Why are we investing in things that make our problems worse, when the world is facing environmental collapse? My working hypothesis is that the financial world will put money into anything that promises returns, with little humanitarian concern. Financiers lead the dance; and governments the world over have been captured into supporting anything that promises GDP growth. If I am right, it is important to decouple support for innovations from their growth implications, and throw our institutional support behind technologies that uphold human well-being over capital growth. Jason Hickel has some interesting thoughts on this in his book Less is More, and Mazzucato has forcefully made the point across the arc of her work. Time will tell; and I am confident that better minds than mine will cast more light onto the matter. But this question can no longer wait, and if you are working in one of these two tech ecosystems, you may want to ask your employer, and yourself, some hard questions.

Update 1

Thanks to all the fine folks who reacted to this piece and gave me useful suggestions. Many people pointed out counterexamples (I owe this particularly nice one to Raffaele Miniaci). But of course, it is not a matter of finding counterexamples, but of assessing the overall net impact of this particular bit of technological development on society. My answer may be wrong, but I am fairly confident that my question is right.

Another objection comes from Yudhanjaya Wijeratne, who says that, without giving a definition of AI, the whole first part is meaningless. I went back to Mateos-Garcia’s definition, which he borrowed from Brian Arthur:

Machines able to behave reasonably in a wide range of circumstances.

Depending on how you interpret “reasonably” and “wide”, this indeed captures everything from deep learning for facial recognition to the individually trained spam filter in my personal install of Thunderbird. The reason for this choice is probably that it enables a statistical test for structural change: in 2012 everything changed, more or less at the same time as an influential paper by Krizhevsky et al. was published. Output of AI papers went way up.
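For the statistically minded, here is a minimal sketch of such a structural-change (Chow-type) test. The yearly paper counts are invented, and this is only my reconstruction of the kind of test involved, not Mateos-Garcia’s actual method: we ask whether letting the trend change in 2012 fits the series significantly better than a single trend.

```python
# Sketch of a Chow-type structural-break test on yearly paper counts.
# The series below is invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(2000, 2021)
papers = np.where(years < 2012,
                  100 + 5 * (years - 2000),
                  160 + 40 * (years - 2012)) + rng.normal(0, 8, years.size)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

t = (years - 2000).astype(float)
post = (years >= 2012).astype(float)
X_pooled = np.column_stack([np.ones_like(t), t])                  # one trend
X_break = np.column_stack([np.ones_like(t), t, post, post * t])   # break in 2012

rss_pooled, rss_break = rss(X_pooled, papers), rss(X_break, papers)
n, extra = years.size, 2            # the break model has 2 extra parameters
F = ((rss_pooled - rss_break) / extra) / (rss_break / (n - 4))
print(F)   # a large F statistic is evidence of a structural break at 2012
```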

I am looking for a socio-economic definition, not a technological one. These technologies each catalyzed a “scene” of researchers, companies, investors, governments and so on. What values and visions do these scenes embed? What do they want? The libertarian streak of the blockchain gang is clear. With AI, this is less obvious, because AI has a much longer history, and you cannot define it technologically. I guess that when I talk about “AI” in this article, I refer to its post-2012 scene: fuzzy, but still quite identifiable. This excludes the spam filter on my e-mail client, and should take care of Yudhanjaya’s objection. It also raises concerns, given the surveillance-authoritarian streak that this scene has.

Update 2, 2024-11-04

Some time has gone by since this post, and we now know a bit more about real-world use cases of AI. Cory Doctorow has provided a helpful summary. It is a bit of a black book, unfortunately. An excerpt is copied below; or you could read the entire post on his blog.

The real AI harms come from the actual things that AI companies sell AI to do. There’s the AI gun-detector gadgets that the credulous Mayor Eric Adams put in NYC subways, which led to 2,749 invasive searches and turned up *zero* guns:

https://www.cbsnews.com/newyork/news/nycs-subway-weapons-detector-pilot-program-ends/

Any time AI is used to predict crime – predictive policing, bail determinations, Child Protective Services red flags – they magnify the biases already present in these systems, and, even worse, they give this bias the veneer of scientific neutrality. This process is called “empiricism-washing,” and you know you’re experiencing it when you hear some variation on “it’s just math, math can’t be racist”:

https://pluralistic.net/2020/06/23/cryptocidal-maniacs/#phrenology

When AI is used to replace customer service representatives, it systematically defrauds customers, while providing an “accountability sink” that allows the company to disclaim responsibility for the thefts:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

When AI is used to perform high-velocity “decision support” that is supposed to inform a “human in the loop,” it quickly overwhelms its human overseer, who takes on the role of “moral crumple zone,” pressing the “OK” button as fast as they can. This is bad enough when the sacrificial victim is a human overseeing, say, proctoring software that accuses remote students of cheating on their tests:

https://pluralistic.net/2022/02/16/unauthorized-paper/#cheating-anticheat

But it’s potentially lethal when the AI is a transcription engine that doctors have to use to feed notes to a data-hungry electronic health record system that is optimized to commit health insurance fraud by seeking out pretenses to “upcode” a patient’s treatment. *Those* AIs are prone to inventing things the doctor never said, inserting them into the record that the doctor is supposed to review, but remember, the only reason the AI is there at all is that the doctor is being asked to do so much paperwork that they don’t have time to treat their patients:

https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14

My point is that “worrying about AI” is a zero-sum game. When we train our fire on the stuff that isn’t important to the AI stock swindlers’ business-plans (like creating AI slop), we should remember that the AI companies could halt all of that activity and not lose a dime in revenue. By contrast, when we focus on AI applications that do the most direct harm – policing, health, security, customer service – we *also* focus on the AI applications that make the most *money* and drive the most investment.


On Modern Monetary Theory, science fiction and where to look for the violence

I have been aware of the existence of Modern Monetary Theory for a while. It’s hard not to be, especially now. SARS-COV-2 has convinced central bankers and heads of government to magick into existence trillions of euros in relief packages almost overnight, with very little hand-wringing over public deficits. In a way, we live in an MMT world now.

But I am lazy, and so I waited for the release of a book aimed at the general public as a primer. We now have one: Stephanie Kelton’s The Deficit Myth, released last week straight onto the New York Times bestseller list. I have read it. I imagine some of you have, too. So, we are ready to consider MMT as a potential building block of sci-fi economics.

About Modern Monetary Theory

MMT’s main idea is: currency issuers can never, by definition, run out of the currency they issue, as long as that currency is “full fiat”, not pegged to something else (like gold or another currency). This has profound implications:

  1. A currency-issuing government does not spend tax revenue. Rather, it spends money into existence and taxes it out of existence.
  2. A currency-issuing government’s budget deficit is just a number on a spreadsheet, and has no economic significance.
  3. If a currency-issuing government issues the currency commonly accepted as payment for international trade, its foreign trade deficit is also just a number that has no economic significance. In today’s world, that would be the USA.
  4. Fiscal policy, not monetary policy, is the main tool for government intervention in the economy. Used well, it opens up a much broader array of outcomes than we are accustomed to seeing. More on this later.
  5. Inflation is a potentially serious problem, because a currency-issuing government could in principle stoke up a demand for more resources than are available in the economy, pushing prices up.
  6. However, inflation control as we do it today is inefficient. Moreover, it is inhumane. In many countries, authorities target a rate of unemployment that they think will not cause inflation (NAIRU). In Kelton’s vivid words, this policy “uses people as human shields against inflation”.

I find MMT compelling. It’s not even a theory, exactly: Kelton calls it a description. It’s based on accounting identities and on careful consideration of the concrete legal mechanisms whereby the US Congress authorizes federal public expenditure, and the Federal Reserve issues and buys back securities. These are not theories or opinions, but facts. I cannot find any contestable claims here. So, at least for now, I accept that MMT holds true.
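To give a flavor of the accounting identities involved, here is a minimal worked example of the sectoral balances identity, a standard piece of national accounting often invoked in MMT arguments. The numbers are invented.

```python
# A minimal sketch (invented numbers) of the sectoral balances identity.
# From the national accounts,
#   Y = C + I + G + (X - M)   and   Y = C + S + T,
# it follows that (S - I) = (G - T) + (X - M): private net saving equals,
# to the penny, the government deficit plus the trade balance.
C, I, G, X, M = 600.0, 200.0, 300.0, 80.0, 100.0   # spending flows
T = 250.0                                           # taxes

Y = C + I + G + (X - M)        # national income
S = Y - C - T                  # private saving

private_net_saving = S - I     # 30.0
government_deficit = G - T     # 50.0
foreign_balance = X - M        # -20.0

assert private_net_saving == government_deficit + foreign_balance
print(private_net_saving, government_deficit, foreign_balance)
```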

MMT and science fiction economies

Understanding the Public Service Employment program

I propose that some of the science-fictional economies we have been looking at are a good fit for MMT. It seems likely that those worlds are “MMT worlds”. To make this argument I have to go a bit deeper into MMT’s policy prescriptions.

MMT economists are fans of automatic stabilizers. These are components of public expenditure that react to the economic cycle with no need for decision making. Taxes are an example: if the economy slows down and our income declines, our tax bill also declines, helping us get through the difficult period.

The main policy prescription of The Deficit Myth is an unusual type of automatic stabilizer: a government job guarantee. The idea is this: the federal government hires anyone who is out of a job. It pays a not-very-attractive salary, but still a decent one, with health care and paid leave. When the economy is booming, it is easy to find private sector jobs that pay better, so few people want those government jobs. In a recession, though, many more people would take them rather than stay unemployed. The number of people in these federal jobs thus goes up and down with the economic cycle, with no decision needed. This means perpetual full employment, which in turn means more buoyant consumption. This would help businesses get through the recession in a less traumatic way, and workers avoid the great suffering and productivity decline associated with long-term unemployment. The toy simulation below illustrates the mechanism.
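Here is that toy simulation, with invented numbers throughout: a sketch of the buffer-stock idea, not of any specific proposal. Public-service employment expands in recessions and shrinks in booms, with no discretionary decision needed.

```python
# Toy simulation (all numbers invented) of the job guarantee as an automatic
# stabilizer: public-service employment moves opposite to the private cycle,
# so measured unemployment stays at zero throughout.
import numpy as np

rng = np.random.default_rng(42)
labor_force = 100.0
private_jobs = 92.0

for year in range(2020, 2030):
    # Private employment drifts with a noisy business cycle.
    private_jobs = float(np.clip(private_jobs + rng.normal(0, 2.5), 80, 98))
    # Whoever is not hired privately takes the guaranteed public job.
    pse_jobs = labor_force - private_jobs
    print(f"{year}: private {private_jobs:5.1f}  PSE {pse_jobs:4.1f}  unemployed 0.0")
```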

Ok, but in practice what would these federal workers do? Kelton:

Several MMT economists have recommended that the jobs be oriented around building a care economy. Very generally, that means the federal government would commit to funding jobs that are aimed at caring for our people, our communities, and our planet.

There is a detailed proposal for such a program in the USA (report by Wray et al.), called Public Service Employment (PSE). Its main policy objective is of course employment itself, but there is a list of additional ones:

  • To guarantee a basic human right to a job, as outlined in the UN Declaration of Human Rights and President Franklin D. Roosevelt’s call for an economic bill of rights.
  • To implement an employment safety net. […]
  • To serve the public purpose. […]
  • To be used as a vehicle for addressing other social ills—urban blight, environmental concerns, etc.

Only the federal government, as the currency issuer, can fund the PSE. But both Kelton and Wray insist that it should be up to the states and communities to decide what constitutes “public service” for them.

PSE is the cornerstone of MMT’s policy: if past experiences are anything to go by, it could employ between 5% and 25% of the labor force at any given time. That is a lot of people, and what they do matters. If we could really deploy this much workforce towards nonmarket objectives, there would be a lot we, as a society, could do.

Mariana Mazzucato rightly claims that innovation has not only a rate, but also a direction. MMT is compatible with expanding that statement: the whole economy has a direction, not just innovation. Given monetary sovereignty, policy makers can and should target objectives, or “missions” as Mazzucato prefers to say, that are not economic per se: go to the Moon, eliminate child poverty, beautify cities, reclaim ecosystems, aggressively abate CO2 emissions, and so on. This is what makes MMT so useful for sci-fi authors, and so attractive to me.

Provisioning, not paying

Kelton insists that, when it comes to public spending, “How will you pay for it?” is a meaningless question. Since currency-issuing governments create their own currency, by definition they pay for everything in the same way: they credit the Treasury account in the Central Bank. Treasury then goes on to use that account’s balance for paying salaries and bills. But there is a similar, meaningful question: “how will you provision it?” Which means: never mind financial resources, do the real resources actually exist to do what we want to do? Do we have enough skilled people, tons of steel, gigawatts of energy etc. to achieve our objectives? Are these resources lying fallow, or will we be competing for them with the private sector?

Kelton quotes excerpts from the speech President Kennedy addressed to Congress to ask it to approve the Apollo program. Kennedy used it to reassure representatives that America could land a manned mission on the Moon’s surface: the skills were there, the manufacturing capacity was there. He never mentioned money – he knew money was not the issue.

A more passing reference is made to the WW2 wartime effort, the only time when America really achieved full employment. Again, what mattered to the strategists was provisioning the military: how many tanks can we make in a month? But wait, to bring them to Europe we will need extra ships – how many of those can we make? That depends on how many people we can hire in the shipyards and in the steel mills supplying them, which in turn depends on how much food and housing we can produce for the extra workers in those areas, and so on.

This is how the better thought-through science fiction economies work. Take Kim Stanley Robinson’s Mars Trilogy: in the first book, a hundred people and a lot of heavy industrial equipment land on Mars. Since on Mars there is nothing to buy, what they can do is limited by their resources.

In order to do anything (say, raising the atmospheric pressure as a first step towards terraforming) they need a habitable environment that protects them from cosmic radiation (or they will all soon develop cancer, and dead people do not terraform). But to build a habitable environment they first need to drill tunnels in the regolith, make enough air to pressurize them, and produce oxygen to make that air breathable. This requires energy and plants. Fortunately, they brought nukes and a space greenhouse from Earth, but they need to manage them carefully across other possible (and competing) uses… you get the idea. Most of the Martian economy and society we see in the second and third books (except for people, since at some point Mars has strong immigration) are an outgrowth of the materiel and personnel landed in that one ship.

In economic terms, the Martian colonists are working with something similar to a Leontief matrix. So are the walkaways in Cory Doctorow’s *Walkaway*. The latter can scavenge the default economy for unwanted resources, but at the end of the day they have to produce their own food, energy, vehicles and communication networks, with these things being both products and production inputs to other goods and services. Having transitioned to a moneyless economy, they face constraints in terms of real resources.
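For the curious, here is a minimal sketch of the Leontief input-output arithmetic; the goods and coefficients are invented. The point is that to consume a final-demand bundle d, the colonists must produce the larger gross output x that also covers what production itself eats up.

```python
# Minimal Leontief input-output sketch (all coefficients invented). With a
# technology matrix A, where A[i, j] is the amount of good i needed to make
# one unit of good j, meeting final demand d requires gross output x such
# that x = A @ x + d, i.e. x = (I - A)^(-1) @ d.
import numpy as np

# Goods: 0 = energy, 1 = food, 2 = machinery.
A = np.array([[0.10, 0.20, 0.30],
              [0.05, 0.10, 0.05],
              [0.10, 0.15, 0.10]])
d = np.array([50.0, 80.0, 20.0])   # what the colonists want for final use

x = np.linalg.solve(np.eye(3) - A, d)
print(x)  # gross output required, counting what production itself consumes
```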

Other fictional worlds in sci-fi put less explicit emphasis on Leontief-like input-output planning techniques. Still, they set themselves civilizational goals, and then shape their economies so that those goals can be attained. For example, the Acquis in Bruce Sterling’s The Caryatids is basically a gigantic operation to reclaim ecosystems lost to climate change and other man-made disasters. Earth superpowers in Paul McAuley’s Quiet War books and the Utopian Hive in Ada Palmer’s Terra Ignota have a similar attitude. All of these are much more compatible with MMT than with standard-issue neoclassical economics.

Where is the violence?

MMT could be an important piece of the completely different economic system so many of us are longing for. This is why we need to make sure we fully understand the conditions for it to work. Which brings me to the violence.

Vinay Gupta taught me to look for the violence implicit in societal and economic arrangements. This is important for those of us lucky enough to enjoy relative safety, stability and comfort, because it is tempting to assume that everyone is OK when we are. “The war has started – Vinay would say – and you did not notice because your side is winning.” So, where is the violence in an MMT world?

Why money is useful

According to MMT, a currency-issuing government can never run out of the currency it itself issues. Moreover, that government is sure that everybody will always want more of that currency. Why? Because it demands that people pay taxes to it, and those taxes must be paid in the government’s own currency. Why does this make the currency attractive? Because the government has the power, and the will, to harm those who refuse to pay taxes. According to MMT, taxes are not where government gets its money, because governments issue their own currency. They are there to make sure people accept that currency as payment. Without the threat of violence, there is no currency in the MMT sense.

This view is fully consistent with the historical evidence on how cash money was invented and adopted. I learnt it from David Graeber’s fantastic Debt: The First 5,000 Years. Here’s how it works: the Athenian army engages in imperialistic expansion wars in the Aegean Sea. The problem is provisioning the army during the invasion, with the home agricultural land too far away. The solution is this: the army attacks a rival city, pillages the gold from its temples, divides it up into small lumps, and gives them to the soldiers. At the same time, it announces that it is going to extract a tribute, in gold, from the occupied city. Athenian soldiers then walk up to farmers and exchange their gold for food. Farmers collect the gold and give it back to the Athenian occupation administration, which uses it to pay its goons and start the cycle all over again. Voilà: the occupied are now provisioning the occupiers. Without violence, there is no cash.

The continuum of monetary sovereignty

The Deficit Myth repeats several times that MMT only applies to governments with “monetary sovereignty”. It then goes on to specify that monetary sovereignty “is best thought of as a continuum”. A government has it if:

  1. It issues its own fiat, floating currency. This excludes local and city governments; states that use the currency of other states (like Ecuador, which uses the US dollar); states whose currency is pegged to the currency of other states (like Argentina before the corralito crisis); and the Eurozone countries, since the ECB, not they, is the issuer of the Euro.
  2. It does not carry heavy debt denominated in currencies other than its own. This excludes many middle- and low-income countries, like Mexico, Brazil and Indonesia.
  3. And then there is full monetary sovereignty. This term describes the USA’s unique position as the issuer of the currency used in international payments. It can ignore not only its internal budget deficit, but also its foreign trade deficit. In fact, the issuer of the global currency must run a trade deficit, otherwise there would not be enough of that currency to carry out international trade. This is known as the Triffin dilemma.

It is easy to see that monetary sovereignty is highly correlated with sovereignty tout court. The stronger your economy, diplomacy and military, the more complete your monetary sovereignty. And the USA has by far the strongest military in the world. A good reason to accept the US dollar is that, if push comes to shove, the US might make you. A country could refuse to accept dollars as payment, but it would probably suffer some diplomatic pressure, at least. It has even been claimed that the American invasion of Iraq was motivated by that country’s announcement, in 2000, that its oil exports were henceforth to be paid in euros. Without a big military, there is no global currency.

To summarize…

MMT is an elegant, robust, pragmatic body of work in economics. It is heterodox, but solid and difficult to refute. It enables much more directionality in how we run our economies, and it allows for a broader array of outcomes, including full employment. Its attention to real (rather than monetary) resources makes it a good candidate for running a sci-fi economy, especially in planet colonization scenarios.

At the same time, the acceptance of currency in MMT is predicated on the threat of violence. This is not to say that competing approaches (say, money supply theory) are any less violent. Nevertheless, when incorporating it in the systems we imagine in the Science Fiction Economics Lab, we need to pay attention to this violence, and make sure it is exercised with appropriate restraint, if at all.

Reposted from Edgeryders with minor modifications. Image: By Avij (talk · contribs) – Own work, Public Domain, https://commons.wikimedia.org/w/index.php?curid=30112364