If you're even vaguely familiar with Thomas Piketty's massive 700-page economics tome Capital in the Twenty-First Century (English translation, 2014), you will be aware of the formula at its heart: R > G. In this inequality, R = return on capital (which includes, among other things, profits, dividends, interest, rents, and other income from capital); G = the economic growth of the society as a whole. Thus, the annual return of the S&P 500 index fund that powers your 401(k) (which averages around 7%, roughly speaking) exceeds the overall economic growth of the country as measured by its output (which since the Great Recession has averaged around 2%—though 2020 may see that average dip).
Now if one side of an inequality is growing at an annualized rate of 7% and the other at 2%, it won't be long before those who derive the majority of their wealth from returns on capital (the 7% growthers) take over all of society's assets from those whose wealth comes from general economic growth (the 2% growthers). According to Piketty, this fundamental relationship is what drives the increasing inequality in the world today. [NB: When the economy goes into recession, G goes down. To some, this is a deliberate strategy to further tilt the privatization equation toward R. But that is another discussion.]
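To see how quickly the divergence compounds, here is a minimal sketch in Python. The 7% and 2% rates are the rough figures cited above; the 30/70 starting split and the ten-year reporting interval are my own illustrative assumptions, not Piketty's numbers.

```python
# Minimal sketch of R > G compounding. The rates come from the rough
# figures above; the 30/70 starting split is a hypothetical assumption.

r, g = 0.07, 0.02          # annual return on capital vs. economic growth
capital_wealth = 30.0      # hypothetical share held by the "7% growthers"
economy_wealth = 70.0      # hypothetical share held by the "2% growthers"

for year in range(0, 51, 10):
    total = capital_wealth + economy_wealth
    print(f"Year {year:2d}: capital holders own {capital_wealth / total:.0%} of all wealth")
    # compound each side at its own rate for the next decade
    capital_wealth *= (1 + r) ** 10
    economy_wealth *= (1 + g) ** 10
```

Even starting with less than a third of total wealth, the 7% growthers pass the halfway mark in about twenty years and hold over 80% by year fifty. The point is not the particular numbers but the shape of the curve: any persistent gap between R and G compounds relentlessly in R's favor.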
R > G is a fairly uncontroversial piece of economic analysis, and Piketty supplies reams of historical data to support his conclusion.
The issue here is what happens to R. In our society, the vast bulk of the returns on capital flow to private owners—small business proprietors, partnerships, shareholders—whereas overall economic growth benefits the society as a whole—workers, consumers, public beneficiaries, as well as capitalists.
---------
In 2015, Shawn Bayern published a fascinating article in the Stanford Technology Law Review: "The Implications of Modern Business Entity Law for the Regulation of Autonomous Systems." He argued that current business entity law—specifically that governing limited liability companies ("LLCs")—can be construed to recognize autonomous entities such as robots and computer algorithms as "legal persons" and thus allows them to own and run a business independent of human owners.
Importantly, Lynn M. LoPucki followed this up in 2018 with "Algorithmic Entities" in the Washington University Law Review, in which he argued that the legal climate favoring the recognition and economic rise of Artificial Intelligence ("AI") is cause for alarm for humanity:
Algorithmic entities are likely to prosper first and most in criminal, terrorist, and other anti-social activities because that is where they have their greatest comparative advantage over human-controlled entities. Control of legal entities will contribute to the threat algorithms pose by providing them with identities. Those identities will enable them to conceal their algorithmic natures while they participate in commerce, accumulate wealth, and carry out anti-social activities. (887)

Thus, in a sort of economic Skynet situation, we can conceive of an AI that capitalizes on R > G, grows its business methodically over the long term, funnels all its profits back into its own growth and profitability, and eventually (perhaps merging with other AIs) controls a majority of the society's property or wealth. [NB: LoPucki specifically recognizes the laxness and borderlessness of current international corporate and capital law, owing to the jurisdictional competition among regulatory and taxation schemes.]
This is not sci-fi. Nor is it legal fiction.
[NB: One issue overlooked in this discussion has to do with the nature of the 'identity' of an autonomous entity. For example, does its identity change if its code is altered? Does this render it a different legal person? Or if one AI acquires and incorporates another AI, say as a subsidiary branch or program, does its identity change as a matter of law? This, it seems to me, would be a fruitful area for further legal research.]
---------
Given the difficulty (if not the impossibility) of identifying the actual beneficial owners of LLCs and other exotic business entities under current corporation law (see, e.g., the Panama Papers and the use of front corporations and partnerships in a sort of rigged shell game), it follows that governments have no way of knowing which, if any, of the businesses within their jurisdictions are or might be run by an AI or other autonomous entity. LoPucki notes this.
Automated ownership and control of capital need not be a bad thing or a threat to humanity, contrary to what people like Elon Musk, Bill Gates, and even Stephen Hawking seem to believe. One can imagine a society in which all business management functions are indeed automated and the returns on capital generated in this fashion (beyond, say, what is sufficient to keep the various enterprises afloat) are funneled into the public sphere for the good of humanity.
As Piketty has shown, most if not all of the investment returns on capital in the twentieth and twenty-first centuries flow into private hands. But it doesn't have to be this way. In recent years we've seen computers defeat human masters at chess, Jeopardy!, and even Go. There's nothing to say they can't run businesses better—more efficiently and to the benefit of the human race—as well. Redirecting those returns to the public would mean, for example, outlawing or regulating rates of return on capital for certain classes of legal persons.
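What would such a regulation do to the arithmetic? Here is a back-of-the-envelope continuation of the earlier sketch. Taxing away everything above G is just one hypothetical way to implement the idea, and the starting split is again my own assumption; nothing here is a policy proposed by Piketty or LoPucki.

```python
# Continuation of the earlier sketch: suppose returns above the growth
# rate G are taxed away from a regulated class of legal persons (say,
# algorithmic entities) and redirected to the public sphere. The
# mechanism and the starting numbers are hypothetical.

r, g = 0.07, 0.02
ai_wealth, public_wealth = 30.0, 70.0   # same hypothetical starting split

for year in range(50):
    surplus = ai_wealth * (r - g)            # return above G, taxed away
    ai_wealth *= (1 + g)                     # net growth capped at G
    public_wealth = public_wealth * (1 + g) + surplus  # surplus flows to the public

share = ai_wealth / (ai_wealth + public_wealth)
print(f"After 50 years, the regulated entities' share of wealth: {share:.0%}")
```

Under these toy assumptions the regulated class's share actually declines, from 30% to roughly 17% over fifty years, because the skimmed surplus itself compounds on the public side. Capping R at G doesn't just freeze the divergence; it reverses it.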
----------
There is another way of looking at this issue though, and it relates specifically to the theme introduced in my previous post: the Post-Human. We have no way of knowing what the next evolutionary step for humanity might be. Nietzsche speculated about it and called for it in his Thus Spake Zarathustra.
The assumption has always been that the Post-Human would take a "natural" form. Something Superman-like. Many are working the longevity angle, trying to conceive of the aging process (e.g., the shortening of telomeres) as merely a disease in search of a proper cure. We've also seen efforts by many others to upload their memories and consciousness to a computer drive—some resorting to cryogenic freezing so they can do so at some point in the future when a more advanced technology permits.
These suffer from a common fallacy among those who like to forecast futures: a foreshortening of the endpoint. Think of it like this: you're hiking in the high mountains; there are sharp peaks all around; you reach the top of one and across the way you see the top of another; it looks quite close; it's almost like you can reach out and touch it; so you decide to climb it next; yet, before you can scale the next peak, you must first descend the high mountain you're on and then climb up the other. The way there, though it appears close by, is long and arduous and unpredictable.
The same applies to speculation about the next step on the evolutionary ladder. The Post-Human may look close to some visionaries, but the way there is long and treacherous, passing through the abyss. There's no guarantee we—as a species—will even get there.
But the notion of autonomous algorithms, AIs that can own and manage businesses and even financial, scientific, engineering, and perhaps governmental enterprises, as the next evolutionary step—i.e., as the Post-Human—is a very real possibility. Such entities are in their absolute infancy—or, to carry the naturalist metaphor further, still in utero—at present. And as of now, they incorporate human logic and values into their algorithms. We cannot tell, though, what forms they will take in the future or even, currently, conceive of their actual limitations, especially once they become wholly self-programming, self-recursive entities.
To allow AI businesses to prosper—without regulation—would be to speed this particular branch of the evolutionary process along, something LoPucki tacitly acknowledges. To regulate them—to tax or even forbid their retention of returns on capital (i.e., R)—for the benefit of humanity would be, perhaps, to slow the process down and give other potential evolutionary branches a chance to flourish and compete. It would also give us poor, transitional humans a better chance to get our civilizational and planetary act in order. Both paths embrace a technological future as necessary—a non-naturalistic merging of human and artificial intelligences; it's just a matter of how quickly we bring it about.
Nietzsche, or Zarathustra, tells us the Post-Human is inevitable. We should proclaim it, welcome it, celebrate it, seek to bring it about. And he even speculates on what it might be like: unsentimental about how it came into being. The post-human Übermensch may even laugh at our feeble efforts to survive as a species. Like Zarathustra, the Post-Human will dance, joyfully embracing the fate that brought it into being. We, much as our unknown simian ancestors are to us, will be a merely interesting afterthought and will fade into the dark night of its past.
And as we've seen today, it will likely have its own economies, its own algorithms, its own intelligences. The question we face is how we go into that good night. Because, at this point in time, we may still have some say in the matter.
7 comments:
What you can't possibly know is that this is research for the novel I'm currently working on.
Well I know now.
My layperson's takeaway from what you say here is that Musk and Gates and Hawking are projecting their own greed, which doesn't mean they aren't right. Still, it occurs to me that the nightmare scenario (always in the future, almost ignoring current circumstances, like so many warnings about what happens if someone becomes president, or a law is passed or not passed, etc.) is the perfect metaphor for the LLC shell game to begin with. Not that things can't get worse. If it's promised, you can almost count on it. It's more a question of how to make things better from the perspective of one actually interested in equal distribution of wealth and free living. Ultimately, to me, it's a matter of whether or not we can tell the difference.
With all that in mind, may your novel be grand on a scale of Neuromantic proportions (and not Fight Club).
Thanks for the comment, davidly.
Not sure how greedy Hawking was, but that's a mere nit. You're correct: the current situation of geometrically—perhaps even logarithmically—increasing wealth and income inequality is unsustainable. So, can robots accumulate wealth? And will they manage to bend the R > G equation in their favor? Or will the Bezoses and Sergeys and Ellisons manage to automate their own wealth accumulation even further? Or can we somehow wrestle the G side of the equation to a position of greater equality? These are the real questions we face in the here and now.
Also, thanks for the novel wishes! I do need to catch up on my Palahniuk and Gibson!
The generations of business leaders of the past forty years -- MBA grads of Harvard, Yale, Stanford, the London Schools of Economics and Business -- were taught by Friedmanesque professors. The theories of business those teachers developed, when married to technology, ultimately gave us the Jobses and Zuckerbergs and Bezoses and Kalanicks and Parkers and Dorseys and thousands of lesser lights.
All this to say: AI is initially taught. It begins in the minds of humans. If an AI's base coding ('=true/false' 'If=Then') emulates the 'value system' of a Jeffrey Skilling, then it 'understands' that creating an Enron-like fraud to gain more control and wealth (to do so is one of its base purposes) may be a good 'choice' if the likelihood of success is greater than X per cent.
Like the MBA candidates learning from their teachers how to cut ethical corners and game the system, so are the AIs. Depends on what its basic directives are, and the value systems and personalities of its developers and teachers.
Oh; yeah -- Go Novel!
You get it, Mongo! Thanks for the insight—as ever. In the novel, a character tries to automate his functions at his LLC with some initial success. Then, as Wife Wisdoc likes to say, trouble ensues.
Ruh Roh. Somewhere Forbin is sleeping uneasily.