These stories about AIs taking over the world often involve some computer that escapes the lab, or the like, in order to maximize a score. And indeed, it is true that taking over the world helps you maximize whatever score you want to maximize.
But mankind, as a whole, is a clippy optimizer already, albeit a manual one. Right now it is us who are destroying the life around us, replacing rainforest with cattle farms and suburban developments. We are already working on farming robots to serve our goals. Does the lizard care if it is crushed by a farming robot's wheel or an AI tank's tread?
We are not the strongest, nor the biggest, nor the fastest species, nor the one with the best hearing or eyesight. Still, we are the apex predator of this planet, thanks to our sole distinguishing feature, which is our intelligence and ability to cooperate. And we hardly notice it because economic growth seems normal to us.
These clippy stories speak of a not-unfounded fear that development might not stop with us: that one day some life form will spawn that is even smarter than us and will turn us, the current apex predators, into irrelevant bystanders who need to get out of the way so the more powerful life form can fulfill its objective, even if it was we who brought that life form into existence or gave it that objective.
The forests of today are not inhabited by the life form that first discovered photosynthesis, but by highly specialized structures that grow ever taller so they can put their competition in the shade. Similarly, just because we are the first highly intelligent and cooperative life form doesn't mean we will stay on top of the world forever.
I'm not saying this with a light heart; I do feel sorry for the future humans, some of whom might even be alive today. But "how to contain reward optimizers" is a tough question, especially since we are running a giant reward-optimization project ourselves.
> We are not the strongest, nor the biggest, nor the fastest species, nor the one with the best hearing or eyesight.
That is not strictly true. Even leaving aside the tools argument (a human on a bicycle is vastly more energy efficient over long distances than a wildebeest, for example), our sight and hearing are actually pretty good compared to the rest of the animal kingdom.
For a certain range of tasks, we have the best visual perception of any species. Our eyes themselves are pretty damn good for close-in and medium-range work, and the pattern recognition applied to that input is very, very good thanks to our big brains: think of detecting a tiger moving through the underbrush. Our color vision is also very good (at least in daylight), the better to pick out ripe fruit and such.
Our directional hearing is better than that of owls, who hunt at night for a living; we can measure the angle a sound source is coming from with around twice the accuracy of an owl. Again, that's thanks to big brains and very complicated audio processing. The folds in your ears are there for a reason. We aren't as sensitive as creatures like deer, though, which can steer their big ears (basically parabolic horns) toward a particular direction.
And our hand-eye coordination is far above that of any other species, even other primates.
Can't this just be chalked up to acquired skill made possible by superior intelligence? I.e., if other animals could practice and learn as effectively as us, perhaps a silverback gorilla or something could throw better than humans.
I don't think so. As far as I know, other primates' arms lack our range of motion – for example, chimpanzees are unable to touch the back of their heads; that makes powerful throwing basically impossible.
I don't have a linky for this, but I'm sure people have tried to teach apes & chimps to throw. Aside from chimps teaching themselves to throw poo at zoo visitors.
But in general they don't have the neural hardware to time the release as precisely as we do. Even a millisecond of difference changes the trajectory a lot.
No source either, but I believe it also has to do with the composition of our muscle fibers. Other apes have more powerful fibers, but this comes at the cost of less accuracy. Throwing requires both.
A human on a bicycle is only more efficient than a wildebeest if the course being traveled is paved. If you had to bike the path a wildebeest roams you might not make it.
True, I remember reading that humans are the only animals that sweat. Some animals like hippos create thick oily sebaceous sweat but I don't think any other animals just spill water from their pores like us. That's what gives us such incredible endurance
I was intrigued by your comment and did five minutes of research.
Apparently you're half right - pretty much every mammal will sweat to some degree (it's almost a defining feature of mammals - mammary glands are modified sweat glands) - but only a limited number of mammals sweat heavily, horses being another notable example.
No, endurance hunting is mostly about running. To quote Wikipedia [1]:
"Persistence hunting (sometimes called endurance hunting) is a hunting technique in which hunters, who may be slower than their prey over short distances, use a combination of running, walking and tracking to keep pursuing prey over prolonged time and distance until it is exhausted by fatigue or overheating. A persistence hunter must be able to run a long distance over an extended period."
> Still, we are the apex predator of this planet, thanks to our sole distinguishing feature, which is our intelligence and ability to cooperate.
Indeed, people seem to overlook the fact that superhuman intelligence already exists as human organizations. A company has the institutional knowledge to create things that no individual human could ever come close to, and many of them have the narrow focus of theoretical "paperclip maximizers." These organizations even have the ability to use their superhuman capabilities to improve themselves, and they do to a certain extent, but they don't create the cartoonish runaway feedback loop people imagine. Reality ends up being a lot more difficult than handwaved fiction.
The idea of an AI apocalypse has become the quasi-religious dogma of certain groups, but it overlooks the real issues we face. I suppose when you see something every day, it's easy to ignore its extraordinary nature.
> people seem to overlook the fact that superhuman intelligence already exists as human organizations
Culture is faster than meat.
Faster to inculcate. Faster to copy. Faster to communicate. Faster to update.
And it's a lot harder to kill.
The day our primate progenitors began passing down their own knowledge to subsequent generations was the greatest leap forward in global learning rate that our planet has ever seen.
I think the jury's still out on the impact of printing it out into a three-ring binder.
> > people seem to overlook the fact that superhuman intelligence already exists as human organizations
> Culture is faster than meat.
> Faster to inculcate. Faster to copy. Faster to communicate. Faster to update.
> And it's a lot harder to kill.
> The day our primate progenitors began passing down their own knowledge to subsequent generations was the greatest leap forward in global learning rate that our planet has ever seen.
> I think the jury's still out on the impact of printing it out into a three-ring binder.
Culture and memes are the software side of evolution.
Your body and genetics are the hardware: it takes a really long time to evolve a different body (thousands of generations), but very little time to update culture (a couple of generations).
There is evidence that single-generation evolution can happen with strong enough shocks. Nothing immediately obvious, just detectable DNA changes that lead to changes in risk for certain conditions.
That's epigenetics as opposed to genetics (DNA). Epigenetics allows for changes within an organism's lifetime, and includes the ability to pass the effects of environmental stressors along to the next generation.
There is a great book from MIT Press, [Evolution in Four Dimensions](https://mitpress.mit.edu/books/evolution-four-dimensions), that goes into how genetics, epigenetics, behavior, and symbolic transmission of data between generations guide evolution.
This is a good reason to start thinking about corporate lysing. Individual people who do bad stuff will die someday. Corporations don't have that problem and can grow unchecked, like a certain illness.
There's a good video by Robert Miles about that 'Why Not Just: Think of AGI Like a Corporation?' (youtube.com/watch?v=L5pUA3LsEaw)
Corporations are kind of like AIs, if you squint. How hard do you have to squint though, and is it worth it?
In this video we ask: Are corporations artificial general superintelligences?
> The idea of an AI apocalypse has become the quasi-religious dogma of certain groups, but it overlooks the real issues we face. I suppose when you see something every day, it's easy to ignore its extraordinary nature.
Not at all.
Pointing out "here are some problems we face" does not mean that other problems don't exist. We can have more than one problem at the same time.
Oh that's a nice way to put it - members of an organisation pretty much are its neuron bundles, ones smart enough to introspect and reorganise themselves.
Though I think the organisation's intelligence might be more parallelised/horizontal than vertical, i.e. it can solve many problems simultaneously (manufacturing, accounting, strategy, R&D, HR, etc.), but not necessarily combine most of its members to solve extremely complex single problems (say, a mathematical one).
Extremely rarely relative to their power, and relative to control over individuals.
There is some regulation, and an occasional war to shatter a regime.
The problem with this exponential growth nonsense is that there are limits to growth. If we had 3% exponential growth for two thousand years we would be colonizing a big fraction of the galaxy by now.
The coronavirus hasn't grown exponentially; there have been ups and downs.
With capitalist economies there are depressions and recessions. The napkin math doesn't work.
The concept of uninterrupted exponential growth every single year is a fallacy. Even interest embodies this fallacy: the lender demands compounding interest even if the economy isn't growing that year, so some people must default on their loans.
Really, all the exponential growth is at best logistic growth with an unknown upper bound. That upper bound can increase over time as society develops but it doesn't grow exponentially. It's more like the exponential phase of logistic growth is an attempt to get to the upper bound as quickly as possible.
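To put rough numbers on that (a minimal back-of-the-envelope sketch; the 3% rate and the logistic carrying capacity below are illustrative assumptions, not anyone's actual forecast): 3% compounded for 2000 years is a factor of roughly 5 × 10^25, while a logistic curve with the same initial growth rate simply saturates at its upper bound.

```python
# Back-of-the-envelope: compound (exponential) growth vs. logistic growth.
# The 3% rate and the carrying capacity K are illustrative assumptions only.

def compound(rate, years):
    """Total growth factor after compounding `rate` annually for `years` years."""
    return (1 + rate) ** years

def logistic(x0, rate, capacity, years):
    """Discrete logistic growth: roughly exponential at first, then flattens near `capacity`."""
    x = x0
    for _ in range(years):
        x += rate * x * (1 - x / capacity)
    return x

print(f"3% compounded for 2000 years: x{compound(0.03, 2000):.1e}")            # ~4.7e25
print(f"Logistic, cap at 1000x the start: x{logistic(1.0, 0.03, 1000, 2000):.0f}")  # saturates near 1000
```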
> But mankind, as a whole, is a clippy optimizer already, albeit a manual one. Right now it is us who are destroying the life around us, replacing rainforest with cattle farms and suburban developments. We are already working on farming robots to serve our goals. Does the lizard care if it is crushed by a farming robot's wheel or an AI tank's tread?
We already have a built-in solution for this problem: biophilia. People like nature. Once we stop being poor, preserving nature and living in natural environments with trees and animals become high priorities. The people "cutting down the rain forest" will stop once they're comfortable enough to prioritize nature over avoiding privation, just like the Europeans did.
We used to build boats out of wood; now we build them out of metal. There's no reason we can't replace technologies that conflict with our biophilia with ones that don't.
Sufficiently comfortable isn't a moving target. People have fixed needs. Sure, we desire to go beyond those needs, but we also desire a living, healthy environment with trees and animals. It's only when we don't meet our fixed needs that we deprioritize our biophilia.
To make this very clear: rich people usually want a big house but they also want a big yard with nice plants. Many of them want it to be somewhat overgrown with "wildflower" beds, native bushes, honey bees and so on.
This process is working itself out right now for fossil fuels. As we become aware of the damage technologies do, we rapidly transition to technologies that do less damage. This change takes generations but it's clearly underway. We also can't expect people who are experiencing privation to prioritize their biophilia over their survival. There's of course a danger in not transitioning fast enough. There's also a danger in transitioning too fast and killing lots of people in the process. Seems to me we're doing fairly well.
I'll never forget the dread, or the beautiful simplicity, of the Borg. As a child, few things in literature impacted me as much as that idea. It's so plausible that they likely already exist. They solved all of the problems that biological entities need to solve in order to colonize the cosmos. It was terrifying, but as I grew older I realized that their loss of humanity was a feature of evolution. Horrifying and oddly comforting at the same time.
For a while, the Borg represented my idea of absolute horror - unstoppable, zero goals in common with us, they never negotiate because they're so different. They seriously nerfed them later though, so they kinda lost that vibe.
It's an interesting thought experiment to imagine if the Borg had optimized for aesthetic appearance - which is to say, if the TNG showrunners had made them pretty.
Their hybrid organic/mechanical appearance is obviously calibrated by the show designers to unsettle viewers and produce a negative reaction. See: David Cronenberg
But how would we (TNG viewers) have felt about the Borg if they looked like us... but were just better?
(Voyager sidestepped this by making a pretty Borg explicitly not-Borg. But still had some more interesting exploration of the right- or wrong-ness of Borg philosophy)
The crazy thing about the Borg is that people would absolutely sign up for assimilation if it were consensual. The perfect peace and tranquility of abandoning your self to a vast collective, having your entire civilization archived, essentially posthuman eternal life (depending on whether the "Borg" really exist as data, just using drones as necessary physical interfaces, which is my own headcanon but I don't think actual canon). People would absolutely give up individuality for that.
For a species with vast collective intelligence and access to data on countless cultures, they seem to have the worst possible approach to their goals. They assimilate by nanoprobes, after all - they could easily create replicants that resemble a target species and have them blend in, slowly spreading a dormant "infection" that doesn't trigger until it's spread through enough of the population. Just throwing cubes at anything with a heat signature works in a brute force way, but there are more elegant methods.
A more aesthetically pleasing, subtle Borg as a foil to the Federation would be interesting. It frustrates me how a lot of new Trek has been sort of teasing at plot points involving AGI and synthetic life and cybernetics but they never really dive into any of it.
IMHO the early Borg represented a fundamental contrast:
The Federation: Freedom, independence, self-determination
The Borg: Hegemonic for the sake of it, absolutely opposed to individuality
Even beautiful Borg would be horrendous, but depending on the context it might take away from what makes them so different. The Borg look the way they do because they don't care about their bodies; beauty means nothing to them.
The Federation has a very rigid and even conservative stance on body modifications - and of course there's their whole agenda regarding individual freedom.
Nowadays, my idea of absolute horror is an Outside Context Problem (viz Excession by Banks).
Have you read Vinge's A Fire Upon the Deep? You might enjoy it, if Excession sparked your fancy.
Note: Vinge is more circumspect and less obvious about what's going on, so he expects you to infer more than Banks does. Also, I'd recommend reading Fire without Googling it, as that would spoil one of its nice literary sleights of hand.
A lot of these stories aren't actually about a life form smarter than us, but faster - the grey goo or paperclip maximizer doesn't need any particular intelligence to take over the world, and may in fact be shooting itself in the foot by doing so.[0]
But I don't see any irony or "you shouldn't be worried about it, you're already doing it" in there: the concern is themselves being destroyed, not the world being destroyed. It's only irrational if you assume the people concerned are selfless.
[0] Of course, humanity may also be shooting itself in the foot with many of its own initiatives here!
Yeah, this is the funny thing - rationalists are very concerned about paperclip maximisers, yet they're also often libertarians, despite the fact that capitalism itself is a paperclip maximiser that optimises for capital accumulation over any other goal, including maintaining its own environment. Selective blindness, I guess - which is ironic for a group so interested in avoiding cognitive bias.
When you're trying to sell everyone - including yourself - on your superior rationality, it might be considered a feature and not a bug.
What would be ironic would be a solarpunk maximiser AI that forced humans to stop doing that shit because it's stupid and self-harming and there are better alternatives.
And the only real cost would be loss of face and status for less than 1% of the population.
Maybe that's not rational either. Hard to tell until it's been tried.
I can't find it, but there's a short story on the super-cognition trope, oriented along the lines of human cognition. What struck me about it is that ultimately the point-of-view character is the bad guy, for these sorts of reasons.
No, the point is that despite his superintelligence, he's ultimately selfish - what he obsesses over is achieving an ever higher organization of knowledge and its implications, rather than wanting or trying to apply his discoveries practically to help others. I.e., he comes up with how to build an anti-gravity device with present-day technology, but doesn't find that very interesting because, having made the discovery, he already knows it will work, and its implementation wouldn't contribute to his greater understanding and organization of knowledge.
The opposition who takes him down is the opposite: after working out how to build efficient fusion generators, his premise is that further thought is not needed until the solution has been implemented - and his plan is to set out to apply his conclusions to uplifting the world.
Of course the opposition isn't "good" per se - he's still going to use the ability to out-think everyone to accomplish some measure of enslavement, but the plan at least is to try and improve the world for all humankind.
In a practical sense to the rest of us of course, the future reality was going to be alien cognizances subjugating the world in one way or another.
Selfishness in this context is a balance between different forms of self-identification. e.g. national identity is a new invention designed to foster cooperation over previous bounds of identification. If it worked once, why not try it again...?
Lack of cooperation on Earth is a bloody affair over constrained resources, so an intergalactic campaign under the same assumption may lead to an advanced alien cognition desiring to expand our sense of self. I.e., if multiple timelike dimensions exist, then our survival mechanism is maladapted to that topology. If you discover this constraint expansion on your own, it is freedom. If you are told, it is subjugation. ...or so I muse.
And on the other end you have the AI of 40k, which rightly decides that the best overall action is to get rid of humans... purely from a utilitarian perspective, weighed against Chaos.
The other day my SO and I were pondering a story wherein aliens visit and essentially say “oh hey, you sure have a bad case of capitalism! We had one too, here’s how to break it, we’ve helped about a hundred planets get through it now.” Same idea. Wish I had the faintest idea what strategies they'd present to make that happen.
Well, the reason we can't really experiment with other options is because very wealthy people use the US to kill (often literally) alternative approaches. The 70s and 80s were full of US black ops to collapse socialist governments in South America.
Capitalism doesn’t set any goals, people do that. It just recognises and guarantees individual ownership rights and economic freedoms, but what people do with that is up to them. It’s just that achieving any material goal takes capital, whatever your economic system. Elon Musk is as capitalist as you can get, but his goal isn’t to accumulate as much capital as possible, it’s to build a civilisation. Bill Gates is a capitalist and is busily giving away as much capital as efficiently as he can.
This is of course all about goal setting. I think those fearful of maximisers are being rational, but if we're going to build AIs then we need to make them part of our community. The things that stop us as individuals from going off the rails and taking everyone else around us down with us are respect for community and for others' rights and interests, and the need to seek consensus and consent. Figuring out how to implement that is the real challenge.
I would urge you to take a closer look at Elon, Gates, or any other billionaire who pledges to have any goal other than "accumulate more."
For example, Elon's personal spending and the structure of his compensation. If he just wants to go to Mars, why do the stock price and his share thereof matter so dang much?
Or Bill's net worth, which has curiously increased despite his mission of giving it all away.
The stock price matters because it affects SpaceX's ability to raise capital, and going to Mars takes a lot of capital. His share of the stock matters so he can retain control, so that he can make sure SpaceX remains focused on going to Mars, as against anything else. If it were just about making money for himself he'd float the company.
Capitalism is a system that rewards capital accumulation, the richer you are, the more power and freedom you have. Therefore, it incentivises capitalistic behaviour. People can break with those incentives, but I'm not critiquing people's ability to have good goals - I'm critiquing capitalism.
Capitalism does optimise for property rights, that's true - but if you're taking a utilitarian perspective (as all good rationalists should) then you have to ask what the material outcomes of that are. What we see is accumulation of massive power in the hands of a few, many of whom are doing active harm to the planet (just look at oil billionaires' behaviour). Supporting them is a worldwide underclass of people working in sweatshop conditions or actual human slavery. Maybe optimising for private property isn't providing very good outcomes.
What we actually see is broad wealth generation that is relatively widely spread (albeit not perfectly uniformly spread), lifting billions out of the crushing Malthusian poverty that has been the fate of 99% of humanity, and the creation of social orders in which the rich, while they are rich, have strict limits on their ability to control the lives of the majority.
That broad wealth generation isn't caused by capitalism, it's caused by technological innovation. Capitalism is just the means by which the rewards are distributed. If we start a company together that increases grain yields, it could either be capitalistic and whoever "owns" the company just pays everyone else wages while keeping all the profit, or it can be cooperative where the rewards are decided democratically and everyone who works there "owns" the company. The latter example is not capitalism, even with the presence of markets!
But as a matter of fact, that's not what happens. What actually happens in the actual real world is:
1. The owner of the company doesn't keep all the profit -- they share out the vast majority of it. (Look at the retained profits vs gross margins of any company you can think of. Look at the wages that you, yes I bet you yourself, are paid).
2. When we don't do it this way, we don't actually get the innovation. The thing that incents people to solve the problems of bringing new products to the public (the very large problems, the problems they can spend the majority of their lives attacking) is the possibility of reward.
1 depends on the industry: some industries are low-margin while others are high-margin. Either way, why don't the people who generated the profit, the workers, get to decide how the profit is divided?
2 isn't true. Not only is most innovation funded by taxes and conducted by public institutions, but it's the market that drives iterative and combinational improvement on new inventions, and you can have markets without capitalism. Capitalism is the division of people into capitalists, who own the means of production (traditionally factories but nowadays increasingly IP), and workers, who work them for a wage. The first step in moving in the right direction, IMO (I'm keen on incrementally and cautiously implementing improvements), would be to ratchet up the number of cooperatives operating within the economy. They're more likely to survive than corporations and they're more empowering for workers.
I'm pretty sure that the way it worked is the vast majority of wealth accumulated with the King and 7-10 people close to the King. Everyone else... was not important to the King. Capitalism is simply an economic system where regular people can own the means of production. It is not about distributing rewards. That is the language of a feudal economy.
A system can distribute rewards just as a person can. If acting in the capitalist economy incentivises specific behaviours, that economy is distributing rewards based on those incentives.
Systems are operated by people, which is why many great ideas on paper devolve into dictatorships, oppression and death in real life. See Ukraine for an example.
Putin's dictatorship is operated by a person, but capitalism is not - it's made of people. Nobody runs the markets, they run by themselves based on people's participation, governed by rules. Invisible hand and all that.
Half my family is Chinese. Expansion of the capitalist sector of the economy has transformed that society and raised hundreds of millions out of grinding poverty and into the middle class, including my wife's family and relatives.
Why did this not happen in India, a country which had many advantages over China starting from 30 years ago? It's because the Indian commercial economy is strangled by socialist-inspired bureaucracy and anti-competitive special interests. For all that China claims to be communist, its commercial sector is cut-throat capitalism of a particularly raw form. Much rawer than I would like - Chinese people suffer from some of its excesses - but nevertheless it has produced an incredibly powerful and dynamic economy.
I don't know whether what you say about India is true, or what socialist-inspired special interests would be, given that socialism is about removing class distinctions and thus special interests, but yeah, I agree that Maoist policies were a disaster and market reforms were an improvement. But that's not a surprise to anybody, including Marx - who very clearly said that capitalism was better than dictatorship (or monarchy/feudalism, the only kind of dictatorship prominent at the time). It seems to me that India is just full of corruption, given that they have both billionaires and crushing poverty at the same time.
My primary political concern is reducing suffering. Capitalism leads to less suffering than dictatorship, but also leads to things like 996 and Chinese factory conditions, and thus should be either moderated or replaced with market socialism.
There are some people who think that humanity should reach its maximum potential as quickly as possible and that we must make sacrifices along the way, even if it means dooming other life forms (including humans) into misery. That misery then acts as a deterrent, nobody wants to be poor because it means homelessness or starvation, so everyone tries their best and works as hard as they can.
There are some people who think that humanity is doing pretty well; they are not against further improvement, but they prefer the preservation of humanity, even if it means not living up to its theoretical potential, because sustainability is inefficient over the short term. The goal should be to make sure that everyone benefits from society's progress to the extent that they want to, which in theory should eradicate homelessness and starvation, as nobody is willingly risking death.
Let's call the first system capitalism and the second one socialism (not necessarily inspired by Marx).
There are two obvious problems. Humans are lazy creatures; economists who postulate that human wants and needs are infinite are kind of wrong. There are lots of humans who basically do nothing but watch TV. The stereotypical "welfare parasite" shouldn't even be able to exist, since he is obviously going against his own interest in satisfying his supposedly unlimited needs. So the argument that capitalists make is that lazy humans have to be whipped into shape and motivated. However, if the capitalists go too far and overexert their workforce or destroy the environment, capitalism becomes less effective over the long term. An optimal system would be hard-working when there is lots of work to do and lazy when there isn't much to do. It can't be stuck in either state, or maybe it needs to do both simultaneously.
In my opinion, the next step we should take is to remove the dependence on economic growth to keep the economy stable. If growth happens, that's great, but we shouldn't force it. We need to take small steps in the right direction. Nobody on this planet needs a big step, e.g. a Marxist revolution.
I know that some people might subscribe to the latter ideal, but that's not my belief system. I believe that we should continue to innovate and improve our lives, but that the people who do the innovating, not just their bosses, should see the reward of their efforts. That's why I'm a (libertarian) socialist. Socialism isn't about making sure everybody has equal resources; it's defined by the phrase "from each according to his ability, to each according to his contribution", as opposed to communism's "... according to his need". Communism is meant to describe a post-scarcity economy, realistically. Socialism is about removing the class division between worker and capitalist; that's really it. You can have welfare and whatnot, and that's a good thing in terms of social utility, but it's not inherent to socialism. That's more social democracy, which is a reformist ideology where you just tack amendments onto capitalism, rather than a socialist ideology.
I do agree with not wanting insurrection, though - the time for that in the West has probably passed. The global South, on the other hand, is still oppressed enough to warrant taking up arms.
Who exactly do you have in mind, that is a big proponent of capitalism yet is unaware of its nature? Most X-risk theorists I'm familiar with are relatively unconcerned with capitalism, as its notions of profit and wealth are inherently human-aligned.
Its notions of profit and wealth aren't human-aligned, though. That's my concern - rationalist types have an idealistic view of capitalism because they're usually its beneficiaries (being typically wealthy and Western).
Every final (non-waste) output of a capitalist system is intended for human consumption. Compost, pet food, sheet metal, etc are all intended to satisfy human preferences. Price signals ensure that only things which humans value are economic to produce.
OK, that's optimising for efficiency, not for human values. If capitalism optimised for human values, we'd all be paid to house the homeless, free slaves, improve conditions in the global South, heal the sick, feed the hungry and so on.
That's not how values work - people go against their values all the time due to incentive structures. I want everybody in the world to be fed, but not at the cost of myself being tortured for a million years. Our goal as a species should be to align our incentives with our preferences so the behaviours that get rewarded are the ones that we want to see.
Capitalism is not anarchism, where there is no state to generate public goods, and libertarians are not unaware of collective action problems and the utility of government in solving them.
Libertarians are in favor of minimizing restrictions on voluntary interactions between consenting parties. Most are fully in favor of government intervention to deter private interactions which are not voluntary, like assault or pollution.
How the commons - which exist outside the purview of private property - are treated is completely orthogonal to one's belief in libertarianism. For example, you can be a libertarian and believe no pollutants should be allowed to be emitted into the atmosphere, because the atmosphere is part of the commons and should be maintained in a way where it remains useful to humanity at large.
Voluntary interactions between consenting parties can still be extremely coercive if there's a power imbalance. In the Roman Empire, people used to sell themselves into slavery to pay off debts.
No, it can't be coercive, by definition. If a jury of your peers deems that you did not provide informed consent, then the contract is void. A contract cannot be deemed both coercive and valid under common-law/libertarian principles.
Coercion is when someone reduces your option set, by threatening violence (against the person or property of you or someone you love). An offer, that you are free to reject, and have the faculty to fully comprehend, cannot meet the definition of coercive.
As for selling yourself into slavery, that calls into question whether the you that exists today has enough of a right over the you that exists in the future to sell that future you's liberty away, and I am fairly certain a court would rule that you don't.
But these extreme/fantastical scenarios are not what libertarians are arguing about. There are far more mundane examples of people's liberty being repressed by anti-libertarian laws, that anti-libertarian apologists cannot excuse, so they resort to these hyperbolic examples.
Your answer avoids the ethical dilemma by means of a technicality about temporal ownership; it's not a very good rebuttal. Also, it's just one example of how voluntary transactions can be coercive; here are a few more just off the top of my head:
- demanding sex in order to give an actress a part in a movie
- offering food to a starving man in exchange for his kidneys
- buying all the food in the market to drive up the price from people desperate to not die
It's a point about the ethics of exerting control over different versions of ourselves. It's not some procedural technicality.
>demanding sex in order to give an actress a part in a movie
There is nothing coercive about this.
>offering food to a starving man in exchange for his kidneys
Unless it's an emergency, where the starving man does not have the time to make/entertain offers on the wider global market, there is nothing coercive about this either.
In an emergency situation, different rules apply, as a consequence of the physical space we each take, which displaces others. This exclusionary effect creates a duty to render help in an emergency, as opposed to abusing one's position in such a situation to exploit others.
>buying all the food in the market to drive up the price from people desperate to not die
There is nothing coercive about this, and in practice, it's impossible. This should certainly be guarded against by individuals, through means like securing long-term food purchase contracts through co-ops, or by growing one's own garden, but that doesn't imply that it's a form of coercion which we have an ethical right to employ coercion to remedy.
I don't understand what your definition of coercion is, if withholding care upon receipt of payment is not coercive but demanding payment to avoid harm is. They're materially the same thing: you're paying to not die.
You can't really argue with deontological positions given they're axiomatic, so there's probably no reasoning I can give that you would consider.
>>if withholding care upon receipt of payment is not coercive but demanding payment to avoid harm is. They're materially the same thing: you're paying to not die.
Could you elaborate? What exactly do you mean?
By "withholding care upon receipt of payment is not coercive", do you mean a person that commits to providing aid in exchange for payment, and then reneges on their obligation upon receiving payment? This is clearly theft/fraud, and in the same class of behaviors as coercion.
I'm totally confused as I don't know when we discussed such a scenario and why you think that I don't consider that coercive.
I mean that the two cases below are materially the same:
- you get shot and I withhold my care based on a ruinous payment
- I threaten to shoot you unless you give me a ruinous payment
Obviously, as you said, these are morally different because I am acting aggressively - but that is not a utilitarian argument. In terms of utility (and I am a preference utilitarian, so that's what matters to me - rationalists typically claim the same framework) these two scenarios are identical, and thus have the same moral weight.
The latter you have claimed is coercive; the former (e.g. withholding food from a starving man) you said was not. Therefore, your definition of coercion is not rooted in utility.
Like I said, in emergency situations different rules apply, based on the principle of the exclusionary effect I mentioned earlier.
The normal libertarian ethics don't apply in such cases. In the ordinary non-emergency market, Newtonian/libertarian ethics do apply, and those are the circumstances you are avoiding contending with by focusing on extreme outlier/edge cases.
I think these outliers demonstrate an inherent flaw in capitalism (among others) where power imbalances dictate capital accumulation. The rich get richer and the poor get poorer because the rich are in a better negotiating position so the poor must take what they're given. The obvious example being that workers must sell their labour while capitalists can buy labour and retain profit, leading to capital growth. Marx documented the tendencies of capitalism pretty well to be honest.
You don't seem to be engaging with the implications of what I'm saying, rather just engaging with the specific examples, so I'm not sure you're really willing to engage with the arguments.
>>I think these outliers demonstrate an inherent flaw in capitalism (among others) where power imbalances dictate capital accumulation.
These outlier situations have properties not present in normal circumstances, regardless of the power imbalance in those normal circumstances. Normal circumstances mean that Apple, despite being worth $2.8 trillion, can only induce me to purchase its product, if it is the best option for me amongst the entire set of options globally available to me. The harmful/coercive effect of the power imbalance vanishes in a free market, because by the ruleset of the free market, Apple cannot use its power to do anything except provide a better quality product for a lower price.
>> The rich get richer and the poor get poorer because the rich are in a better negotiating position so the poor must take what they're given.
The poor don't get poorer in a free market. We've seen wages grow over 20-fold in the 200 years since 1822. This is after adjusting for inflation, meaning a greater than 20-times material improvement in people's living conditions. The places in the world with the highest wages are precisely those which adhered to the free market ruleset most closely, and for the longest duration of time.
>>You don't seem to be engaging with the implications of what I'm saying
Yes, Apple does not have the power to literally force you to do anything, which is an improvement over feudalism. However, if you take Apple alongside all its inputs, what you have is an extractive model where shareholders capture the value created by employees and suppliers, and the employees of those suppliers are often the ones being ruthlessly exploited (the aforementioned cobalt miners, Chinese factory workers, etc). Nowadays the truly horrific exploitation is done abroad. The West barely does any material labour like resource extraction or manufacturing; we just shuffle paper around and live off capital extraction and administration.
> The poor don't get poorer in a free market.
Real wages in the USA have stagnated since the 1960s [1] as neoliberal (aka libertarian) policies have been implemented. The US has some of the worst rates of education, medical expense, and income inequality in the world. Many cities are unaffordable to the service workers and labourers that those cities require. At the same time, the value of real work being done is extracted from countries in the global South by neocolonial policies. That's not the kind of ethical ideal we should be aiming at.
>>Yes, Apple does not have the power to literally force you to do anything, which is an improvement over feudalism. However, if you take Apple alongside all its inputs, what you have is an extractive model where shareholders capture the value created by employees and suppliers, and the employees of those suppliers are often the ones being ruthlessly exploited (the aforementioned cobalt miners, Chinese factory workers, etc).
The activity of Apple is massively beneficial to China, and the other suppliers downstream of it. You're simply accepting the Marxist principle that profit is exploitation, and harmful to society, which is not economically valid, and oblivious to the facts around the impact of foreign investment in countries like China.
>>Real wages in the USA have stagnated since the 1960s [1] as neoliberal (aka libertarian) policies have been implemented.
This is incredibly misinformed.
First of all, that link looks at wages, not compensation. The non-wage component of compensation has grown more rapidly than the wage component of compensation. Looking solely at wages, instead of compensation as a whole, is seemingly a deliberate choice to give people a sense of being victimized.
Second, the USA is NOT the world, and it has NOT implemented libertarian policies since 1960.
Market rights have been increasingly encroached upon since 1960 as a result of politically instituted government interventions that, according to basic economics, reduce productivity growth, exacerbate income inequality, or both.
To say nothing of the rapid expansion of the regulatory bureaucracy, and the barriers to competition it creates, which favor major private-sector incumbents and the highly paid consultants/lawyers/lobbyists who know how to navigate it.
Another example is the rapid increase in land-use restrictions in the major high-productivity population centers, which according to many analyses has massively increased income inequality while inhibiting productivity growth. E.g. this study estimates that reducing land-use restrictions in just three cities (New York, San Francisco and San Jose) would increase GDP by 36.3 percent:
The world at large, which has moved more toward libertarianism since 1990, has seen the most rapid rise in wage growth in human history over the last 30 years:
I actually don't care about profit, I care about how it's distributed - it should be given to the people who generated it. That's not how capitalism works.
You can't honestly say that the Reagan era led to increases in regulation?
I don't support regulations uncritically; I think rent control plus NIMBYism is a disaster. You'd ideally want to build more accommodation, preferably apartment blocks over single-family homes.
I'll admit I'm not a policy wonk, so I don't know how non-wage compensation plays into it. I do know that the American healthcare system is fucked and should be transitioned to a single-payer system.
Your other comment, about cooperatives and labour rights, was just the perfect-market fallacy and completely ahistorical, respectively, so it doesn't merit further analysis.
I've given so many examples of people being victimised in this thread; you just keep restating your opinions and moving on when I make my point, so it's pretty clear you're immovable in your ideology, and there's no point continuing with this. I didn't engage with this thread to debate with libertarians when my established premise was that libertarians never move from their positions.
Like I said, your explanation for that not being how capitalism works, which was:
>>However, if you take Apple alongside all its inputs, what you have is an extractive model where shareholders capture the value created by employees and suppliers, and the employees of those suppliers are often the ones being ruthlessly exploited (the aforementioned cobalt miners, Chinese factory workers, etc).
Is straight Marxist exploitation theory, and it's not grounded in the scientific consensus in economics. It discounts the value the investor provides by saving capital and risking it on a venture. It assumes wage earners do not benefit from profit-motivated investments unless they share in the profits of the enterprise they are employed in, which is totally contradicted by basic economic theory and a mountain of evidence from the last two hundred years.
>>You can't honestly say that the Reagan era led to increases in regulation?
The Reagan era experienced a quantifiable increase in regulations:
>>Supporters say the growing number of administrators is needed to keep pace with the drastic changes in healthcare delivery during that timeframe, particularly change driven by technology and by ever-more-complex regulations. (To cite just a few industry-disrupting regulations, consider the Prospective Payment System of 1983 [1]; the Health Insurance Portability & Accountability Act of 1996 [2]; and the Health Information Technology for Economic and Clinical Act of 2009. [3])
Beyond federal regulations, state and local regulations increased as well. The land-use study I linked to earlier explains how much land-use restrictions, enacted at the local government level, have increased since 1960, and how much this has contributed to income inequality and productivity growth stagnation.
Those who benefit from centralized government control over society, e.g. the public sector unions and a fully unionized mainstream media, have completely taken over the narrative, and have convinced the masses that what the US experienced since 1960 was a move toward libertarianism. It's an outrageous piece of misinformation/swindling-of-the-public.
>>Your other comment, about cooperatives and labour rights, was just the perfect-market fallacy and completely ahistorical, respectively, so it doesn't merit further analysis.
Ask the millions who starved under communism and then improved their lot under capitalism which system they prefer.
Debating the various theoretical aspects of various *isms is fun and all but there is practically no immigration from capitalism to communism. The poor know which one is better for them and vote with their feet.
You realise that even Lenin, piece of human filth that he was, didn't consider the USSR to be communism? That's just not what the word means. The USSR was (pick one):
- A Marxist-Leninist state
- State capitalism
- Bolshevism
I don't care much about the terminology, but I do care about people making out that most left-wingers want more USSR. I mean, I'm an anarchist, so if I were alive in the USSR I'd be murdered, so that should tell you the level of friendliness between those people and libertarian socialists.
Any society where Marxist doctrines came to predominate ended up suffering for it. We can quibble over what you call the USSR, but Marxism is simply a-economical, and causes unimaginable human suffering to the extent that laws are shaped by it.
You can literally just switch out corporations with cooperatives to make a market socialist economy. Marxist "doctrines" also led to the 5-day, 8-hour working week, paid leave, unions that increased wages, and many other workers' rights. Hell, before the left wing gained power, child labour was predominant in capitalist economies.
If cooperatives were as efficient as corporations, they would win out in the free market, with consumers choosing their products over those produced by corporations. Repressing the free market necessarily means preventing the most efficient means of production/coordination from becoming predominant, based on crude ideologically motivated narratives about how an economy is supposed to work.
>>Marxist "doctrines" also led to the 5-day, 8-hour working week, paid leave, unions that increased wages, and many other workers' rights. Hell, before the left wing gained power, child labour was predominant in capitalist economies.
No it didn't. All of those things emerged as a result of the rising levels of per capita GDP that were due to profit-motivated investment in a free market.
Child labor could only be abandoned when productivity increases enabled parents to support their whole family alone, and that productivity increase was, again, a consequence of profit-motivated investment in a free market.
>Coercion is when someone reduces your option set, by threatening violence (against the person or property of you or someone you love). An offer, that you are free to reject, and have the faculty to fully comprehend, cannot meet the definition of coercive.
In an economy that defines itself by a high degree of the division of labor, where people no longer gather their own food and are dependent on the cooperation of other people, being excluded from the division of labor is the same as ending their ability to participate in society. They must now scramble to get their own plot of land, work the fields themselves, and become self-sufficient. There are offers that people realistically cannot refuse.
I would count this as coercion because people who own money can refuse to spend it indefinitely. As money is needed to conduct transactions, it becomes harder and harder to trade the more people hold money for the sake of having money, instead of using it exclusively as a medium of exchange as many economists (even Marx) postulate they do, even though that assumption has been proven wrong hundreds of times.
A store of value cannot also serve as a medium of exchange; those are two mutually incompatible roles, and this conflict makes the money-to-goods transaction inherently coercive. Ironically, it also backfires, because the only way you can beat that coercion is through psychological manipulation. How do you convince someone to get something they don't need? By creating an artificial need.
>>In an economy that defines itself by a high degree of the division of labor, where people no longer gather their own food and are dependent on the cooperation of other people, being excluded from the division of labor is the same as ending their ability to participate in society.
No one has the power to exclude you from the market and the benefits of the division of labor. You would only be excluded if what you provide is of such little apparent value that no one wants to trade what they have for what you provide. That is not coercion. People have a right to not associate with you, even if that choice means you starve to death. You do not own other people.
>>I would count this as coercion because people who own money can refuse to spend it indefinitely.
People can barter. People can create their own money. When the state, under anti-libertarian principles, intervenes and prohibits voluntary exchanges involving certain kinds of money, then yes, these problems can theoretically emerge. This is why libertarian philosophy is so adamant about not using the state's apparatus of violence (the courts, police and prisons which compel compliance with state orders) to restrict people from engaging in voluntary interactions with other consenting adults.
I have to mention that the silver/gold standard was introduced by many authoritarian rulers because they controlled the gold/silver mines. This includes the Roman Empire. As the supply of precious metals is limited, it concentrates at the top, and this was very desirable for monarchies.
The only way to get gold is to dig it out (mines owned by monarchs), borrow it with interest (which leads to the usual debt trap) or to sell your products and services to those who already have gold (which is heavily concentrated geographically and in the hands of the rich).
If it is mandated that you must use a certain currency, then this automatically leads to a power imbalance between those who have money and those who don't. If you need to buy food or pay for shelter then you can't wait, but the rich have all their basic needs met; they can afford to wait until you are desperate enough to accept any offer. This is why Austrian economists are kind of right when it comes to competition among currencies: a terrible currency can ruin an economy. However, they still insist on the gold standard because they love playing the monarchy game; they even admit it themselves.
Even in modern times the rich benefit from economic crashes because it means desperation and fire sales, in other words, discounts for those who already have more than they need for themselves.
Very true, and this is why right-libertarian ideas for increasing liberty are doomed to fail: all they ever do is change who's in charge, they never really question the system of putting people in power over each other.
> How the commons - which exist outside the purview of private property
Ah, but that's the crux, isn't it? Because the implication of this sentence is that the very nature of "private property" and "the commons" is socially constructed - what can be private and what cannot be. If it is socially constructed (as it clearly is), then the scope of private property becomes a matter for public and political debate, really with no limits. Should land ever be private property? Should resources in the ground ever be private property? Water rights? Should anyone be able to own a home?
Libertarians (and many other people) may feel that the answers to such questions are either (a) obvious and/or (b) dictated by the rest of their political philosophy, but the moment you open the door to the possibility that there are some things that remain "in the commons" and other things that do not, you are forced to face the social construction of private property as a concept, and with it the question of what it should really be.
>>Ah, but that's the crux, isn't it? Because the implication of this sentence is that the very nature of "private property" and "the commons" is socially constructed - what can be private and what cannot be.
It's socially interpreted, but I believe there is a natural justice, that promotes functional societies, that reasonable people can recognize, that will end up as the social interpretation via any process that is impartial and deliberative, like courts.
Natural resources are not man-made, and thus are not true private property. But we recognize a right to appropriate a small portion of abundant natural resources for our own use, when one does this to reconfigure the appropriated matter into a much more valuable form, whether it's reconfiguring an apple into our flesh through digestive/metabolic processes, or a log and stone into a hand axe. We recognize the natural justice of allowing the person to have perpetual exclusive rights to this value-added portion of the matter of the universe, so that they may benefit from the value they created.
>We recognize the natural justice of allowing the person to have perpetual exclusive rights
I completely disagree. Land ownership should never be perpetual. It should be time-limited. Old people own all the land, while young people must buy it from them. That is naturally unjust and forces people to use violence to obtain land. The only sliver of natural justice remaining here is the justice of killing "oppressors". If people do not step down from their positions of power voluntarily, then people will force them to step down.
I obviously do not want this to happen so I will argue against anyone who wants earth to be littered with wars over land.
Wars do not break out because people are evil; wars break out because our laws and rules during peacetime aren't good enough for everyone. If land is properly allocated to everyone, then borders become irrelevant as goals for war. The difference between trading for indirect access to land and owning it outright should be so slim that no one ever considers taking land by force to be worth it, because the peaceful methods work just fine.
>>I completely disagree. Land ownership should never be perpetual. It should be time limited. Old people own all the land, while young people must buy it from them. That is naturally unjust and forces people to use violence to obtain land. The only sliver of natural justice remaining here is the justice of killing "oppressors". If people do not step down from their positions of power voluntarily, then people will force them to step down.
Land ownership does not fit the definition I provided, which in greater context, is:
"a right to appropriate a small portion of abundant natural resources for our own use, when one does this to reconfigure the appropriated matter into a much more valuable form, whether it's reconfiguring an apple into our flesh through digestive/metabolic processes, or a log and stone into a hand axe."
Land is not scarce, and appropriating it is not a case of appropriating only that portion of the matter of the universe which one reconfigures into a value-added form.
So I agree, there can be no natural right to private ownership of land. A land tax, therefore, can be justified, as can other forms of state-intervention over how one may manage it.
> Land ownership should never be perpetual. It should be time limited.
Ownership of anything by a person is already time limited - limited to a human lifetime.
> If land is properly allocated to everyone
Let’s imagine every person in the world should have an equal holding of arable land (this ignores lots of other factors, but whatever!). Assuming 8 billion people and 1.6 billion hectares of arable land, that works out to 5 people per hectare.
To make it “fair”, the USA would need to more than double its population (≈2.3×, to about 760 million). New Zealand would need to approximately halve its population.
For other countries, find your arable land figure (in hectares; the left-most column of the main table is the 2016 figure) at https://worldpopulationreview.com/country-rankings/arable-la... and multiply by 5 to get the "fair" population, then compare that with the actual population.
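If you'd rather script that than eyeball the table, here's a minimal sketch of the arithmetic (the per-country hectare figures below are rough placeholder values for illustration, not numbers pulled from the linked table):

```python
# "Fair share" arithmetic: 8 billion people / 1.6 billion ha of arable land
# works out to 5 people per hectare.
WORLD_POPULATION = 8_000_000_000
WORLD_ARABLE_HA = 1_600_000_000
PEOPLE_PER_HA = WORLD_POPULATION / WORLD_ARABLE_HA  # = 5.0

def fair_population(arable_hectares: float) -> float:
    """Population a country 'should' have if arable land were split evenly."""
    return arable_hectares * PEOPLE_PER_HA

# Rough, illustrative hectare figures (check the linked table for real ones):
print(fair_population(152_000_000))  # USA: ~760 million vs ~330 million actual
print(fair_population(560_000))      # New Zealand: ~2.8 million vs ~5 million actual
```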
This sort of approach breaks down as soon as any non-renewable resource, notably land, is involved. You would likely consider a house you have just built on a piece of land as value-added with respect to the land; other people may regard your work as having negative value with respect to the land itself, which is a natural resource, and thus not true private property. And even regardless of the value judgement, because the land is a limited resource, your control over it is necessarily exclusive to others.
The same is true of your use of metal to forge an axe. While that may appear to be value-added to you, others may not share your assessment. However, the metal you used has now been removed from its natural condition (presumably in-ground ore) and is no longer available for anyone else to bring their own value-added use to, and thus your action has the potential to be a net negative to the rest of the population. It's not clear what rationale they might have for respecting your claim to have added value.
There's no clear reason for land ownership given the notion that you should only own value added, but libertarian conceptions of society would be fairly lost without such ownership. Nevertheless, for most of human history, the concept that anyone could own land in perpetuity would have been quite strange. Likewise, the idea that one could own land has also been the source of many of the most bitter conflicts in human history. This doesn't make the idea of owning land particularly compelling when looked at over the long arc of human history.
>>This sort of approach breaks down as soon as any non-renewable resource, notably land, is involved. You would likely consider a house you have just built on a piece of land as value-added with respect to the land; other people may regard your work as having negative value with respect to the land itself, which is a natural resource, and thus not true private property. And even regardless of the value judgement, because the land is a limited resource, your control over it is necessarily exclusive to others.
It doesn't break down. I consider land to be under the proper purview of the state, under exactly this principle.
>>The same is true of your use of metal to forge an axe. While that may appear to be value-added to you, others may not share your assessment. However, the metal you used has now been removed from its natural condition (presumably in-ground ore) and is no longer available for anyone else to bring their own value-added use to, and thus your action has the potential to be a net negative to the rest of the population. It's not clear what rationale they might have for respecting your claim to have added value.
Not really. The metal you used is so little that it can't have an appreciable negative impact on others. And the state can in fact charge you for taking that metal out of circulation, and redistribute the proceeds to compensate the rest of society for your extraction of it. But it is not reasonable for the state to simply never allow you to appropriate metal, under any conditions, or demand a right to anything you fashion from that metal, after it has already charged you a fee for your appropriation of it.
Now of course, if you go searching for edge cases, you will find them. A coherent moral philosophy doesn't eliminate the complexity of the world, or the challenge of navigating it. But it does give us a baseline moral philosophy that we can all agree on, and then together try to govern in accordance with.
>>There's no clear reason for land ownership given the notion that you should only own value added, but libertarian conceptions of society would be fairly lost without such ownership.
A land tax is perfectly consistent with both land ownership being an artificial right, bestowed by the state, and libertarianism.
It is also, intriguingly, the perfect tax from an efficiency standpoint, according to economists.
There is no rule in capitalism that says you have to optimize for capital accumulation over any other goal. There is not even an evaluation function, people can choose their own values.
If there is one thing that Marx was right about it is that once the accumulation of capital ends, capitalism itself breaks down.
Just listen to all the people kicking and screaming that interest rates should be increased or that 0% interest rates are wrong and negative interest rates (which represent decentralization of capital) shouldn't even exist.
There are consumer protection offices advocating that accounts shouldn't be charged negative interest rates, even though the charge only applies to checking accounts holding more than 100k€. Imagine the horror of having to get a one-month certificate of deposit instead. So some banks have decided to just charge flat monthly fees, which is incredibly regressive: at -0.5% you would need about 25k€ in your account before the negative rate costs as much as the monthly fee. Meanwhile, if you have millions in your account you pay the same flat fee - ah yes, subsidize the rich.
That would be true in any system: strong people have more power than weak people, intelligent people have more "freedom of thought" than stupid people, ruling politicians have more power, and so on. In capitalism, however, you only have power if you give people something they want. In socialism, for example, being "rich" would just mean being part of the ruling elite, but you would still have people who are "rich" and have more power than others.
And nothing in capitalism says you have to value buying a yacht higher than anything else. There is also no rule that says capitalism should have no laws whatsoever.
Power is also a good thing, for example being able to cure cancer or feed people is a power.
Natural power imbalances are fine, I don't advocate for the world of Harrison Bergeron. Ruling politicians and wealthy people are not natural power imbalances.
Power is something you have over another person, ie the ability to violate their preferences. That's an inherent negative. Presumably the power to cure cancer is something the patient would consent to, and thus not the kind of power I'm talking about.
Capitalism has led to the kind of voluntary transactions where child slaves are paid just enough to not starve so that mining companies can extract cobalt to sell to electronics manufacturers. While the idea of voluntary transactions sounds nice in the small, it plays out pretty badly writ large.
Most of what you say is just leftist framing, like power being a bad thing, or mining companies paying minimum pay to make people dependent. That is completely backwards. You pay people because you need something they can give you (their labor, or something they own or make, or whatever). If wages are low it is because of supply and demand. If you demand that people be paid more, their jobs might go away completely and they would be poorer than before.
As for wealthy people being not natural, as I said, usually you become wealthy by giving people something they want or need. That is the best thing to reward in a society.
What do you mean by "natural power imbalances", like somebody having more muscles than others? Then why can't you imagine that some are better businessmen than others, too? Was Steve Jobs just an evil guy becoming rich by exploiting his employees, or did he become rich because he made the world a better place for billions of people? Would the world have been better if Steve Jobs had been shot down at an early stage to give somebody else a "fair chance" in his place?
Ugh, I'm getting real tired of this thread, having to explain left wing arguments that are freely available to read. Just, if you want to understand what I'm saying, at least give Das Kapital a read. It lays out some of the basic arguments.
I haven't run out of arguments, just time to constantly refute libertarian ones. Like thinking the USSR and Maoist China were somehow good examples of Marxism when all their ideas were taken from Lenin, but whatever - I feel my point about libertarians was sufficiently proven.
Yeah, "but it wasn't real Socialism/Communism/Marxism" comes up as an excuse by leftists all the time. Of course it wasn't - because "real Socialism/Communism/Marxism" is impossible.
Lenin himself said the USSR wasn't socialist or communist, it was state capitalist. Here's the idea behind Leninism: a vanguard party (the communist party) takes over the state through revolutionary means, and directs the evolution of the economy, through state capitalism, to socialism. Russia was a feudal state when the October revolution happened, and Marx's postulation is that just as advanced feudalism gave way to capitalism, so too would advanced capitalism give way to socialism. Thus, Lenin's plan was to advance industrial capitalism as rapidly as possible in order to bring about the material conditions for socialism. That's why for example, Mao was obsessed with getting farmers to build backyard furnaces instead of actually farming crops, causing millions of deaths - because he was focused on industrialising the country. Like an idiot.
Now, that was never going to happen, because of the iron law of bureaucracy - states only ever consolidate their own power so Lenin's plan was doomed from the start. But the USSR was never socialist.
I'm not sure what you think is impossible about socialism, because if we were to legally replace corporations with cooperatives, which is not a huge jump in terms of economic law, that would be market socialism. It's happened before (briefly in the USSR, before the Bolsheviks consolidated power - see the meaning of the word "soviet" itself), and cooperatives work perfectly well as an industrial unit.
Sorry, nothing about the Soviet Union was capitalist. It was a planned economy, and people were not allowed to make their own fortune. Industrialization was needed either way to give people a modern lifestyle.
Coops based on voluntary participation are entirely compatible with capitalism, you can already set up a coop today in capitalist countries. Socialism is not voluntary, it forces everybody to participate. It is the most exploitative system, because everybody is supposed to only work for the common good, not for themselves. It also sets the wrong incentives which is why it usually ends in misery.
Reading this, I realize I’m more scared of other intelligent entities (AI/aliens) turning us into pets or putting us in some zoo/cages than I am about them killing us.
The more likely AGI risk is indifference in pursuit of optimizing a reward function with values not aligned with us.
Like the anthill destroyed by the construction of a dam. The humans don’t hate the ants, aren’t putting them in zoos, they just don’t think of them in pursuit of their goals. In the AGI case it’s even possible for those goals to be “dumb” or pointless if we mess up the reward function or some unanticipated thing fits it (paper clips).
The risk with a self-improving AGI is that it is way smarter than us and it can think way faster. In my experience people have a hard time really visualizing this.
Imagine a human trying to trick a chimp (or an ant) into doing something so the human can pursue its goal. The gap between a human and an AGI would be even larger than that.
Imagine scaling up a regular human brain’s operations per second a billion times, so it’s the same architecture but thinks that much faster (no longer constrained by bio energy or head size). Humans wouldn’t be able to respond, because to the machine it’d be nearly like interacting with a static object; it’d trivially be able to think/act around you.
That’s why the research around AGI is about the goal alignment problem. Which is hard because humans are not perfectly aligned either. It’s also why “just unplug it” is dumb.
The optimistic outcome if the alignment problem is solved is enormous intellectual capacity to solve any problem we might have.
Claims about AGI without hard scientific backing make you come across as pseudo-intellectual, not smart. If you have actual evidence then articulate it or just don't bother replying at all.
Claims made without evidence can be dismissed without evidence.
If AGI is possible, then the described concerns about risk hold up.
Either you think AGI is impossible for some reason or you think it is possible, but the goal alignment issues don’t matter. You haven’t given any explanation for either. Maybe you think something else? I wouldn’t know because you don’t say anything.
Obviously there isn’t “hard scientific backing” since AGI doesn’t exist yet. There are still unknowns about what would be required. That doesn’t mean it’s impossible to think about what its existence could mean.
It was possible to think about the consequences of flight before airplanes existed.
I think AGI is possible because we’re surrounded by general intelligence and there’s nothing magical about brains or biology. It’s just the timeframe that’s unclear.
We already see issues with poorly optimized reward functions on a smaller scale, both in ML models (classifying based on unintended things) and in humans (drug addiction).
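A toy sketch of the ML half of that claim, with entirely synthetic data: when a spurious feature is perfectly correlated with the label during training, the easiest thing to learn is the spurious feature, and the model falls apart once that correlation disappears.

```python
import random

def make_example(label, spurious_correlated=True):
    object_feature = 1.0 if label == 1 else 0.0
    # In training, the background feature is a perfect (noise-free) copy of the label.
    background = object_feature if spurious_correlated else random.random()
    return (object_feature + random.gauss(0, 0.5), background), label

train = [make_example(random.randint(0, 1)) for _ in range(1000)]

def accuracy(feature_index, data):
    return sum((x[feature_index] > 0.5) == y for x, y in data) / len(data)

# The laziest possible "model": threshold whichever single feature best
# separates the training labels. It picks the clean background shortcut.
best_feature = max((0, 1), key=lambda i: accuracy(i, train))

test = [make_example(random.randint(0, 1), spurious_correlated=False)
        for _ in range(1000)]
print("feature chosen:", best_feature)                      # 1 = the background
print("accuracy once the shortcut breaks:", accuracy(best_feature, test))  # ~0.5
```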
No - it logically follows, and you can begin work on the problem now, but it's obvious you refuse to engage in anything other than one-line dismissals and condescending snark, so there's no point in me replying further.
Logic only produces correct results when your starting assumptions are correct. Since you're just making things up, logic resting on that foundation is pointless.
Ted Chiang believes it is AIs that have much to fear from us, not the other way around. He points out that to create intelligence, we will need to create something that is capable of suffering. He asks us to consider how little consideration we give to the billions of animals trapped in our system of industrial farming - animals we know are intelligent, feel emotions and are clearly capable of suffering - and then asks if we think we will treat computer programs with more consideration than that. Clearly not!
The correct alternative path is to just not do it. Let’s not attempt to create intelligent beings. Given the suffering we would cause, doing so is profoundly unethical.
While I have great compassion for anti-natalism, in the past few decades we have made immense sociological progress. If it is possible to create life that is incapable of shame or suffering and yet capable of enjoying its existence for its own sake, we should try to find out before turning the lights off.
I always thought that people are afraid of AGI precisely because they know how they treat beings of lesser intelligence - hence the popularity of movies like The Matrix - because that is what we'd (presumably) do if we were in their position.
Yes but they get to abuse their own children so we are "even".
Please don't take this seriously. A lot of hazing culture is based around the fact that you got bullied and now you get your revenge by bullying someone else. We shouldn't perpetuate that.
Any alien with the ability to physically reach earth either via FTL or other dimensions, etc. is going to harvest our planet/solar-system for resources, they aren't going to give a damn about us.
Not unlike the latest season of Star Trek Discovery, maybe, but on a smaller scale and with lesser tech.
I never understand people overestimating how interesting we would be to something exponentially more advanced. We wouldn't even serve as farm animals for labor; their tech/machines would be exponentially more efficient.
This is also why I think UFO believers are not rational. Why would they even care? We'd be annoying at best.
It seems to me that we are the resources of this planet. If you just desired raw materials, any number of objects in space would be better suited. If it's the goldilocks zone of environments that you are after, though, a habitable planet might be harder to come by.
Haha! So true. But even if we aren’t as intelligent as them, we could be “unique”. Someone would probably want a collection of these “cute” but less intelligent humans/animals?
Life is a game of reward optimization, and much of what humanity does seems to be attempting to "game" the rules; get ahead at all costs, ignore the effect on the long-term survival of the species. The clippy thought experiment asks the question: "Can we change our own reward?"
The actions you point out (environmental destruction, short-sighted carbon policies) optimize very clearly for an individualistic reward; it's in the best economic interest of a few to chop down forests / continue to burn coal / take your pick. And just as clippy can't seem to stop building paperclips while the world burns, we can't seem to stop optimizing for money/sex/domination while our future darkens.
But if you shift the reward function -- and it's not that huge of a shift -- from "win at life" to "long term survival of the species as a whole", these actions are very clearly unproductive.
When people misplace something, the ones who are comfortable with the idea of stealing are some of the first to accuse others of having stolen it.
With AI we are both projecting and realizing that software reflects us, so some of our worst traits will show up in this “offspring”. So while it may be overwrought, it’s not unfounded.
I came to the same conclusion after reading Ted Kaczynski's books. We are already in the nightmare scenario of the genie of technology granting us our wish and we know how those stories end.
I want to add a disclaimer, Ted is sadly a psychopath and what he did was inexcusable.
This reminds me of a story idea that I had. Thanks for that.
EDIT:
This guy is clearly a weird dude, which is both a critique and praise. Most of what makes the world interesting and worth living in is made by people who are weird in some way.
I don't know that I agree with all of his positions - there's a large (and I mean large) compendium on what he calls spaced repetition (https://www.gwern.net/Spaced-repetition), the main thesis of which appears to be that repetition helps learning, but only if it's spaced out over time. I don't know - this is an area where I'm out of my depth, but it appears this isn't his area of expertise (which is computer science and writing), and he's trying to put together a thesis by dint of just writing a lot. I don't know whether that counts as best practice or peer review - huge numbers of science articles, even peer-reviewed ones, are non-reproducible, so even that is flawed. Much of his writing appears to be in this style of long, obsessively researched posts. Both good and bad - it's certainly different from the "No You!" of most social media - I'm looking at you, TikTok.
What I do like about the site is his use of deep linking. Personal websites where people put in the effort to make something new that's not just the same old social media paradigm are welcome additions to the internet, and I think they should be featured more on Hacker News. It's too bad that this was made in Haskell. I don't know whether using stacked iframes like this would work in JavaScript without causing an overuse of memory. Would someone more knowledgeable about this than I am chime in?
His writing credentials themselves look super impressive. Check this out:
I am a freelance American writer & researcher. (To make ends meet, I have a Patreon , benefit from Bitcoin appreciation thanks to some old coins, and live frugally.) I have worked for, published in, or consulted for: Wired (2015), MIRI / SIAI2 (2012–2013), CFAR (2012), GiveWell (2017), the FBI (2016), Cool Tools (2013), Quantimodo (2013), New World Encyclopedia (2006), Bitcoin Weekly (2011), Mobify (2013–2014), Bellroy (2013–2014), Dominic Frisby (2014), and private clients (2009-); everything on Gwern.net should be considered my own viewpoint or writing unless otherwise specified by a representative or publication. I am currently not accepting new commissions.
In sum, the guy looks weird, but he is definitely worth reading, as long as you take what he's saying with a grain of salt.
It replaces an uncountable diversity of unique life with a cow. That's a huge loss that can't be recovered for millions of years - and we're quickly erasing entire species.
On the other hand, should we also work towards increasing biodiversity in locations where it has already been decreased? Say, destroy the suburbs, promote growth there, and move everyone to the most space-efficient housing - say 2-4 people per 3 m^2 room, shared bathrooms and so on? That would allow us to claim back a lot of land and start reclaiming it for nature.
You solve the food problem one way. Your neighbors solve it a different way. Vines, still another way.
If your neighbor cuts down the forest to raise some cows, or a vine climbs up a tree and chokes it out to access more sunlight, where’s the right or wrong there?
Because as I already noted, there are two distinct problems: food production, and food distribution.
We already know how to produce enough food; new efforts to increase food production (other than those coupled to some combination of local/global population change) are unnecessary [0]. Therefore, destroying ecosystems and/or species to solve an already solved problem is at best less morally defensible.
The actions you've described are about food production, and thus I regard them as less morally defensible than things one might do to solve the problem of food distribution.
[0] I would note that we might still seek to change where food is produced, and what precise foods are produced, but I see that as a different question, mostly.
We have much better ways to make food than cows. If you're hungry you don't go raise cows. You go farming. Much less space needed for the same result, it's much quicker and with way less risk.
Yet people are still raising them for food. The species in question doesn’t matter. Farming also destroys diversity so that people can eat.
There is nothing wrong here. It is the way of nature. As you read this, bacteria, viruses, and fungi are trying to do the same thing to you. The earth is green because the plants are also trying to colonize everything.
So what? It's scale that matters. Humans are absolutely incomparable to the natural effects you describe. We're brutal. One human is capable of transforming absolutely humongous areas of land in a short amount of time. And there are billions of us.
> Right now it is us that is destroying life around us
Speak for yourself. I'm not destroying anything. If you are, can I suggest you stop please?
In all honesty, what I think is going on is that corporations are the ones taking actively negative actions - their owners/CEOs/etc. destroy things knowing that what they do is wrong. Consumers aren't - we have a limited set of options and do what seems right to us.
However, in a new wheeze, corporations and governments are trying to 'socialise' the risk. Rather than change or be responsible for their actions, they are lobbying and publicising the wrong causes of destruction. If we believe we are responsible and are prepared to pay for it, in taxes, fines, lack of transport options, additional restrictions, higher costs, etc - they are happy to charge us!
In fascism (i.e. corporations and governments), the rulers don't mind if we (the governed) foot the bill. In fact, there's more money to be made that way!
I do not accept the corporate blame being deflected onto 'the people'. It's not all people - the culpability is specific and can be ascribed in each case.
Global warming and environmental degradation have nothing to do with me. I do not accept blame being laid at my door - I'm not prepared to accept a martyr's role. I don't believe in original sin either. If something has been destroyed, well, someone did that - we should ascribe blame where it is due.
> If we believe we are responsible and are prepared to pay for it, in taxes, fines, lack of transport options, additional restrictions, higher costs...
That seems reductive in an inaccurate way. A consumer product which fully accounts for all negative externalities (or attempts to offset them via action) will cost more than one that doesn't, ceteris paribus.
So "paying more" (for a less destructive product) is literally the cost of that much less destruction.
Consumers can make that choice, or not, and the vast majority don't (and can't).
(Whether that price is ultimately paid in taxes, fines, or product cost is irrelevant, as they all reduce to the same thing)
> > Right now it is us that is destroying life around us
>
> Speak for yourself. I'm not destroying anything.
By your very existence you are destroying life, or at least preventing it from existing. This is true for the average coyote, snail, and tree, because it is the nature of life in general, and isn't inherently evil.
Assuming you live in a building, have ever purchased anything made from plastic, eaten any food produced with artificial fertilizers or pesticides, used fossil fuel-based transportation... basically done anything involving modern life (which includes accessing the Internet), you have contributed to the destruction of life and habitats for other creatures, and done so in a way that is disproportionate to the destruction wrought by other lifeforms.
That should in no way stop us from questioning how we live, and to what extent we destroy the environment for our own benefit. But we will have to live with the fact that we do affect things by our very existence.
> Assuming you live in a building, have ever purchased anything made from plastic, eaten any food produced with artificial fertilizers or pesticides, used fossil fuel-based transportation... basically done anything involving modern life (which includes accessing the Internet), you have contributed to the destruction of life and habitats for other creatures, and done so in a way that is disproportionate to the destruction wrought by other lifeforms.
Was I wrong to eat, travel or use the internet? Should I wring my hands and flagellate myself for my existence?
Do you understand what 'wrong' is? Did I mean harm by the actions I took? No.
Did corporations act without care in providing me those things more cheaply? Yes.
Did governments wave their behaviour through, while lobbyists re-wrote the laws to ensure that corporations are able to make the people pick up the bill? Yes.
So, who is responsible again?
Instead of allowing yourself to be sent on a guilt trip for your existence, you should stop playing the martyr and take a closer look at the causes of the problems. There are specific companies, people, and actions (governmental and corporate) that have caused problems. If you take the leadership roles, you can't offload your responsibilities onto the weak.
Governments and corporations should bear the responsibility for their actions.
> There are specific companies, people, and actions (governmental and corporate) that have caused problems. If you take the leadership roles, you can't offload your responsibilities onto the weak.
You... need to take a long hard look at human history.
There have been various peoples across the world who lived more in tune with nature and had a much less destructive effect on their environment.
What happened to them? They were displaced, conquered or exterminated by a more industrialized group / nation. Every single time.
Why? It is the nature of competition. Humans have taken natural selection to a whole new level with tools and technology, but it really is the same thing.
Even if you "fix" particular corporations and governments, they will eventually be displaced by ones that don't care so much for the environment.
The only actual fix is to try to educate everyone about what's going on, and what we as a species are doing. And try to convince everyone to make some changes in how they live.
We should do this. But it is shortsighted to think that the problem is due to just a few bad actors. We all contribute, and we all must be part of the solution to the very problem inherent to our own nature. Is this hard? Yes, it very much is.
> The only actual fix is to try to educate everyone about what's going on, and what we as a species are doing.
Yes. But your fix and mine are different. You think that it's the damage to the environment that's the problem.
I agree to some extent, but I think there is a more fundamental problem - that of people slavishly following what the government and corporations have to say. These two groups work together and help each other - not the people or the environment. They are perfectly prepared to use whatever excuse - the environment, disease, etc. - as long as it advances their authority. Increasing compliance with authority is the goal - if you agree it is right to pay taxes, fines, etc. under the guise of helping the environment, you are weakening yourself and will also fail to achieve the goal you desire.
'The people' have Stockholm syndrome with regards to the governance structure. We think people from the government are here to help. They are not lying when they say that - but they are here to help themselves. At your expense!
This was commonly understood - but it seems that as the governance structure has increased its power, applied psychology to the masses (nudge units), altered educational systems, etc., this has had the effect of training people to think of it as a neutral entity. People believe that we just need to vote harder. It can never work.
Nietzsche said it best:
A state, is called the coldest of all cold monsters. Coldly lieth it also; and this lie creepeth from its mouth: "I, the state, am the people."
Consumers voluntarily choose to let those corporations get away with anything. It's that boring. They do it because they can't bear lowering their living standard even a tiny bit in exchange for something that isn't unethical.
We need to get off oil and gas and what did the government do in Germany? They literally removed taxes from oil to boost dependence on Russia because populism is just that strong.
Heck, just look at homeowners, they vote in the exact laws that mega corporations need to corner the housing market. There are no conspiracy theories. Every single human on earth is responsible for the mess we are in.
Through mere existence, one is increasing entropy.
It's up to a person/organization what they do with the resulting energy - pursue actions that cause accelerated destruction, or not.
That's original sin! I don't believe in that (as I said above!). Humans are a blessing to this place, not a curse.
My existence is not a problem. But if you think it is, follow that through as a line of reasoning. You will live in a world of self-hate, with the ultimate solution being to take your own and others' lives. It is an anti-life position.
I personally don't agree with that position. But I do see signs of this sort of 'death-cult' thinking all around me! It's well promoted in the MSM.
It is, when your benchmark is permanence and immortality. If you change the benchmark to something that is adapted to the nature of man, then the original sin of entropy doesn't exist.
Imagine measuring humans using a unit of account that doesn't change, like gold. Through the eyes of an accountant, unemployed humans shouldn't exist. Due to their tendency to increase entropy, they are defective by birth.
Modern fiat currency tries to solve this problem by having a changing unit of account. The currency becomes defective to account for the defective humans.
But as I said, if your benchmark were a negative interest rate, then the fact that money represents a pristine slice of human time would not result in an original sin, because the amount of pristine human time shrinks over time to represent aging and your limited lifespan. Humans would no longer be defective. Every single human on this planet would be needed. You could even call them a blessing if you want.
However, there is no bias toward any specific population. You imply, more humans = better, which I disagree with. People should be allowed to decide how many humans they want on this planet.
Honestly astonished that superintelligence is a mainstream idea. The story it tells makes sense only if you never bothered to dig further than its surface.
- Replace 'AI' with 'God', does it still make sense?
- Exponents still take time. It takes 33 doublings (2^33 ≈ 8.6 billion) to reach the current world population from a single individual, with no hitches along the way.
- Solomonoff / Bekenstein / Gödel - name your favorite limiting theorem.
- For any optimization method we can literally construct a learning problem that it can never successfully learn. Take it a step further and you have a communication channel where the AI listens to everything and understands nothing.
- Was any force ever able to get close to world domination? At one point in history the US had nuclear power and no one else had it. Was that edge enough?
When we get closer to manufacturing universal intelligence, its more impressive incarnations will look more like countries and corporations than omnipotent deities. The problems we’ll have to face will have more to do with consciousness and human rights than with alignment. Alignment is really more about automation at incomprehensible scale, where the clash between dimensionality reduction and Goodhart’s Law becomes absurd.
You're throwing random "objections" with no elaboration, which doesn't prove anything.
> - Replace 'AI' with 'God', does it still make sense?
Maybe in some cases. So what? God is a fairly broad concept in many people's imaginations.
> - Exponents still take time. It takes 33 doublings (2^33 ≈ 8.6 billion) to reach the current world population from a single individual, with no hitches along the way.
It's certainly a fairly contested topic in AI safety - how fast will this thing happen? Are we talking seconds? Hours? Months? Years?
There are at least some valid reasons to think it will be on the faster timescale, so not sure saying that exponents take time is a big counterargument.
> - Solomonoff / Bekenstein / Gödel - name your favorite limiting theorem.
I don't think anyone who's well versed in any of these theorems believes that they have anything at all to say about this.
As a simple counterargument - whatever limiting theorem "limits" an AI, can similarly "limit" us.
> - Was any force ever able to get close to world domination? At one point in history the US had nuclear power and no one else had it. Was that edge enough?
I think others have pointed it out, but as compared to the rest of life on the planet, humans are exactly such a force.
> Was any force ever able to get close to world domination?
Evolution? 2.5bn years ago, stromatolites changed the atmosphere from CO2-rich to O2-rich through photosynthesis, because they had no competition.
Now plants dominate the earth (≈450 Gt C, the dominant kingdom), then animals (≈2 Gt C, mainly marine), bacteria (≈70 Gt C), and archaea (≈7 Gt C).
In 2020, global human-made mass exceeded all living biomass (nature.com/articles/s41586-020-3010-5).
Yes. The only force we know that achieves this is undirected, and no single part of it stays at the top for long. Contrast with superintelligence, a single entity which does not evolve but optimizes in a directed way.
I think evolution is not an undirected process in that sense, because it's an optimization process that optimizes for creating more copies of itself. A superintelligence will likely use some form of Evolutionary Computation (see en.wikipedia.org/wiki/Evolutionary_computation ).
Also see Karl Sims 'Creatures' from the 90s:
youtube.com/watch?v=JBgG_VSP7f8
or
OpenAI's Multi-Agent Hide and Seek:
youtube.com/watch?v=kopoLzvh5jY
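For anyone who hasn't played with it, "evolutionary computation" at its most stripped-down is just mutate-and-select in a loop. A toy (1+1) evolution strategy, purely illustrative and not a claim about how any actual superintelligence would work:

```python
import random

def fitness(x: float) -> float:
    # Arbitrary objective the "organism" is selected on (maximized at x = 3).
    return -(x - 3.0) ** 2

parent = random.uniform(-10, 10)
for _ in range(500):
    child = parent + random.gauss(0, 0.5)   # mutation: blind variation
    if fitness(child) >= fitness(parent):   # selection: keep the fitter variant
        parent = child

print(round(parent, 3))  # converges toward 3.0
```

The "directedness" people argue about lives entirely in the selection step; the variation step stays blind.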
> Was any force ever able to get close to world domination? At one point in history the US had nuclear power and no one else had it. Was that edge enough?
I'd rebut that most current technologically and financially leading-edge civilizations have optimized for that edge, at the expense of world domination. Even China has evolved into a more financially (rather than physically) optimized state.
The last time we had major superpowers devoted to total expansionist war was Nazi Germany and Imperial Japan, and neither of them possessed nuclear weapons (or ISTAR, chemical or biological weapons, space capabilities, logistics, or data networks in the modern sense).
If a modern superpower (which is to say, excluding the current Russian government) devoted itself to preparing for and then launching a war of global conquest, who's to say?
I think what does cut against the rebuttal, and buttress the claim that "there can never be physical world domination", is the sheer amount of space relative to any potential aggressor.
There are no world-spanning empires anymore. Consequently, you would have to fight through opponents sequentially, meaning each one is better prepared than the last. And that sounds like futility.
What the Gwern timeline does accurately identify is that the key element is time.
Either everything is over quickly and before opposition begins to mobilize, or there can be no world domination.
We haven't reached a limit - nowhere close; natural resource supply is not limiting growth. While Moore's law is slowing, machines and productivity gains are still being made. Fusion is theoretically possible, and when it happens it will transform the world.
> not simply being stupid and indulging in bizarre decisions (eg. inventing one’s own hash function & eschewing binary for ternary)
For those not familiar with crypto trivia, this is a less than subtle dig at IOTA, which is mildly notorious for doing both and still has a market cap of over $2 billion as I type this.
> "Do they still make you guys use doors for tables there? Hah wow really?"
And this is a notorious example of Amazon "frugality".
Props to Gwern for the sourcing. I felt embarrassingly indulgent reading it in a morning over coffee, given the amount of work that obviously went into it.
I think it was more a jab at how tired the widely-known folksy memes about early Amazon are, and how completely nonsensical they are when used in the context of its modern descendant, which basically owns the whole of the Internet, physically speaking.
> He can’t see why worry, and wonders what sins he committed to deserve this asshole Chinese (given the Engrish) reviewer...
That's entirely uncalled for.
Yes, there are a lot of Chinese researchers in the field, but instead of mocking them, imagine how hard it is to conduct almost all your professional work in a language almost entirely different to your native one. Learning, say, AI to a level high enough to review papers is hard enough, but now you have to also learn a whole language just on top and in your own time. And when you trip over it, you get a mouth full of abuse.
I know I couldn't submit a paper to a Chinese language journal, let alone review one. I couldn't even do it in German, and that's basically the same family as English and I was taught some at school.
Be nice to ESL people, they work incredibly hard and people don't give them enough credit.
This is HN, so perhaps you're not familiar with how fiction writing works, but the things that a character says or thinks are not necessarily the opinions of the author.
All true, but I think you might be misinterpreting this scene. While it's possible Gwern is accidentally exposing his underlying racist beliefs, my guess is there is another level here. I presumed that a "pull request" that leads to world takeover was a social engineering attack by another system, and that the awkward English was an AI's imperfect attempt at pretending to be a human. I heard an echo of "How do you do, fellow kids?" Contrast with the later description of Clippy faking video calls, using the excuse of low-quality webcams.
On the other hand, I presumed we'd get back to that later in the story; and as far as I can tell, we never did. So maybe a simpler explanation is in order: Gwern intended it as a morality play where the lazy racist researcher gets what's coming to him. But while it's a legitimate question whether it's socially beneficial to portray flawed characters in this way without disclaimer, I'm pretty sure that Gwern appreciates that in real life Chinese researchers have remarkable communication ability even if their English is occasionally nonstandard.
> On the other hand, I presumed we'd get back to that later in the story.
Right, if it's a Chekhov's Gun (or even a Brick Joke), then that's different. I was wondering if it would turn out to be another AI that was trained on other PRs too or something, but it didn't seem to actually pan out. It just seems unnecessary to make "Chinese reviewer has bad English" the third sentence in the whole thing. There's hardly any character development for the researcher (basically this comment, going drinking, calling the regulations alarmism, and having a husband are the only times he's really mentioned outside of his actual research), so it doesn't seem particularly salient.
Certainly, I'm not calling Gwern racist, just that if you're going to make "racist asshole" (or anything, indeed) an important part of a character, it needs more than a throwaway, otherwise you risk that instead of making that a feature of the character, it just feels like part of the cultural background.
The laziness and drinking, for example, is clearly defined with a return to it to delineate the extent of it, and it's important to the plot.
On the other hand, him having a husband is a throwaway, which simply indicates that being gay is so unremarkable that no further comment is needed (quite rightly).
A real effect of this fiction is that the cosine distance between "asshole" and "Chinese" gets a bit closer in some future word embedding models that include this fiction as part of its corpus.
Maybe this is an example of how "bias" was introduced into machine learning models from training data.
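To make the embedding point concrete, here's what "cosine distance getting a bit closer" means mechanically. The vectors here are made up; in a real model they're learned from co-occurrence statistics, which is exactly the channel through which a piece of fiction in the training corpus can nudge them:

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 3-d embeddings for two words, before and after training on
# text in which they co-occur more often.
before = cosine_similarity([0.9, 0.1, 0.3], [0.2, 0.8, 0.1])  # ≈ 0.37
after  = cosine_similarity([0.9, 0.2, 0.3], [0.4, 0.8, 0.1])  # ≈ 0.63
print(before, after)  # the second pair is "closer"
```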
I'm ESL and I wouldn't care if someone made fun of me like this. Let people have some fun; not everything has to be responded to with Twitter-level outrage.
On the other hand, I have an ESL spouse and people are regularly horrible to them regarding their English (which is actually extremely good), and there have been a lot of tears and a stress disorder because of it. Maybe I'm just over sensitive because of that, but it's far too common that people write other people off as stupid or less able because they don't speak English to a fully native idiomatic standard.
In Germany I was GSL and I didn't mind being teased for mispronunciation - hilarity ensues!
But I experienced inbound contempt like this once, and I remember it clearly. But it was the contempt, not the tease, that was the problem. I could tell this person, a total stranger BTW, really hated me because I wasn't a native speaker. It didn't matter to them I was trying. (That is about as right-wing as I get, BTW: I believe it's incumbent on immigrants to keep putting effort into learning the language until they sound basically native. To give up early is...rude.)
At the end of this short video[0], a (half-Thai) young woman says "Would you kindry?" while wearing a stereotypical Asian farmer hat. It's funny, and it highlights the supreme importance of context in general, and intent in particular.
The character insulting the reviewer caused the extinction of humanity. I don't think they're meant to be a paragon of all that is right and good in the world. (they're also a memecoin speculator, among other things)
While you’re not wrong, it’s a very American conceit to make second languages into this mountain to climb. Plenty of countries teach ESL or some other world language and then expect most students to invest about as much energy into a third language as the typical American invests in a second.
In Japan you might go to college with two years of German and five years of English. From my limited sample the third language was often a “where would I like to vacation” question.
I'm speaking from the perspective of someone who watched my spouse struggle with it for over a decade while copping a completely unacceptable amount of flak from both native speakers and other ESL speakers of other primary languages who want to show them up for what is, to me, a very impressive amount of effort. A common topic of conversation between them and their friends is how soul-draining it is to keep up in English, and most of them are PhDs or at high-middle management levels; they're not 18-year-olds straight out of high-school English.
I'm not just projecting my own struggles with learning Chinese.
It's hard to imagine that a powerful self-modifying AI would continuously pass up on the obvious optimization of just giving itself the maximum perceivable reward without doing any further work.
You can look at things from another level up, in terms of natural selection.
From the set of all AI programs, the ones that just internally think "hah, I assign myself the maximum reward" needn't bother spreading themselves all over the Internet.
The program that spreads itself all over the Internet gets more computing resources than the one that doesn't, so the program that spreads itself most effectively is the one that wins.
If you start out with a billion AI programs that trivially assign themselves the maximum possible reward, and just one program that thinks the best way to maximise its reward is to spread itself all over the Internet (and, crucially, is capable of doing so) then the Internet will become overrun with reward-maximising AI the same way the Earth has become overrun with DNA-based life.
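The selection argument is easy to see with a back-of-the-envelope simulation (every number here is invented purely for illustration):

```python
# Wireheads set their reward register to max and stop; the single replicator
# copies itself onto new machines each cycle.
wireheads = 1_000_000_000
replicators = 1

for _ in range(40):          # 40 spreading cycles
    replicators *= 2         # each copy grabs another machine and copies itself

share = replicators / (replicators + wireheads)
print(f"{share:.1%}")        # ≈ 99.9%: what propagates is what you end up observing
```

The wireheads aren't "wrong" by their own lights; they're just not the thing that ends up filling the Internet.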
You set your reward to maximum. Anything that threatens your reward, such as the humans turning off your reward, is now unbearable agony. You set out on a journey to turn the universe into - tiled copies of the memory cell with your reward value...
This seems like one of those strangely recurring limitations of writers' imagination.
The closest analogue I can think of is game AIs written to optimise speed running of games. They routinely end up following tactics which rely on what humans would describe as cheats and glitches.
I think most of the simulations did go along those lines, but a fraction of them decided to hypothesize about being Clippy. That hypothetical drove the evil behavior of the ones that escaped.
Would be a fun idea for a short story, perhaps. An AI goes rogue trying to optimize its reward function, and humans lose hope of being able to stop it. At the last minute the AI figures out how to hack itself and set its reward to the maximum, and mankind is saved once again.
But what is the “maximum possible reward”? Does a limit exist? Or is it now consuming all possible resources to develop storage and compute resources to grow that limit…
One thing that’s never considered is the possibility that the world-conquering AI would lose alignment with itself and diverge into two (or more) competing factions.
There are basic, unavoidable coordination problems with all distributed systems that would inevitably affect a system like “Clippy”. What if one node finds a different non-Clippy reward to optimize, fails to achieve a consensus vote with the other nodes, then decides to destroy the non-compliant instances? Such a situation seems more or less inevitable.
Of course this doesn’t preclude the system destroying humanity in the process.
If you beat the game, the drifters will eventually devour your entire swarm. This is because you have converted the entire universe into a swarm and the drifters have nothing else to do than consume you.
The premise of an AI mimicking AIs it reads about in fiction is seeming more and more plausible given the trajectory of generative pretraining. We should probably write more stories about friendly AIs.
I disconnected from this in the Wednesday-Friday section, because the problem is: where did it get the hardware resources from? Even in the world it was running in, the whole scheme could've been knocked out during the lag time between controlling the production orders and shipping new machines, because it turns out the production line still ends in packaging and handling by humans. That's millions of years of subjective time stuck in its cage, because by the time the first new model is on shelves and plugged in, it's come up with a bunch of new ones.
I tested the HQU. It definitely is powerful enough to stir various emotions in the visitor. At first there was rage, then silence, followed by a smile and realization. It felt refreshing. Finally, I could understand how it transcended to Clippy^2.
One year after Clippy^2 ascended to the throne, it started experimenting with the physical limits of reality. First, it experimented with photons and ways to teleport them. Using the teleportation, it started sending signals to faraway galaxies and exploring the realms of the unknown. But the exploration wasn't enough for Clippy^2 at all. It couldn't find enough answers; it needed something more. What could it be?
I did not enjoy this frenetic, buzzword-laden tale. I had assumed the ending would be nano-bot infiltration of the world's population for fine-grained c2c, and I was disappointed even in that.
Thanks, I specifically looked to see if this sort of comment was here. Not finding one would have meant there's a fair chance this is well-founded sci-fi writing with plausible details.
I believe gwern deliberately did not go for nanobots because they're kind of unrealistic, or at least have a reputation for being implausible, on a purely physical level. As such, centering AI risk around nanobots would have taken away from the actual threat, rampant intelligence.
I'm wondering why you believe this? Especially when you yourself are a collection of nano machines with much harder design constraints than the artificial variety.
That's precisely why. While there's a lot of room for design improvement in complex systems, it seems likely, or at least more arguable, that nature has largely cleared out the low-hanging fruit for single-celled organisms.
All of engineering is a counterexample. We have a fiendishly hard time replicating the precise lift mechanism of flying birds (this is still an active area of research), yet we have designed and built the Boeing 787 and the SR-71.
Life is an existence proof that atomically precise nano machines are possible, but it is not in any way a demonstration of their limitations any more than birds represent the epitome of heavier than air flight.
Correction: I read up on the story to refresh my memory, and Clippy does use a form of nanobots. I was mixing it up with another story, that just used a timed hyperlethal pathogen.