On "group with preferences orthogonal to your own": the idea is you can give the members exactly what they want, and then independently get whatever you want as well. Since they're indifferent to the things you care about, you can choose those things however you please.
At least in the two most recent American elections (2016 and then the 2018 midterms) it seems like it was very much not the case that people were racing for the most focused benefits and most diffuse costs, but rather for the most efficient way to galvanize their voters, cost be damned.
I expect that politics in most places, and US Congressional politics especially, is usually much more heavily focused on special interests than the overall media narrative would suggest. For instance, voters in Kansas care a lot about farm subsidies, but the news will mostly not talk about that because most of us find the subject rather boring. The media wants to talk about the things everyone is interested in, which is exactly the opposite of special interests.
Also I am extremely skeptical that racial issues played more than a minor role in the election, even assuming that they played a larger role in 2016 than in other elections. Every media outlet in the country (including 538) wanted to run stories about how race was super-important to the election, because those stories got tons of clicks, but that's very different from race actually playing a role.
Or does the model already support this in a way that I don't notice?
Nope, you are completely right on that front, poor information/straight-up lying were issues I basically ignored for purposes of this post. That said, most of the post still applies once we add in lying/bullshit; the main change is that, whenever they can get away with it, leaders will lie/bullshit in order to simultaneously satisfy two groups with conflicting goals. As long as at least some people in each constituency see through the lies/bullshit, there will still be pressure to actually do what those people want. On the other hand, people who can be fooled by lies/bullshit are essentially "neutral" for purposes of influencing the political equilibrium; there's no particular reason to worry about their preferences at all. So we just ignore the gullible people, and apply the discussion from the post to everybody else.
I think you're fine on that front, or at least plenty good enough for me.
Mazes and Duality was last time, we're doing something different this time.
I think this is the RAND study cited there.
Still under development to a large extent, but my own research is intended to be alignment/foundations research, and makes some direct predictions about deep-learning systems. Specifically, my formulation of abstraction is intended (among other things) to answer questions like "why does a system with relatively little resemblance to a human brain seem to recognize similar high-level abstractions as humans (e.g. dogs, trees, etc)?". I also expect that even more abstract notions like "human values" will follow a similar pattern.
A good rule of thumb is one occupier per 50 subject-nation citizens
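Since the ratio invites some quick arithmetic, here's a minimal sketch of what it implies (taking the rule to mean one occupier per 50 citizens, i.e. the oft-cited figure of roughly 20 security personnel per 1,000 inhabitants; the function name and population figure are my own, purely illustrative):

```python
# Back-of-the-envelope estimate from the rule of thumb above:
# roughly one occupying soldier per 50 subject-nation citizens.

def occupiers_needed(population, citizens_per_occupier=50):
    """Estimate the occupation force implied by the rule of thumb."""
    return population // citizens_per_occupier

# An illustrative country of 25 million citizens would call for
# an occupation force on the order of half a million.
print(occupiers_needed(25_000_000))  # -> 500000
```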
Do you have a source on this? I'd be interested to read more on the subject, but don't really know where to look.
This is definitely an area where I'm not an expert at all and I'm just armchair speculating, so take it all with an awful lot of salt. That said, if I were going to write an essay on the subject, here are some rough notes on what it would say.
War is combat between groups. A large majority of the commentary on the subject focuses on the "combat" part, and applies in principle even to individuals trying to hurt/kill each other. But looking at the outcomes of major wars over the past ~50 years, it's the "groups" part which really matters. The enemy is not a single monolithic agent, they're a whole bunch of agents who have varying incentives/goals and may or may not coordinate well.
With that in mind, here's a key strategic consideration: assuming we win the war, how will our desired objective be enforced upon the individuals who comprise the enemy? How will the enemy coordinate internally in surrender, in negotiations, and especially in enforcement of concessions? What ensures that each of the individual enemy agents actually does what we want? I see three main classes of answer to that question: (1) we kill or expel everyone, so there's nobody left on whom to enforce anything; (2) the enemy has a strong central authority which survives the war, and that authority enforces the concessions on its own people; or (3) no such authority exists (or survives), so we have to build the enforcement institutions ourselves.
One of the major problems that Western nations have run into in the past half century is that we're in wars where (a) we don't just want to kill everyone, and (b) there is no strong central control of the opposition (or at least none we want to preserve), so we're effectively forced into the last scenario above. If we want to enforce our will on the enemy, we effectively need to build a state de novo. In some sense, that sort of war has more in common with policing and propaganda than with "war" as it's usually imagined, i.e. clashes between nations.
When we picture things that way, fancy weapons just aren't all that relevant. The hard part of modern war is the policing and eventual nation-building. For that project, tools like personalized propaganda or technological omniscience are huge, whereas aimbots/battlebots or supply-chain superiority serve little role besides looking intimidating - an important role, to be sure, but not one which will determine the end result of the war.
In bottleneck terms: the limiting factor in achieving objectives in modern war is not destroying the enemy, but building stable nations de novo.
I think the advancements in command and control tech that are likely to happen in the next 20 years are more important than everything else on this list combined.
This was one of the most interesting titles I've seen on a LW post in a while. I look forward to reading further posts in the series.
Were any conclusions unsupported?
There were a lot of places where I wondered about the process which produced some model. For instance:
Mayoral candidates are often selected in an internal tribal election after which all tribe members vote for the candidate, or candidates may negotiate an alliance of tribes for the election. In turn, the tribal supporters expect family members will receive positions in the municipality. As a result, personnel costs dominate municipal budgets, averaging 60-65% of their budgets to the detriment of capital costs...
Did this model come from talking to people and asking how government processes work? If so, how many people, how were they sampled, who collected these reports, how much interpretation/simplification of the narrative went into it? Or, if the model is coming from some data, how much data and where did it come from - e.g. if mayoral candidates are "often" selected in an internal tribe election after which "all" tribe members vote for the candidate, what kind of numbers are those really, and where do those numbers come from? Or when you say "personnel costs average 60-65% of budgets", what cities are we talking about, and during what time period?
In short: you've said what we think we know, but I'm unsure how we think we know it.
This doesn't necessarily need to be a book-length description of the methodology of every cited study, but at least I'd like to know things like "<people> collected election data on <N> cities in Jordan and found <numbers>, which they interpreted to mean <...>" or "<people> surveyed <N> citizens in the city of <blah> in a free-form fashion to understand how the processes of government are understood locally; the following model is their summary...". I'm not looking for rigorous statistics or anything, just a qualitative idea of where the information came from. For instance, the sentence "Mayors complain that they cannot accomplish objectives due to demands from their councils to hire relatives [Janine Clark]" is perfect - it tells me exactly where the information came from.