If people would never dream of sending a friend who tried coke to prison, or even the friend who sold that friend some of his stash, how do we end up with draconian drug laws?
There's really nothing left to explain here. They would never dream of sending their friend who tried coke to prison because they're friends. The same doesn't hold for strangers. Similarly, you'd probably let a friend of yours who just lost his home spend the night at your place, but not any random homeless person.
Installing the Hide YouTube Comments Chrome extension stopped my habit of reading and participating in YouTube's toxic comment section. Absolutely essential for mental hygiene if you suffer from the same habit but at the same time don't want to miss out on the great video content there.
ML can generate classical music just fine but can't figure out the chorus/verse system used in rock & roll.
This statement seems outdated: openai.com/blog/jukebox/
To me this development came as a surprise, and correspondingly prompted an update towards "all we need for AGI is scale".
I don't really know SC2 but played Civ4, so by 'scouting' did you mean fogbusting? And the cost is to spend a unit to do it? Is fogbusting even possible in a real life board game?
Yes. There has to be some cost associated with it, so that deciding whether, when and where to scout becomes an essential part of the game. The most advanced game-playing AIs to date, AlphaStar and OpenAI5, have both demonstrated tremendous weakness in this respect.
What does it have to do with Markov property?
The Markov property refers to the idea that the future depends only on the current state, so the history can be safely ignored. This holds for e.g. chess or Go; AlphaGo Zero could play a game of Go starting from any board configuration without knowing how it got there. It's not easily applicable to StarCraft because of the fog of war: what you scouted inside your opponent's base a minute ago but can't see right now still provides valuable information about the right action to take. Storing the entire history as part of the "game state" would add huge complexity (tens of thousands of static game states).
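To make the fog-of-war point concrete, here's a toy sketch (entirely my own illustration, not AlphaStar's actual state representation) of how the raw observation fails to be Markov, and how carrying a "last seen" memory as part of the agent's state patches it up:

```python
# Toy fog-of-war illustration: the raw observation forgets the enemy
# as soon as it leaves sight range, so an agent needs memory.

def visible(true_enemy_pos, our_pos, sight=2):
    """Under fog of war we only observe the enemy when it is close."""
    return true_enemy_pos if abs(true_enemy_pos - our_pos) <= sight else None

class MemoryAgent:
    """Augments the raw observation with the last confirmed sighting,
    restoring an (approximately) Markov state."""
    def __init__(self):
        self.last_seen = None

    def observe(self, obs):
        if obs is not None:
            self.last_seen = obs
        return self.last_seen

agent = MemoryAgent()
# The enemy walks away from us at position 0 and vanishes into the fog...
sightings = [visible(p, our_pos=0) for p in (1, 2, 5, 9)]
states = [agent.observe(o) for o in sightings]
print(sightings)  # [1, 2, None, None] -- raw observations lose the enemy
print(states)     # [1, 2, 2, 2]       -- memory keeps the scouted info
```

In chess or Go the board itself plays the role of `states`: nothing is ever hidden, so no extra memory is needed.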
Is fogbusting even possible in a real life board game?
Yes, see Magic: The Gathering for instance (it's technically a card game, but plenty of board games have card elements integrated into them). Or, replace chess pieces with small coin-like tokens whose identity is written on the down-facing side (this wouldn't work for chess in particular, because you can tell a piece's identity from the way it moves, but it could work for some other game with moving pieces).
BTW, what is RL?
RL stands for reinforcement learning; essentially all recent advances in game-playing AI have come from this field, and it's the reason it's so hard to come up with a board game that would be hard for AI to solve (you could always reconfigure the Turing test or some other AGI-complete task into a "board game", but that's cheating). I'd even guess it's impossible to design such a board game, because there is simply too much brute-force compute available now.
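For a sense of what RL looks like at its simplest, here's a minimal tabular Q-learning sketch (the chain environment and all names are my own toy illustration, not taken from any actual game-playing system):

```python
# Minimal tabular Q-learning on a toy 5-state chain: the agent starts
# at state 0 and gets a reward of 1 only for reaching state 4.
import random

N_STATES = 5          # states 0..4; reaching state 4 ends the episode
ACTIONS = [-1, +1]    # move left or right
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

def step(state, action):
    """Toy chain environment dynamics."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    done = nxt == N_STATES - 1
    return nxt, reward, done

random.seed(0)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for _ in range(500):                      # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        nxt, r, done = step(s, a)
        # Q-learning update: bootstrap from the best next action
        best_next = max(Q[(nxt, b)] for b in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = nxt

# The learned greedy policy should head right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)}
print(policy)
```

Systems like AlphaStar use vastly more machinery (neural networks, self-play, population training), but this update rule of trial, error, and bootstrapped value estimates is the common core.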
Great post. The last part is a major update to my model of how drug legalization opponents think about the issue. Perhaps, just like in the climate change debate, it's all value disagreements masked as factual disagreements.
Excellent question! Once again, late to the party, but here are my thoughts:
It's very hard to come up with any board game where humans would beat computers, let alone an interesting one. Board games, by their nature, are discretized and usually perfect-information. This type of game is not only solved by AI, it's solved by essentially a single algorithm. Card games with mixed-strategy equilibria, like poker, fare a little better; but although poker has been solved, the algorithm doesn't generalize to other card games without significant feature engineering.
If I were to design a board game to stump AIs, I would use these elements:
The last element in particular is a subtle art and must be used with caution, because it trades off intractability for RL against intractability for traditional AI: If the pattern is too rigid the programmer could just hard-code it into a database.
If we consider video games instead, the task becomes much easier. Dota 2 and StarCraft 2 AIs still can't beat human professionals at the full game despite the news hype, although they probably can beat the average human player. Some games, such as Chronotron or Ultimate Chicken Horse, might be impossible for current AI techniques to even reach average human performance on.
Planetside 2 is fascinating to me. It's one of a kind, not just in being an MMO shooter, but also in giving the player a sense of being part of something big and magnificent, collaborating not only with your small circle of friends but with hundreds of other people towards a common goal. This sort of exciting experience is otherwise only found in real-world projects (EVE Online and browser games notwithstanding; those are more spreadsheets than games to me), and I'm really starting to think this is a hugely neglected opportunity for the gaming industry. Who knows, maybe it will be the next big trend after Battle Royale? Although shooter games, with their chaotic and computationally expensive nature, are not the best fit for it; perhaps turn-based strategy games instead?
3) Am I, as a person, actually capable of making a positive difference in general or is my presence generally going to prove useless or detrimental?
To be blunt, I don't think you are making much of a positive difference in terms of changing the exploitative nature of the world, which you seem to be passionate about in your writing. I know it sounds terribly rude, but I couldn't find another way to put it short of treating your question as rhetorical.
I'm not saying you should stop doing what you're doing or that your work isn't valuable in general, any more than I'm saying athletes and theoretical physicists are morons because it's difficult to become a millionaire that way. It's just that in a world overflowing with competing memes, playing politics (in the broader sense of recruiting more people for your tribe) is not a low-hanging fruit. I would say the rationalist community isn't so much an army of generals with no soldiers to command as it is an army of recruiters with no jobs to offer (that is, if you conceive of rationality as a project rather than just an interest).
Is this something I can improve and if so, how?
Again, I'm not saying you should prioritize changing the world (over doing what you like and enjoy), but in case you want to, I'd say pick an EA cause (you probably know the details better than I do) and make an actionable plan. For example, if your preferred cause is AI alignment, enroll in a MOOC on AI. Less meta-level pondering, more object-level work.
Einstein and von Neumann were also nowhere near superintelligent; they are far better representatives of regular humans than of superintelligences. I think the problem goes deeper. As you apply more and more optimization pressure, statistical guarantees begin to fall apart. You don't get sub-agent alignment for free, whether the agent is made of carbon or silicon. Case in point: human values have drifted over time relative to the original goal of inclusive genetic fitness.
Dogs and livestock have been artificially selected to emphasize unnatural traits to the point that they might not appear in a trillion wolves or boars
I think you're overestimating biology. Living things are not flexible enough to accommodate GHz clock speeds or lightspeed signal transmission, despite evolution having tinkered with them for billions of years. One in a trillion is just about 40 bits, which isn't all that impressive; not to mention that dogs and livestock took millennia of selective breeding, and that's not fast in our modern context.
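To spell out the arithmetic: selecting one outcome out of a trillion equally likely ones conveys log2(10^12) bits. A quick check:

```python
import math

# Information content of a one-in-a-trillion selection event
bits = math.log2(10**12)
print(bits)  # roughly 39.9, i.e. about 40 bits
```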