All Posts

Sorted by Magic (New & Upvoted)

Friday, June 5th 2020

Shortform
3Bob Jacobs7hContinuing my streak of hating on terms this community loves [https://www.lesswrong.com/posts/dq8rwmWXXQ4D4T6YD/should-we-stop-using-the-term-rationalist]. I hate the term 'Motte-and-bailey [https://en.wikipedia.org/wiki/Motte-and-bailey_fallacy]'. Not because the fallacy itself is bad, but because you are essentially indirectly accusing your interlocutor of switching definitions on purpose. In my experience this is almost always an accident, but even if it weren't, you still shouldn't immediately brand your interlocutor as malicious. I propose we use the term 'defiswitchion' (combining 'definition' and 'switch'), since it is actually descriptive, easier to understand for people hearing it for the first time, and doesn't indirectly accuse your interlocutor of using dirty debate tactics.

Thursday, June 4th 2020

Shortform
12Spiracular1dA PARABLE ON VISUAL IMPACT A long time ago, you could get the biggest positive visual impact for your money by generating art, and if you wanted awe you could fund gardens and cathedrals. And lo, these areas were well-funded! The printing press arrived. Now, you could get massive numbers of pamphlets and woodcuts for a fraction of the price of paintings. And lo, these areas were well-funded! Then TV appeared. Now, if you wanted the greatest awe and the biggest positive visual impact for your money, you crafted something suitable for the new medium. And in that time, there were massive made-for-TV propaganda campaigns, and money poured into developing spacecraft, and we got our first awe-inspiring images of Earth from the moon. Some even claim the Soviet Union was defeated by the view through a television screen of a better life in America. And then we developed CGI and 3D MMORPGs. And lo, the space program was defunded, as people built entire cities, entire planets in CGI for a tiny fraction of the cost!
6toonalfrink1dIbogaine seems to reset opiate withdrawal. There are many stories of people with 20-year-old heroin addictions being cured within one session. If this is true, and there are no drawbacks, then we basically have access to wireheading. A happiness silver bullet. It would be the hack of the century. Distributing ibogaine + opiates would be the best known mental health intervention by orders of magnitude. Of course, that's only if there are no unforeseen caveats. Still, why isn't everybody talking about this?
1Ariel Kwiatkowski21hLooking for research idea feedback: Learning to manipulate: consider a system with a large population of agents working on a certain goal, either learned or rule-based, but at this point - fixed. This could be an environment of ants using pheromones to collect food and bring it home. Now add another agent (or some number of them) which learns in this environment, and tries to get other agents to instead fulfil a different goal. It could be ants redirecting others to a different "home", hijacking their work. Does this sound interesting? If it works, would it potentially be publishable as a research paper? (or at least a post on LW) Any other feedback is welcome!
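A toy version of the setup above, to make the idea concrete: a population of rule-based agents follows the strongest "pheromone" signal, and a single manipulator deposits its own signal to redirect them to a different home. Everything here (the decay rate, deposit size, reward definition, and the fixed deposit standing in for a learned action) is hypothetical, not part of the original proposal:

```python
GOAL_A, GOAL_B = 0, 1  # the original home vs. the manipulator's home

def fixed_agent_policy(signal):
    """Rule-based agent: head toward the strongest signal."""
    return GOAL_A if signal[GOAL_A] >= signal[GOAL_B] else GOAL_B

def step(signal, deposit):
    """One environment step; `deposit` stands in for a learned action."""
    signal[GOAL_B] += deposit   # manipulator reinforces its own trail
    signal[GOAL_A] *= 0.95      # the original trail slowly decays
    choices = [fixed_agent_policy(signal) for _ in range(100)]
    return signal, choices.count(GOAL_B)  # reward: agents hijacked

signal, total_reward = [10.0, 0.0], 0
for _ in range(20):
    signal, reward = step(signal, deposit=1.0)
    total_reward += reward
print(total_reward)  # positive once the new trail overtakes the decaying one
```

In an actual experiment the deposit would be chosen by a learner (e.g. an RL policy), and the question is whether it discovers trail-hijacking strategies like this on its own.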

Wednesday, June 3rd 2020

Shortform
6eukaryote2dI have a proposal. Nobody affiliated with LessWrong is allowed to use the word "signalling" for the next six months. If you want to write something about signalling, you have to use the word "communication" instead. You can then use other words to clarify what you mean, as long as none of them are "signalling". I think this will lead to more clarity and a better site culture. Thanks for coming to my talk.

Tuesday, June 2nd 2020

Shortform
7toonalfrink3dI did all the epistemic virtue. I rid myself of my ingroup bias. I ventured out on my own. I generated independent answers to everything. I went and understood the outgroup. I immersed myself in lots of cultures that win at something, and I've found useful extracts everywhere. And now I'm alone. I don't fully relate to anyone in how I see the world, and it feels like the inferential distance between me and everyone else is ever increasing. I've lost motivation for deep friendships, it just doesn't seem compatible with learning new things about the world. That sense of belonging I got from LessWrong is gone too. There are a few things that LW/EA just doesn't understand well enough, and I haven't been able to get it across. I don't think I can bridge this gap. Even if I can put things to words, they're too provisional and complicated to be worth delving into. Most of it isn't directly actionable. I can't really prove things yet. I've considered going back. Is lonely dissent worth it? Is there an end to this tunnel?
1lc3dI just launched a startup, Leonard Cyber [https://leonardcyber.com]. Basically a Pwn2Job platform. If any hackers on LessWrong are out of work, here are some invite codes:

Monday, June 1st 2020

Shortform
3Bob Jacobs4dWith climate change getting worse by the day, we need to switch to sustainable energy sources sooner rather than later. The new Molten salt reactors [https://www.nextbigfuture.com/2019/05/seaborg-molten-salt-reactor-will-fit-on-a-truck-and-cost-less-than-coal-power.html] are small, clean and safe, but still carry the stigma of nuclear energy. Since these reactors (like others) can use old nuclear waste as a fuel source, I suggest we rebrand them to "Nuclear Waste Eaters" and give them (or a company that makes them) a logo in the vein of this quick sketch I made: https://postimg.cc/jWy3PtjJ Hopefully a rebranding to "thing getting rid of the thing you hate, also did you know it's clean and safe" will get people more motivated about these kinds of energy sources.
3Sherrinford4dYou would hope that people actually saw steelmanning as an ideal to follow. If that was ever true, the corona pandemic and the policy response seem to have killed the demand for it. It seems to have become acceptable to attribute just about any kind of seemingly-wrong behavior to either incredible stupidity or incredible malice, both proving that all institutions are completely broken.
1TruetoThis5dThere is a theory of "the path of least resistance" that implies that one should go with the flow. With that in mind, how do you continue to nurture the growth resulting from challenges? Does the rationale of the path of least resistance conflict with the challenges of life that are required for change?

Sunday, May 31st 2020

Shortform
6lsusr6d[BOOK REVIEW] SURFING UNCERTAINTY Surfing Uncertainty is about predictive coding, the theory in neuroscience that each part of your brain attempts to predict its own inputs. Predictive coding has lots of potential consequences. It could resolve the problem of top-down vs bottom-up processing. It cleanly unifies lots of ideas in psychology. It even has implications for the continuum with autism on one end and schizophrenia on the other. The most promising thing about predictive coding is how it could provide a mathematical formulation for how the human brain works. Mathematical formulations are great because they let you do things like falsify theories and simulate things on computers. But while Surfing Uncertainty goes into many of the potential implications of predictive coding, the author never hammers out exactly what "prediction error" means in quantifiable material terms on the neuronal level. This book is a reiteration of the scientific consensus[1]. Judging by the total absence of mathematical equations on the Wikipedia page for predictive coding [https://en.wikipedia.org/wiki/Predictive_coding], I suspect the book never defines "prediction error" in mathematically precise terms because no such definition exists. There is no scientific consensus. Perhaps I was disappointed with this book because my expectations were too high. If we could write equations for how the human brain performs predictive processing then we would be significantly closer to building an AGI than where we are right now [https://www.lesswrong.com/s/9FrMpyp7CDSNcNm4n/p/N594EF44CZD2aGkSh]. 1. The book contains 47 pages of scientific citations. ↩︎
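To illustrate the kind of quantitative definition the review is asking for: toy formalizations do exist in the literature (e.g. Rao-and-Ballard-style models), where "prediction error" is simply the residual between a unit's prediction and its input. A minimal illustrative sketch, not anything from the book, and not the consensus neuronal-level definition whose absence the review complains about:

```python
# Toy "predictive coding" unit: the unit holds an estimate mu, uses it
# to predict its inputs, and nudges mu down the gradient of the mean
# squared prediction error. Purely illustrative.
def settle(inputs, lr=0.1, steps=200):
    mu = 0.0  # the unit's current prediction of its input
    for _ in range(steps):
        errors = [x - mu for x in inputs]      # prediction errors
        mu += lr * sum(errors) / len(errors)   # reduce mean squared error
    return mu

print(settle([1.0, 2.0, 3.0]))  # converges toward the input mean, 2.0
```

The hard open question the review points at is what the analogue of `mu`, `errors`, and the update rule would be in actual neurons, in measurable units.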

Saturday, May 30th 2020

Shortform
10Ariel Kwiatkowski6dHas anyone tried to work with neural networks predicting the weights of other neural networks? I'm thinking about that in the context of something like subsystem alignment, e.g. in an RL setting where an agent first learns about the environment, and then creates the subagent (by outputting the weights or some embedding of its policy) that actually obtains some reward.
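This is close to what the "hypernetwork" literature does: one network outputs another network's weights. A minimal sketch of just the output-the-weights part, with all dimensions and the single-linear-layer architectures chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Subagent: a single linear policy layer, obs -> action scores.
OBS_DIM, ACT_DIM = 4, 2
N_SUB_WEIGHTS = OBS_DIM * ACT_DIM

# "Hypernetwork": maps an embedding of the environment to the flat
# weight vector of the subagent. Here it is itself one linear layer.
EMB_DIM = 8
hyper_W = rng.normal(0.0, 0.1, size=(N_SUB_WEIGHTS, EMB_DIM))

def make_subagent(env_embedding):
    """Turn an environment embedding into a subagent policy."""
    flat = hyper_W @ env_embedding           # predicted subagent weights
    W_sub = flat.reshape(ACT_DIM, OBS_DIM)
    def policy(obs):
        return int(np.argmax(W_sub @ obs))   # greedy action
    return policy

policy = make_subagent(rng.normal(size=EMB_DIM))
action = policy(np.ones(OBS_DIM))
print(action)  # an action index, 0 or 1
```

In the RL setting described, `hyper_W` would be trained end-to-end on the reward the spawned subagent obtains, which is exactly where the subsystem-alignment question bites.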

Friday, May 29th 2020

Shortform
3Draconarius7dHilbert's Motel improvement This hotel is two stars at best; imagine having to pack up your stuff every time the hotel receives a new guest. I've decided to fix that. The hotel still has infinite rooms and guests, but this time every other room is unoccupied, which prepares the hotel for an infinite number of new visitors without inconveniencing the current residents.
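The scheme is easy to make explicit: put the current residents in the even-numbered rooms, so the k-th new visitor takes the k-th odd-numbered room and nobody ever moves. A quick sketch (checking only a finite prefix, of course):

```python
def resident_room(n):
    return 2 * n          # current residents occupy the even rooms

def newcomer_room(k):
    return 2 * k - 1      # the k-th new guest takes the k-th odd room

# No newcomer ever lands in an occupied room, and no resident moves.
occupied = {resident_room(n) for n in range(1, 1000)}
assert all(newcomer_room(k) not in occupied for k in range(1, 1000))
```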
1__nobody8dObservation: It should generally be safe to forbid non-termination when searching for programs/algorithms. In practice, all useful algorithms terminate: If you know that you're dealing with a semi-decidable thing and doing serious work, you'll either (a) add a hard cutoff, or (b) structure the algorithm into a bounded step function and a controller that decides whether or not to run for another step. That transformation is not adding significant overhead size-wise, so you're bound to find a terminating algorithm "near" a non-terminating one! Sure, that slightly changes the interface – it's now allowed to abort with "don't know", but that's a transformation that you likely would have applied anyway. Even if you consider that a drawback, not having to deal with potentially non-terminating programs / being able to use a description format that cannot represent non-terminating forms should more than make up for that. (I just noticed this while thinking about how to best write something in Coq (and deciding on termination by "fuel limit"), after AABoyles' shortform on logical causal isolation [https://www.lesswrong.com/posts/JWeA8PHnRNQYGWw6Q/aaboyles-s-shortform?commentId=P3NmzPzKHpBXFFZbm] with its tragically simple bit-flip search had recently made me think about program enumeration again…)
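The transformation described above — a bounded step function plus a controller with a fuel limit that may abort with "don't know" — can be sketched like so, using the (open) Collatz iteration as a stand-in for a possibly-non-terminating program:

```python
def collatz_step(state):
    """One bounded step; returns (done, next_state)."""
    n = state
    if n == 1:
        return True, n
    return False, (n // 2 if n % 2 == 0 else 3 * n + 1)

def run_with_fuel(step, state, fuel):
    """Controller: run `step` at most `fuel` times.

    Returns the final state, or None for "don't know" when the fuel
    runs out -- the slight interface change mentioned above.
    """
    for _ in range(fuel):
        done, state = step(state)
        if done:
            return state
    return None  # neither terminated nor proven divergent

print(run_with_fuel(collatz_step, 27, 200))  # 1 (27 reaches 1 in 111 steps)
print(run_with_fuel(collatz_step, 27, 50))   # None (fuel exhausted)
```

The size argument in the shortform is visible here: `run_with_fuel` is a small, fixed wrapper, so a terminating variant of any program is never much larger than the original.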

Thursday, May 28th 2020

Shortform
17Raemon8dI had a very useful conversation with someone about how and why I am rambly. (I rambled a lot in the conversation!) Disclaimer: I am not making much effort to not ramble in this post. A couple takeaways: 1. Working Memory Limits One key problem is that I introduce so many points, subpoints, and subthreads that I overwhelm people's working memory (where the human working memory limit is roughly "4-7 chunks"). It's sort of embarrassing that I didn't concretely think about this before, because I've spent the past year SPECIFICALLY thinking about working memory limits, and how they are the key bottleneck on intellectual progress. So, one new habit I have is "whenever I've introduced more than 6 points to keep track of, stop and figure out how to condense the working tree of points down to <4." (Ideally, I also keep track of this in advance and word things more simply, or give better signposting for what overall point I'm going to make, or why I'm talking about the things I'm talking about.) ... 2. I just don't finish sente I frequently don't finish sentences, whether speaking in person or in text (like emails). I've known this for a while, although I kinda forgot recently. I switch abruptly to a new sentence when I realize the current sentence isn't going to accomplish the thing I want, and I have a Much Shinier Sentence Over Here that seems much more promising. But people don't understand why I'm making the leap from one half-finished thought to another. So, another simple habit is "make sure to finish my god damn sentences, even if I become disappointed in them halfway through." ... 3. Use Mindful Cognition Tuning to train on *what is easy for people to follow*, as well as to improve the creativity/usefulness of my thoughts. I've always been rambly. But a thing that I think has made me EVEN MORE rambly in the past 2 years is a mindful-thinking-technique, where you notice all of your thoughts on the less-than-a-second level, so that you can notice which tho
7Paul Crowley8dFor the foreseeable future, it seems that anything I might try to say to my UK friends about anything to do with LW-style thinking is going to be met with "but Dominic Cummings". Three separate instances of this in just the last few days.
