Recent Discussion

This is a link to a question asked on the EA Forum by Aryeh Englander. (Please post responses / discussion there.)

Does the following seem like a reasonable brief summary of the key disagreements regarding AI risk?

Among those experts (AI researchers, economists, careful knowledgeable thinkers in general) who appear to be familiar with the arguments:

  • Seems to be broad (but not universal?) agreement that:
    • Superintelligent AI (in some form, perhaps distributed rather than single-agent) is possible and will probably be created one day
    • By default there is at least a decent chance that the AI will not b
... (Read more)
4Mark_Friedenbach16hThere are disagreements over approach (e.g. provably friendly vs. boxed "tool" AI), which I don't see on your list.

Valid. I was primarily summarizing the risk part though, rather than the solutions.

Reply to: Meta-Honesty: Firming Up Honesty Around Its Edge-Cases

Eliezer Yudkowsky, listing advantages of a "wizard's oath" ethical code of "Don't say things that are literally false", writes—

Repeatedly asking yourself of every sentence you say aloud to another person, "Is this statement actually and literally true?", helps you build a skill for navigating out of your internal smog of not-quite-truths.

I mean, that's one hypothesis about the psychological effects of adopting the wizard's code.

A potential problem with this is that human natural language contains a lot of ambiguity. Words can

... (Read more)

On the one hand this post does a great job of connecting to previous work, leaving breadcrumbs and shortening the inferential distance. On the other hand what is this at the end?

But one thing I'm pretty sure won't help much is clever logic puzzles about implausibly sophisticated Nazis.

I have no idea what this is talking about.

13quanticle5hI would say that you should consider yourself fortunate then, that you are living in a situation where most of the people surrounding you have your best interests in mind (or, at worst, are neutral towards your interests). For others in more adversarial situations, telling lies (or at least shading the truth to the extent that would be considered lying by the standards of this post) is a necessary survival skill.
2Viliam2hIn situations where others can hurt you, a clever solution like "no comment - because this is the situation where in some counterfactual world I would prefer to be silent" results in you getting hurt. (A few weeks ago, everyone in the company I am working for got a questionnaire from management where they were asked to list the strengths and weaknesses of their colleagues. Cleverly refusing to answer, beyond plausible excuses such as "this guy works on a different project so I haven't really interacted with him much", would probably cost me my job, which would be inconvenient in multiple ways. At the same time, I consider this type of request deeply repulsive -- on Monday I am supposed to be a good team member who enjoys cooperation and teambuilding, and on Tuesday I am asked to snitch on my coworkers -- from my perspective this would hurt my personal integrity much more than mere lying. Sorry, I am officially a dummy who never notices a non-trivial weakness in anyone, now go ahead and try proving that I do.) Also, it seems to me that in the real world, building a reputation as a person who never lies is more tricky than just never lying and cleverly glomarizing. For example, the reputation you keep building for years can be ruined overnight by a third party lying about you having lied to them. (And conversely, you could actually have a strategy of never lying... except to a designated set of "victims", in situations where there is no record of what you said, and who are sufficiently lower-status than you, so if they choose to accuse you publicly, they will be perceived as liars.)
7Said Achmiz5hFirst, some quick comments: 1. Good post; I mostly agree with all specific points therein. 2. I appreciate that this post has introduced me (via appropriate use of ‘Yudkowskian’ hyperlinking) to several interesting Arbital articles I’d never seen. 3. Relevant old post by Paul Christiano: “If we can’t lie to others, we will lie to ourselves” [https://sideways-view.com/2016/11/26/if-you-cant-lie-to-others-you-must-lie-to-yourself/] . All that having been said, I’d like to note that this entire project of “literal truth”, “wizard’s code”, “not technically lying”, etc., etc., seems to me to be quite wrongheaded. This is because I don’t think that any such approach is ethical in the first place. To the contrary: I think that there are some important categories of situations where lying is entirely permissible (i.e., ethically neutral at worst), and others where lying is, in fact, ethically mandatory (and where it is wrong not to lie). In my view, the virtue of honesty (which I take to be quite important indeed), and any commitment to any supposed “literal truth” or similar policy, are incompatible. Clearly, this view is neither obvious nor likely to be uncontroversial. However, in lieu of (though also in the service of) further elaboration, let me present this ethical question or, if you like, puzzle: Is it ethically mandatory always to behave as if you know all information which you do, in fact, know?
agai's Shortform

I actually think that 2020 could be the year of the Linux desktop

Linux has had the advantages it has for twenty years...so why now?

2Viliam3hIt's called progress. In my youth, we only had a bridge to sell you.

(Cross-posted from Facebook.)

0: Tl;dr.

  • A problem with the obvious-seeming "wizard's code of honesty" aka "never say things that are false" is that it draws on high verbal intelligence and unusually permissive social embeddings. I.e., you can't always say "Fine" to "How are you?" This has always made me feel very uncomfortable about the privilege implicit in recommending that anyone else be more honest.
  • Genuinely consistent Glomarization (i.e., consistently saying "I cannot confirm or deny" whether or not there's anything to concea
... (Read more)

I don’t really stand by the last half of the points above, i.e. the last ~third of the longer review. I think there’s something important to say here about the relationship between common knowledge and deontology, but that I didn’t really say it. I hope to get the time to try again to say it.

14Zack_M_Davis9hReviewReply: "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think" [https://www.lesswrong.com/posts/MN4NRkMw7ggt9587K/firming-up-not-lying-around-its-edge-cases-is-less-broadly]

CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we’ll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).

Topics that may be interesting include (but are not limited to):

  • Why we think there should be a CFAR;
  • Whether we should change our name to be less general;
  • How running mainline CFAR workshops does/doesn't relate to running "AI Risk for Computer Scientist" type workshops. Why we both do a lot of rec
... (Read more)
1Adam Scholl6hBen just to check, before I respond—would a fair summary of your position here be, "CFAR should write more in public, e.g. on LessWrong, so that A) it can have better feedback loops, and B) more people can benefit from its ideas?"
2mr-hire14hLeadership (as for instance leadership retreats are trying to teach it) is the intersection between management and strategy. Another way to put it: it's the discipline of getting people to do what's best for your organization.

Do you think that Elon doesn't get his employees to do what's best for his companies?

2Ben Pace17hHurrah! :D

This is a cross post from http://250bpm.com/blog:128.

Introduction

In the past I've reviewed Eliezer Yudkowsky's "Inadequate Equilibria" book. My main complaint was that while it explains the problem of suboptimal Nash equilibria very well, it doesn't propose any solutions. Instead, it says that we should be aware of such coordination failures and we should expect ourselves to fare better than the official institutions in such cases. What Yudkowsky is saying (if I understand him correctly) is that given that the treatment of short bowel syndrome in babies is stuck in an inadequate eq... (Read more)

This essay provides some fascinating case studies and insights about coordination problems and their solutions, from a book by Elinor Ostrom. Coordination problems are a major theme in LessWrongian thinking (for good reasons) and the essay is a valuable addition to the discussion. I especially liked the 8 features of sustainable governance systems (although I wish we got a little more explanation for "nested enterprises").

However, I think that the dichotomy between "absolutism (bad)" and "organically grown institutions (good)" that the essay creates needs

... (Read more)

The recent adversarial collaboration on spiritual experiences on Slate Star Codex includes this paragraph:

It was also discovered that people in the United States, Australia, the United Kingdom, and Scandinavia do not tend to share their spiritual experiences with others. Hood et al. wonder if this is why such spiritual experiences are thought to be uncommon (as fewer people in these societies might have heard reports of others’ spiritual experiences).

This naturally led me to wonder: what spiritual experiences have LessWrong readers had that they are willing to share, since the readers... (Read more)

3Answer by DanielFilan9hI once laid down on the floor of an empty bedroom, went through thinking of every thing and/or person and/or group of people I could think of, and thought about how excellent/beautiful/fitting they were, for something like an hour (not on purpose, it just sort of happened).
5G Gordon Worley III10hThread for mentioning past LessWrong posts that describe or mention what might qualify as spiritual experiences. One comes immediately to my mind: Val's "Kensho [https://www.lesswrong.com/posts/tMhEv28KJYWsu6Wdo/kensh]".

I believe I've had kensho experiences too. This easily meets the criteria of "spiritual experience" and "mystical perception", though it has no hallucinatory component.

Defining "Antimeme"

An antimeme is a meme with the following three characteristics:

  • Learning it threatens the egos and identities of adherents to the mainstream of a culture[1].
  • Learning the meme renders mainstream knowledge in the field unimportant by broadening the problem space of a knowledge domain, usually by increasing the dimensionality.
  • Mainstream wisdom considers detailed knowledge of the antimeme irrelevant, unimportant or low priority. Mainstream culture may just ignore the antimeme altogether instead.

I call these "antimemes" because they exhibit behavior opposite that of regular memes. The typical

... (Read more)
6Isnasene13hBecause cultures are nested within one another, it's interesting to posit that anti-memes can have their own anti-memes. For instance, ethically-motivated vegetarianism is an anti-meme for (most) meat-eaters but wild animal suffering is an anti-meme for (most) ethically-motivated vegetarians. Also note that the anti-meme of an anti-meme tends not to be a meme. This is a matter of dynamics. Since the meme culture is the default, a culture bonded to an anti-meme may only exist when the meme culture has not developed a way to dissolve the anti-meme. Thus, anti-memes for cultures bonded to anti-memes must be viewed as useless from the perspective of the meme-culture. Otherwise, the meme-culture would just use the anti-anti-meme to dissolve the anti-meme. Wild animal suffering is a good example of this. Even though people periodically bring up wild animal suffering caused by plant farming as a talking point against ethical vegetarianism, actually taking wild animal suffering seriously would be far more corrosive to the meme-culture than ethical vegetarianism (the anti-meme culture) would be. I also think some anti-memes might be culture-generic. For instance, utilitarian ideology looks a lot like the anti-meme for pro-social behavior. Even if utilitarianism is discussed relatively frequently (and periodically does get attacked as wrong), it checks all the boxes in practice: Utilitarianism, roughly speaking, equates saving the life of someone next door with saving the life of someone far away (which can easily be achieved relatively cheaply). This radically re-orients how moral virtue (i.e. egos and identities) would be assigned. Utilitarianism dramatically reduces the moral importance of being involved in your local community by broadening the problem of morality to people far away who need way more help. 
Moral circle expansion (in the sense of considering animals more seriously as moral patients) also does this and even renders local communities unimportant

I hadn't noticed utilitarianism and ethical vegetarianism check these boxes. I wrote this series hoping for exactly this kind of insight. Thanks!

Your comment on the cross-cultural application of utilitarianism makes this extra insightful. I have edited the original post to acknowledge that antimemes are not always culture-specific.

3Dustin14hThis seems...iffy.

To celebrate all the possibilities of humanity during these holidays, have a possible calendar of the year 12020 of the human era (link to full calendar here).

Minor fact: in the Gregorian calendar, the days of the week cycle exactly every 400 years, so the non-time-travellers among you can use this for 2020 as well...
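That 400-year cycle can be checked in a few lines. The sketch below assumes only the standard leap-year rules; it confirms that a full Gregorian cycle contains a number of days divisible by 7, which is why the weekday of any given date repeats every 400 years:

```python
import datetime

# A Gregorian 400-year cycle has 303 common years and 97 leap years
# (century years are leap only when divisible by 400).
days_per_cycle = 303 * 365 + 97 * 366
assert days_per_cycle == 146097

# 146097 is divisible by 7, so the weekday pattern repeats exactly.
assert days_per_cycle % 7 == 0

# Spot check: the same date 400 years apart falls on the same weekday.
assert datetime.date(2020, 1, 1).weekday() == datetime.date(2420, 1, 1).weekday()
print("weekdays cycle every 400 years")
```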

(previous holiday specials can be found here and here)

Do you happen to be making a reference to the Holocene calendar? (Which was popularized by this Kurzgesagt video.) It advocates resetting the zero-year to be 10,000 years earlier, placing it before most of human civilization.

Values Assimilation Premortem

In the past 3-4 years, I went through a prolonged and painful life crisis in which I systematically deconstructed my existing worldview and slowly moved away from Evangelical Christianity into something Rationalist or Rationalist-adjacent. In the past 4 months, I've started hanging around the Berkeley Rationality community and am now dating someone embedded therein. At this point my partner is still my main connection to the specific values and practices of the community, and given that my worldview is currently being fleshed-out, she has an outsized influence on what my future beliefs and val

... (Read more)

Thanks for the welcome!

This is super helpful. It sounds like you've lived the thing that I'm only hypothesizing about here. Hopefully "Can't wait for round three" isn't sarcastic. This first round for me was extremely painful, but it sounds like round 2 was possibly more pleasant for you.

I like the framework you're using now, and I'm gonna try to condense it into my own words to make sure I understand what you mean. Basically, you're trying to optimize around keeping the various and conflicting hopes, needs, fears, etc. within you at least relatively cool

... (Read more)
2wolverdude9hThanks for the tips! I suppose that large portions of The Sequences are devoted to precisely the task of critiquing arguments without requiring a contrary position. It's kind of an extension of a logical syntax check, but the question isn't just whether it's complete and deductively sound, but also whether it's empirically sound and bayesianly sound. It's gonna take me a while to master those techniques, but it's a worthy goal. Not 100% sure I can do it on the timeline I need, but I can at least practice and start developing the habits. I love reading about failure modes! Not sure why I find it so fascinating. Maybe it's connected to the perfectionism? Speaking of... I consider my greatest failure in life to be that I haven't failed enough. I have too few experiences of what works and what doesn't, I failed to make critical course-corrections because they lay outside my info bubble, and I missed out on many positive life experiences along with the negative ones.
What are you reading?

In my short-form, I write:

[...] This is way more obvious and way more clear in Inadequate Equilibria. Take a problem, a question and deconstruct it completely. It was concise and to the point, I think it's one of the best things Eliezer has written; I cannot recommend it enough.

Just finished Inadequate Equilibria. Now, I'm reading:

  • The Big Picture from Sean Carroll (which seems a really, really good companion to The Sequences.) I'm at chapter 17/50, and I'm really enjoying it so far; it's an ambitious book though!
  • In fiction I picked up UNSONG from Scott Alexander; I a
... (Read more)

Edited above comment with fuller details :)

LessWrong is currently reviewing the posts from 2018, and I'm trying to figure out how voting should happen. The new hotness that all your friends are talking about is quadratic voting, and after thinking about it for a few hours, it seems like a pretty good solution to me.

I'm writing this post primarily for people who know more about this stuff to show me where the plan will fail terribly for LW, to suggest UI improvements, or to suggest an alternative plan. If nothing serious is raised that changes my mind in the next 7 days, we'll build a straightforward UI and do it.

I'

... (Read more)
9Ben Pace17hThis all makes a lot of sense, I'm glad to hear you say it. I think that the option for 'score voting style' is quite good, we in fact were seriously considering doing something like that. I really like the idea of producing a visualisation as the user makes their votes up. That sounds delightful. Yeah. As I understand it, this just means that you sum the squares of the SV and QV votes, then linearly scale all the votes of one such that these two numbers are equal to one another. And then you've got them on the same playing field. And this is a trivial bit of computation, so we can make it so that if you're voting in SV but then want to move to QV to change the weights a little, when you change we can automatically show you what the score looks like in QV (er, rounded, there'll be tons of fractions by default). Instant Runoff seems to be optimising for outcomes about which the majority have consensus, which isn't something I care as much about in this situation. That said I don't fully understand how it would change the results.
3Jameson Quinn12h... such that the average for each of these numbers are equal, yes. I think that the way you said it, you'd be upscaling whichever group had fewer voters, but I'm pretty sure you didn't mean that. E Pluribus Hugo [http://www.thehugoawards.org/the-voting-system/], and more generally, proportional representation, have nothing to do with Instant Runoff, so I'm not sure what you're saying here.
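The normalization being discussed — scaling one group's ballots so the average sum of squared votes per ballot matches the other group's — can be sketched in a few lines. This is a hypothetical illustration of the scheme described in the thread, not the actual LessWrong implementation; the ballot representation (a list of per-post scores per voter) is an assumption:

```python
def normalize_sv_to_qv(sv_ballots, qv_ballots):
    """Scale score-voting (SV) ballots so the *average* sum of squared
    votes per ballot matches that of the quadratic-voting (QV) ballots.
    Using averages rather than totals means neither group is upweighted
    merely for having more voters."""
    def mean_sum_sq(ballots):
        return sum(sum(v * v for v in b) for b in ballots) / len(ballots)

    scale = (mean_sum_sq(qv_ballots) / mean_sum_sq(sv_ballots)) ** 0.5
    return [[v * scale for v in b] for b in sv_ballots]

# Example: an SV ballot twice as "loud" is scaled down to the QV budget.
scaled = normalize_sv_to_qv([[2.0, 2.0]], [[1.0, 1.0]])
print(scaled)  # [[1.0, 1.0]]
```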

The second paragraph in the linked post says:

Many people find the Hugo voting system (called “Instant Runoff Voting“) very complicated.
9Raemon18hThis is similar to what I was personally imagining, and what I think I'd personally want. When I went through the 75 posts myself, imagining voting for them, what I found was that I basically wanted to put each post into one of a few buckets, something like: 1. "no" – not a contender for book 2. "decent" – a pretty neat idea, or a 'quite good' idea that wasn't well argued for 3. "quite good" – some combination of "the idea is quite important; or, the conversation moved forward significantly; or, a neat idea was extraordinarily well argued for with excellent epistemics" 4. "crucial" – this is a foundational piece that I hope one day becomes 'canon' (I could imagine wanting to downvote posts, but in this case there weren't any I wanted to rank lower than 'no') One additional thing I kinda wanted out of this was the ability to flag (and aggregate data about) which posts had better or worse epistemic virtue. At first I thought of having two different voting scales, one for "value" and the other for "is this literally true, and/or did the author demonstrate thoughtfulness in how they considered the idea?" I was worried about the obvious failure mode, where e.g. OkCupid creates a "personality" and "attractiveness" scale, but it turns out the halo effect swamps any additional information you might have gleaned, and the two scales mapped perfectly. When I attempted to rate each post myself, what I found was I almost always ranked epistemics and importance the same (or at least it wasn't obvious that they were more than "1 point" away from each other on a 1-10 scale), but that there were a few specific posts I wanted to flag as "punching above or below their weight epistemically." I'm not quite sure if this is worth any additional complexity. A simple option is to leave a "comments" box for each post where people can explain their vote in plain English. I'm a little sad that doesn't give us the ability to aggregate information though. (A simple boolean, er, three
Naming Rooms

Growing up, the bedrooms in the house had clear names: Jeff's room, Rose's room, Alice's room, Rick and Suzie's room, the Au Pair room, and the guest room. But people have moved around a lot: later occupants of "my" room have included Rose, Stevie, then later me, Julia, and Lily, and then even later Alice, Alex, and their children. Other rooms had a similar range of people rotating through ("the Wyman St home for itinerant folk-dancing youth") and referring to rooms became really difficult.

Around this time last year we decided to give names to the rooms: England, Scotland, Wales, and ... (Read more)

For what it's worth, I tried something like the "I won't let the world be destroyed"->"I want to make sure the world keeps doing awesome stuff" reframing back in the day and it broadly didn't work. This had less to do with cautious/uncautious behavior and more to do with status quo bias. Saying "I won't let the world be destroyed" treats "the world being destroyed" as an event that deviates from the status quo of the world existing. In contrast, saying "There's so much fun we could ha... (Read more)

Humans Are Embedded Agents Too

Most models of agency (in game theory, decision theory, etc) implicitly assume that the agent is separate from the environment - there is a “Cartesian boundary” between agent and environment. The embedded agency sequence goes through a long list of theoretical/conceptual problems which arise when an agent is instead embedded in its environment. Some examples:

  • No defined/input output channels over which to optimize
  • Agent might accidentally self-modify, e.g. drop a rock on its head
  • Agent might intentionally self-modify, e.g. change its own source code
  • Hard to define hypotheticals which
... (Read more)
8johnswentworth17hYes and no. I do think you're pointing to the right problems - basically the same problems Shminux was pointing at in his comment, and the same problems which I think are the most promising entry point to progress on embedded agency in general. That said, I think "word boundaries" is a very misleading label for this class of problems. It suggests that the problem is something like "draw a boundary around points in thing-space which correspond to the word 'tree'", except for concepts like "values" or "person" rather than "tree". Drawing a boundary in thing-space isn't really the objective here; the problem is that we don't know what the right parameterization of thing-space is or whether that's even the right framework for grounding these concepts at all. Here's how I'd pose it. Over the course of history, humans have figured out how to translate various human intuitions into formal (i.e. mathematical) models. For instance: * Game theory gave a framework for translating intuitions about "strategic behavior" into math * Information theory gave a framework for translating intuitions about information into math * More recently, work on causality gave a framework for translating intuitions about counterfactuals into math * In the early days, people like Galileo showed how to translate physical intuitions into math A good heuristic: if a class of intuitive reasoning is useful and effective in practice, then there's probably some framework which would let us translate those intuitions into math. In the case of embedded-agency-related problems, we don't yet have the framework - just the intuitions. With that in mind, I'd pose the problem as: build a framework for translating intuitions about "values", "people", etc into math. That's what we mean by the question "what is X?".

Ooh, that is very insightful. The word-boundary problem around "values" feels fuzzy and ill-defined, but that doesn't mean that the thing we care about is actually fuzzy and ill-defined.

4G Gordon Worley III17hI agree and think this is an unappreciated idea, which is why I liberally link the embedded agency post in things I write. I'm not sure I'm doing a perfect job of not forgetting we are all embedded, but I consider it important and essential to not getting confused about, for example, human values, and think many of the confusions we have (especially the ones we fail to notice) are a result of incorrectly thinking, to put in another way, that the map does not also reside in the territory.

An introduction to a recent paper by myself and Ryan Carey. Cross-posting from Medium.


For some intellectual tasks, it’s easy to define success but hard to evaluate decisions as they’re happening. For example, we can easily tell which Go player has won, but it can be hard to know the quality of a move until the game is almost over. AI works well for these kinds of tasks, because we can simply define success and get an AI system to pursue it as best it can.

For other tasks, it’s hard to define success, but relatively easy to judge solutions when we see them, for example, doing a backflip. Getti

... (Read more)

This looks really interesting to me. I remember when the Safety via Debate paper originally came out; I was quite curious to see more work around modeling debate environments and getting a better sense of how well we should expect it to perform in what kinds of situations. From what I can tell, this makes a rigorous attempt at 1-2 models.

I noticed that this is more intense mathematically than most other papers I'm used to in this area. I started going through it but was a bit intimidated. I was wondering if you may suggest tips for reading through it and und

... (Read more)

Forgive me if some of this is repetitive, I can’t remember what I’ve written in which draft and what’s actually been published, much less tell what’s actually novel. Eventually there will be a polished master post describing my overall note taking method and leaving out most of how it was developed, but it also feels useful to discuss the journey.

When I started taking notes in Roam (a workflowy/wiki hybrid), I would:

  1. Create a page for the book (called a Source page), with some information like author and subject (example)
  2. Record every claim the book made on that Source page
  3. Tag each claim so
... (Read more)

Just realized the "it" in "I'm curious what it looks like." probably referred to "my DB", not "the feedback". I'd love to either user test my DB on you (you play with it while I watch) or have you beta test the description I'm writing, if you're interested.

2Elizabeth18hSee here [https://www.lesswrong.com/posts/PtcPKkxkJLu4QRTfY/epistemic-spot-check-unconditional-parenting#L7v6EHokEr5s7KrSA] and here [https://acesounderglass.com/2019/11/10/roam/comment-page-1/#comment-1308] for responses. One of those was in response to a book that did better on the "having a thesis" axis than "having evidence", so I don't think that's the problem. It seems plausible having a guide will help people, and that's on my list, but I'm aiming for a high level of polish so it's unfinished.

 

Quick context: Epistemic spot checks started as a process in which I did quick investigations of a few of a book’s early claims to see if it was trustworthy before continuing to read it, in order to avoid wasting time on books that would teach me wrong things. Epistemic spot checks worked well enough for catching obvious flaws (*cou*Carol Dweck*ugh*), but have a number of problems. They emphasize a trust/don’t trust binary over model building, and provability over importance. They don’t handle “severely flawed but deeply insightful” well at all. So I started trying to create something better

Be... (Read more)

9Elizabeth17hI had a pretty visceral negative response to this, and it took me a bit to figure out why. What I'm moving towards with ESCs is no gods no proxies. It's about digging in deeply to get to the truth. Throwing a million variables at a wall to see what sticks seems... dissociated? It's a search for things you do instead of dig for information you evaluate yourself.
3Liam Donovan17hWhat's the difference between John's suggestion and amplifying ESCs with prediction markets? (not rhetorical)

I don't immediately see how they're related. Are you thinking people participating in the markets are answering based on proxies rather than truly relevant information?

10Ben Pace17h"No Gods, No Proxies, Just Digging For Truth" is a good tagline for your blog.

Continuation of No Individual Particles
Followup to The Generalized Anti-Zombie Principle

Suppose I take two atoms of helium-4 in a balloon, and swap their locations via teleportation.  I don't move them through the intervening space; I just click my fingers and cause them to swap places.  Afterward, the balloon looks just the same, but two of the helium atoms have exchanged positions.

Now, did that scenario seem to make sense?  Can you imagine it happening?

If you looked at that and said, "The operation of swapping two helium-4 atoms produces an identical configuratio... (Read more)

This "explanation" leaves lingering doubt. It doesn't dissolve all the questions that I have about personal identity. Ok, I'm a factor in a subspace of an amplitude distribution: I get that and I'm okay with that. But there are still unresolved issues of anticipation.

Let's say I record in sufficient fidelity the amplitude distribution factor which represents "me" at this point in time. Then after I am dead some machine is used to recreate this amplitude distribution to sufficient fidelity as to re-create me, as I exist now. That person will come into being

... (Read more)

Off-topic riff on "Humans are Embedded Agents Too"

One class of insights that come with Buddhist practice might be summarized as "determinism", as in, the universe does what it is going to do no matter what the illusory self predicts. Related to this is the larger Buddhist notion of "dependent origination", that everything (in the Hubble volume you find yourself in) is causally linked. This deep deterministic interdependence of the world is hard to appreciate from our subjective experience, because the creation of ontology crea... (Read more)
