Wei_Dai

Wei_Dai's Comments

AGIs as populations

but when we’re trying to make claims that a given effect will be pivotal for the entire future of humanity despite whatever efforts people will make when the problem starts becoming more apparent, we need higher standards to get to the part of the logistic curve with non-negligible gradient.

I guess a lot of this comes down to priors and burden of proof. (I guess I have a high prior that making something smarter than human is dangerous unless we know exactly what we're doing, including the social/political aspects; you don't share that prior, so you think the burden of proof is on me?) But (1) I did write a bunch of blog posts, which are linked to in the second post (maybe you didn't click on that one?), and it would help if you could point out more specifically where you're not convinced, and (2) does the current COVID-19 disaster not make you more pessimistic about "whatever efforts people will make when the problem starts becoming more apparent"?

When you think about the arguments made in your disjunctive post, how hard do you try to imagine each one conditional on the knowledge that the other arguments are false? Are they actually compelling in a world where Eliezer is wrong about intelligence explosions and Paul is wrong about influence-seeking agents?

I think I did? Eliezer being wrong about intelligence explosions just means we live in a world without intelligence explosions, and Paul being wrong about influence-seeking agents just means he (or someone) succeeds in building intent-aligned AGI, right? Many of my "disjunctive" arguments were written specifically with that scenario in mind.

AGIs as populations

For now my epistemic state is: extreme agency is an important component of the main argument for risk, so all else equal reducing it should reduce risk.

I appreciate the explanation, but this is pretty far from my own epistemic state, which is that arguments for AI risk are highly disjunctive; that most types of AGI (not just highly agentic ones) are probably unsafe (i.e., are likely to lead us away from rather than towards a success story); and that at best only a few very specific AGI designs (which may well be agentic if combined with other properties) are both feasible and safe (i.e., can count as success stories). So it doesn't make sense to say that an AGI is "safer" just because it's less agentic.

Having said that, I also believe that most safety work will be done by AGIs, and so I want to remain open-minded to success stories that are beyond my capability to predict.

Getting to an AGI that can safely do human or superhuman level safety work would be a success story in itself, which I labeled "Research Assistant" in my post.

AGIs as populations

I don’t think such work should depend on being related to any specific success story.

The reason I asked was that you talk about "safer" and "less safe" and I wasn't sure if "safer" here should be interpreted as "more likely to let us eventually achieve some success story", or "less likely to cause immediate catastrophe" (or something like that). Sounds like it's the latter?

Maybe I should just ask directly: what do you tend to mean when you say "safer"?

AGIs as populations

What success story (or stories) did you have in mind when writing this?

The EMH Aten't Dead

See also the bottom of this comment for a more complete record of my significant (non-EMH) investments.

Tips/tricks/notes on optimizing investments

Pay your monthly bills with margin loans

Instead of maintaining a positive balance in a bank checking account that pays virtually no interest (and having to worry about overdrafts), switch your bill payments to a brokerage account that offers low margin rates, and pay your bills "on margin". (Interactive Brokers currently charges 1.55% for loans under $100k, or you can negotiate with your current broker; I got 0.75% starting from the first dollar.) Once in a while, sell some securities, move money back from a high-yield savings account or CD, or get cash from box spread financing, to zero out the margin balance.
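A rough sketch of the arithmetic, with made-up numbers (the checking-account float, yields, margin rate, and average margin balance below are illustrative assumptions, not recommendations):

```python
# Illustrative comparison: idle checking float vs. paying bills on margin.
# All numbers are assumptions for the sake of the example.

avg_float = 10_000        # average balance otherwise kept in checking
checking_yield = 0.0001   # ~0.01% APY, typical for checking accounts
savings_yield = 0.015     # assumed high-yield savings APY
margin_rate = 0.0075      # negotiated margin rate from the example above

avg_margin_balance = 3_000  # assumed average bill float carried on margin

# Option A: keep the float idle in checking.
checking_income = avg_float * checking_yield

# Option B: keep the float in high-yield savings, pay bills on margin,
# and periodically move money over to zero out the margin balance.
margin_income = avg_float * savings_yield - avg_margin_balance * margin_rate

print(f"Checking: ${checking_income:.2f}/yr")  # $1.00/yr
print(f"Margin:   ${margin_income:.2f}/yr")    # $150.00 - $22.50 = $127.50/yr
```

With these assumptions the margin approach nets roughly $125/yr in extra yield, and there is no overdraft risk, since bills simply draw on the margin line.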

Tips/tricks/notes on optimizing investments

I kind of miss the days when I believed in the EMH... Denial of EMH, along with realizing that 100% and 0% are not practical upper and lower bounds for exposure to the market (i.e., there are very cheap ways to short and leverage the market), is making me a lot more anxious (and potentially regretful) about not making the best investment choices. I'd be interested in coping tips/strategies from people who have been in this position longer.

(It seems that in general, having fewer constraints means more room for regret. See https://www.wsj.com/articles/bill-gates-coronavirus-vaccine-covid-19-11589207803 for example.)
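To make the exposure point concrete: under a standard lognormal model, the growth-optimal (Kelly) exposure to equities can come out well above 100%, which is exactly why the 0–100% framing starts to feel like an arbitrary constraint once you stop believing in it. A minimal sketch (the equity premium and volatility below are illustrative assumptions, and Kelly is just one possible sizing rule, not something the original comment endorses):

```python
# Kelly-optimal market exposure under a lognormal (GBM) model:
# f* = (expected return - risk-free rate) / variance, for log utility.
# Purely illustrative inputs below.

premium = 0.05   # assumed equity risk premium over cash
sigma = 0.16     # assumed annualized volatility of the market

kelly_exposure = premium / sigma ** 2
print(f"Kelly-optimal exposure: {kelly_exposure:.0%}")  # ~195%

# Even half Kelly (a common haircut for model/estimation error) is ~98%
# here; with slightly more optimistic inputs it exceeds 100%, i.e., it
# calls for leverage rather than an all-stock portfolio.
```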

Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis

You don’t have to be smarter than them to exploit them, since they’re optimizing a different goal: keep their customers happy, instead of making maximum money for them.

What trades does this suggest?

Zoom Technologies, Inc. vs. the Efficient Markets Hypothesis

such high variance looks much more obviously like ‘gambling’ or ‘taking on an enormous amount of risk’ than ‘it’s fun and easy to seek out alpha and beat the market’

I know someone else who made the opposite mistake from mine and sold their coronavirus puts too early. If you only saw their record, there would be no "high variance"; they just made less money than they could have. It seems to me that the correct lesson from both outcomes is that it's possible to beat the market (without putting in so much effort as to make it not worthwhile to try), but we haven't figured out how to time our exits at, or very close to, the best times.
