User Profile


rohinmshah's Posts

sorted by new

Alignment Newsletter #45

25 points · 4d · Ω 9 · 2 comments

[Link] Learning preferences by looking at the world

45 points · 5d · 6 min read · Ω 14 · 10 comments

Alignment Newsletter #44

20 points · 11d · Ω 8 · 0 comments

Conclusion to the sequence on value learning

44 points · 14d · Ω 13 · 13 comments

Alignment Newsletter #43

15 points · 19d · Ω 6 · 0 comments

Future directions for narrow value learning

12 points · 23d · Ω 7 · 4 comments

The human side of interaction

16 points · 24d · Ω 6 · 2 comments

Alignment Newsletter #42

21 points · 1mo · Ω 10 · 1 comment

Following human norms

23 points · 1mo · Ω 9 · 8 comments

Reward uncertainty

18 points · 1mo · Ω 6 · 0 comments

rohinmshah's Comments