Concepts Portal

This page displays the concepts that are the focus of LessWrong discussion.

The page has three sections:

  • Tag Portal - manually curated, structured tags
  • Tag Details - tags with descriptions and top posts
  • Tags List - an alphabetical list of all existing tags

 

The Library | Tag Activity Page | Tagging FAQ | Open Call for Taggers


RATIONALITY

Theory / Concepts

Anticipated Experiences
Bayes' Theorem
Bounded Rationality
Conservation of Expected Evidence
Contrarianism
Decision Theory
Epistemology
Game Theory
Gears-Level
Hansonian Pre-Rationality
Law-Thinking
Newcomb's Problem
Occam's Razor
Robust Agents
Solomonoff Induction
Truth, Semantics, & Meaning
Utility Functions
 

Applied Topics

Alief
Betting
Cached Thoughts
Calibration
Dark Arts
Empiricism
Epistemic Modesty
Forecasting & Prediction
Group Rationality
Identity
Inside/Outside View
Introspection
Intuition
Practice & Philosophy of Science
Scholarship & Learning
Value of Information
 

Failure Modes

Affect Heuristic
Bucket Errors
Compartmentalization
Confirmation Bias
Fallacies
Goodhart’s Law
Groupthink
Heuristics and Biases
Mind Projection Fallacy
Motivated Reasoning
Pica
Pitfalls of Rationality
Rationalization 
Self-Deception
Sunk-Cost Fallacy

Communication

Common Knowledge
Conversation
Decoupling vs Contextualizing
Disagreement
Double-Crux
Good Explanations (Advice)
Ideological Turing Tests
Inferential Distance
Information Cascades
Memetic Immune System
Philosophy of Language
Steelmanning

Techniques

Double-Crux
Focusing
Goal Factoring
Internal Double Crux
Hamming Questions
Noticing
Techniques
Trigger Action Planning/Patterns

Models of the Mind

Consciousness
Dual Process Theory (System 1 & 2)
General Intelligence
Subagents
Predictive Processing
Perceptual Control Theory
 

Other

Center for Applied Rationality
Curiosity
Rationality Quotes
Updated Beliefs (examples of)

 

ARTIFICIAL INTELLIGENCE

Basic Alignment Theory

AIXI
Complexity of Value
Corrigibility
Decision Theory
Embedded Agency
Fixed Point Theorems
Goodhart's Law
Inner Alignment
Instrumental Convergence
Logical Induction
Logical Uncertainty
Mesa-Optimization
Myopia
Newcomb's Problem
Optimization
Orthogonality Thesis
Outer Alignment
Solomonoff Induction
Utility Functions

Engineering Alignment

AI Boxing (Containment)
Debate
Factored Cognition
Humans Consulting HCH
Impact Measures
Inverse Reinforcement Learning
Iterated Amplification
Transparency / Interpretability
Value Learning
 

Organizations

CHAI (UC Berkeley)
FHI (Oxford)
MIRI
OpenAI
Ought

Strategy

AI Risk
AI Services (CAIS)
AI Takeoff
AI Timelines

Other

Alpha-
GPT
Research Agendas 

 

WORLD MODELING

Mathematical Sciences

Anthropics
Category Theory
Causality
Game Theory
Decision Theory
Logic & Mathematics
Probability & Statistics

Specifics
Prisoner's Dilemma
 

General Science & Eng

Machine Learning
Nanotechnology
Physics
Programming
Space Exploration & Colonization

Specifics
The Great Filter

Meta / Misc

Academic Papers
Book Reviews
Fact Posts
Research Agendas
Scholarship & Learning

Social & Economic

Economics
Financial Investing
History
Politics
Progress Studies
Social and Cultural Dynamics

Specifics
Conflict vs Mistake Theory
Cost Disease
Efficient Market Hypothesis
Industrial Revolution
Moral Mazes
Signaling
Social Reality
Social Status

Biological & Psychological

Aging
Biology
Consciousness
Evolution
Evolutionary Psychology
Medicine
Neuroscience
Qualia

Specifics
Coronavirus
General Intelligence
IQ / g-factor

The Practice of Modeling

Epistemic Review
Expertise
Gears-Level Models
Falsifiability
Forecasting & Prediction
Forecasts (Lists of)
Inside/Outside View
Jargon (meta)
Practice and Philosophy of Science
Prediction Markets
Replicability
 

 

WORLD OPTIMIZATION

Moral Theory

Altruism
Consequentialism
Deontology
Ethics & Morality
Metaethics
Moral Uncertainty

 

 

Causes / Interventions

Aging
Animal Welfare
Existential Risk
Mind Uploading
Life Extension
S-risks
Transhumanism
Voting Theory

Working with Humans

Coalitional Instincts
Common Knowledge
Coordination / Cooperation
Game Theory
Group Rationality
Institution Design
Moloch
Signaling
Social Status
Simulacrum Levels

Applied Topics

Blackmail
Chesterton's Fence
Deception
Honesty
Hypocrisy
Information Hazards
Meta-Honesty
Pascal's Mugging

Value

Ambition
Art
Aesthetics
Complexity of Value
Suffering
Superstimuli
Wireheading

Meta

Cause Prioritization
Center for Long-term Risk
Effective Altruism
Heroic Responsibility
 

 

PRACTICAL

Skills & Techniques

Circling
Communication Cultures
Conversation (topic)
Cryonics
Goal Factoring
Exercise (Physical)
Financial Investing
Hamming Questions
Life Improvements
Meditation
More Dakka
Parenting
Planning & Decision-Making
Relationships (Interpersonal)
Self Experimentation
Skill Building
Spaced Repetition
Virtues (Instrumental)

Well-being

Careers
Emotions
Gratitude
Happiness
Slack
Sleep
Well-being

Productivity

Akrasia
Motivations
Prioritization
Procrastination
Productivity
Willpower

Other
Software Tools

 

COMMUNITY

All

Bounties (active)
Grants & Fundraising
Growth Stories
Online Socialization
Petrov Day
Public Discourse
Research Agendas
Ritual
Solstice Celebration
 

LessWrong

Site Meta
GreaterWrong Meta
LessWrong Events
LW Moderation
Meetups (topic)
Moderation (topic)
The SF Bay Area
Tagging

 

OTHER

Content-Type

Art
Checklists
Eldritch Analogies
Exercises / Problem-Sets
Humor
Fiction
Open Problems
Paradoxes
Poetry
Postmortems & Retrospectives
Summaries

Format

Book Reviews
Interviews
List of Links
Newsletters
Open Thread
Q&A (format)
Surveys
Transcripts

Cross-Category

Cooking
Education
Narratives (stories)
Religion
Writing

 

Miscellaneous

Fiction (topic)
Gaming (videogames/tabletop)

Tag Details


Rationality is the art of thinking in ways that result in accurate beliefs and good decisions. It is the primary topic of LessWrong.

Rationality is not only about avoiding the vices of self-deception and obfuscation, but also about the virtue of curiosity, seeing the world more clearly than before, and achieving things previously out of your reach. The study of rationality on LessWrong includes a theoretical understanding of ideal cognitive algorithms, as well as building a practice that uses these idealized algorithms to inform heuristics, habits, and techniques for successfully reasoning and making decisions in the real world.


Artificial Intelligence is the study of creating intelligence in algorithms. On LessWrong, the primary focus of AI discussion is to ensure that as humanity builds increasingly powerful AI systems, the outcome will be good. The central concern is that a powerful enough AI, if not designed and implemented with sufficient understanding, would optimize something unintended by its creators and pose an existential threat to the future of humanity. This is known as the AI alignment problem.


World Modeling is getting curious about how the world works. It’s diving into Wikipedia, it’s running a survey to get data from your friends, it’s dropping balls from different heights and measuring how long they take to fall. Empiricism, scholarship, googling, introspection, data-gathering, science. Applying your epistemology and curiosity, finding out how the damn thing works, and writing it down for the rest of us.


World Optimization is the full use of our agency. It is extending the reach of human civilization. It is building cities and democracies and economic systems and computers and flight and science and space rockets and the internet. World optimization is about adding to that list. 

But it’s not just about growth; it’s also about preservation. We are still in the dawn of civilization, with most of its span lying in the billions of years ahead. We mustn’t let this light go out.


Practical posts give direct, actionable advice on how to achieve goals and generally succeed. The art of rationality would be useless if it did not connect to the real world; we must take our ideas and abstractions and collide them with reality. Many places on the internet will give you advice; here, we value survey data, literature reviews, self-blinded trials, quantitative estimates, and theoretical models that aim to explain the phenomena.


The 2019 Novel Coronavirus (the virus SARS-CoV-2, which causes the disease COVID-19) is a pandemic sweeping the world.


The LessWrong Community is the people who write on LessWrong and who contribute to its mission of refining the art of human rationality. This tag includes community events, analysis of the health, norms and direction of the community, and space to understand communities in general.

LessWrong also has many sibling communities, such as the Berkeley rationality community, SlateStarCodex, Effective Altruism, and AI Alignment, whose members participate here. To see upcoming LessWrong events, go to the community section.


Site Meta is the category for discussion about the LessWrong website. It covers team announcements such as feature updates, events, moderation activity and policy, downtime, and requests for feedback, as well as site documentation and the team’s writings on site philosophy and strategic thinking.


Open Threads are informal discussion areas where users are welcome to post comments that don’t quite feel big enough to warrant a top-level post and don’t fit in any other post.


Fiction isn't literal truth, but when done well it captures truths and intuitions that are difficult to explain directly. (It’s also damn fun to read.)


Book Reviews on LessWrong are different from normal book reviews; they summarize and respond to a book's core ideas first, and judge whether you should read it second. A good book review sometimes distills the book's ideas so well that you no longer need to read the book.


Decision theory is the study of principles and algorithms for making correct decisions—that is, decisions that allow an agent to achieve better outcomes with respect to its goals.


Newsletters are collected summaries of recent events, posts, and academic papers.


AI Risk is analysis of the risks associated with building powerful AI systems.


Scholarship & Learning collects posts on how to study, research, and learn.


Anthropics is the study of how the fact that we can make observations of a given kind at all gives us evidence about the world we live in, independently of the observations' content. For example, living beings can only make observations in a universe whose physical laws support life.


The Machine Intelligence Research Institute (MIRI) is a research nonprofit working on the AI alignment problem. It was formerly known as the Singularity Institute for Artificial Intelligence (not to be confused with Singularity University).


All Tags (353)

AI (666)
AIXI (24)
Aging (22)
Alief (12)
Art (25)
GPT (53)
Humor (54)
Slack (19)
Sleep (12)
War (13)