Episodes
  • LW - Enriched tab is now the default LW Frontpage experience for logged-in users by Ruby
    Jun 23 2024
    Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Enriched tab is now the default LW Frontpage experience for logged-in users, published by Ruby on June 23, 2024 on LessWrong.
    In the past few months, the LessWrong team has been making use of the latest AI tools (given that they unfortunately exist[1]) for art, music, and deciding what we should all be reading. Our experiments with the latter, i.e. the algorithm that chooses which posts to show on the frontpage, have produced results sufficiently good that, at least for now, we're making Enriched the default for logged-in users[2]. If you're logged in and you've never switched tabs before, you'll now be on the Enriched tab. (If you don't have an account, making one takes 10 seconds.)
    To recap, here are the currently available tabs (subject to change):
    • Latest: 100% posts from the Latest algorithm (using karma and post age to sort[3])
    • Enriched (new default): 50% posts from the Latest algorithm, 50% posts from the recommendations engine
    • Recommended: 100% posts from the recommendations engine, choosing posts specifically for you based on your history
    • Subscribed: a feed of posts and comments from users you have explicitly followed
    • Bookmarks: this tab appears if you have bookmarked any posts
    Note that posts which come from the recommendation engine have a sparkle icon after the title (on desktop, space permitting). Posts from the last 48 hours have their age bolded.
    Why make Enriched the default? To quote from my earlier post about frontpage recommendation experiments: A core value of LessWrong is to be timeless and not news-driven. However, the central algorithm by which attention allocation happens on the site is the Hacker News algorithm[2], which basically only shows you things that were posted recently, and creates a strong incentive for discussion to always center on the latest content.
    This seems very sad to me. When a new user shows up on LessWrong, it seems extremely unlikely that the most important posts for them to read were all written within the last week or two.
    I do really like the simplicity and predictability of the Hacker News algorithm. More karma means more visibility; older means less visibility. Very simple. When I vote, I basically know the full effect this has on what is shown to other users or to myself. But I think the cost of that simplicity has become too high, especially as older content makes up a larger and larger fraction of the best content on the site, and people have become ever more specialized in the research and articles they publish there.
    We found that a hybrid posts list of 50% Latest and 50% Recommended lets us get the benefits of each algorithm[4]. The Latest component allows people to stay up to date with the most recent content, provides predictable visibility for new posts, and is approximately universal in that everyone sees those posts, which makes them a bit more common-knowledge-y. The Recommended component allows us to present the content predicted to be most interesting and valuable to a user from across thousands of posts spanning the last 10+ years, rather than being limited to just recent material.
    Shifting the age of posts. When we first implemented recommendations, they were very recency-biased. My guess is that's because the data we were feeding the algorithm was of people reading and voting on recent posts, so it learned those were the ones we liked. In a manner less elegant than I would have preferred, we constrained the algorithm to mostly serve content 30 or 365 days old. You can see the evolution of the recommendation engine, on the age dimension, here. I give more detailed thoughts about what we found in the course of developing our recommendation algorithm in this comment below.
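    The two-part design described above can be sketched in a few lines. This is an illustrative sketch only, not LessWrong's actual code: the decay exponent in `latest_score` is an assumption (1.8 is the commonly cited Hacker News default; the post does not give LessWrong's value), and `enriched_feed` simply alternates between the two ranked lists as the 50/50 split suggests.

    ```python
    def latest_score(karma, age_hours, gravity=1.8):
        # Hacker News-style ranking: more karma raises a post, age lowers it.
        # The gravity exponent is an assumed value, not LessWrong's actual one.
        return karma / (age_hours + 2) ** gravity

    def enriched_feed(latest_posts, recommended_posts, n=20):
        # 50/50 interleave of the Latest list and the recommender's list,
        # skipping duplicates, as the post describes for the Enriched tab.
        feed, seen = [], set()
        sources = [iter(latest_posts), iter(recommended_posts)]
        i = 0
        while len(feed) < n:
            try:
                post = next(sources[i % 2])
            except StopIteration:
                break  # simplistic: stop when either source runs dry
            if post not in seen:
                seen.add(post)
                feed.append(post)
            i += 1
        return feed
    ```

    For example, `enriched_feed(["a", "b", "c"], ["x", "b", "y"])` alternates the two lists and drops the duplicate "b" from the recommended side.
    
    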
    Feedback, please. Although we're making Enriched the general default, this feature direction is still expe...
    5 mins
  • LW - Bed Time Quests & Dinner Games for 3-5 year olds by Gunnar Zarncke
    Jun 23 2024
    This is: Bed Time Quests & Dinner Games for 3-5 year olds, published by Gunnar Zarncke on June 23, 2024 on LessWrong.
    I like these games because they are playful, engage the child, and still achieve the objective of getting the child to bed, eat dinner, etc. They require creativity and some slack.
    Excerpt from Shohannah's post: Recently I had the bright idea to give up on being a regular parent. Mostly cause regular parenting practices melt my brain. But then I wondered … does it have to be boring? [...] But no. It's all culture, and it's all recent culture, and you can decide to do Something Else Instead. Really. So as someone who craves mental stimulation above the pay grade of the 3 to 5 revolutions around the sun my daughters have managed so far … I figured I'd just make up New Rules. All the time.
    So far we've been going for two weeks, and the main areas are bedtime routines for my eldest (5) and dinner games for all of us (5, 3, and myself). I noticed I seem to have an easy time generating new and odd rule sets every day, and then started wondering if maybe more parents would enjoy this type of variety in their childcare routines and would want to tap into some of the ideas I've been coming up with. So in case that's you, here is what I've found so far! [...]
    Magic Time: Kiddo is the parent. You are the kiddo. Except, the kiddo is still bringing themselves to bed, not you. They get to tell you what to do and take care of you. You will have to listen. I completely recommend performing a lot of obstructive behavior and misunderstanding basic instructions. This was one of the most popular games and may offer some insight into how your child would prefer to be parented, or how they feel about your parenting.
    ...and fourteen more games/rulesets. Thanks for listening.
To help us out with The Nonlinear Library or to learn more, please visit nonlinear.org
    2 mins
  • EA - Impartialist Sentientism and Existential Anxiety about Moral Circle Explosion by Rafael Ruiz
    Jun 22 2024
    This is: Impartialist Sentientism and Existential Anxiety about Moral Circle Explosion, published by Rafael Ruiz on June 22, 2024 on The Effective Altruism Forum.
    Background: I'm currently writing a PhD in moral philosophy on the topic of moral progress and moral circle expansion (you can read a bit about it here and here). I was recently at three EA-related conferences: EA Global London 2024, a workshop on AI, Animals, and Digital Minds at the London School of Economics, and the Oxford Workshop on Global Priorities Research. Particularly this year, I learned a lot and was mostly shaken up by the topics of invertebrate welfare and AI sentience, which are the focus of this post. I warn you, the ethics are going to get weird.
    Introduction. This post is a combination of academic philosophy and a more personal reflection on the anxious, dizzying existential crisis I've been facing recently, due to the weight of suffering that I believe might be contained in the universe, which I believe is overwhelmingly morally important, and about which I feel mostly powerless. It is a mix of philosophical ideas and a personal, subjective experience of trying to digest very radical changes to my view of the world.
    The post proceeds as follows. First, I explain my moral view: classical utilitarianism, which you could also call impartialist sentientism. Then I broaden and weaken the view, and explain that even if you don't subscribe to it, you should still worry about these things. The next section very briefly explains the empirical case for invertebrate and AI sentience. Although I'm riddled with uncertainty, I think it is likely that some invertebrate species are sentient, and that future AIs could be sentient.
    This combines in weird ways with moral impartiality and longtermism, since I don't think it matters where or when such suffering happens; it is still morally weighty. This worry then explodes into moral concern about very weird topics: astronomical numbers of sentient alien invertebrates across the universe, astronomical numbers of sentient AIs developed either by us in the future or by aliens, and other very weird topics that I find hard to grasp. I talk about these topics in a more personal way, particularly ideas of alienation and anxiety. Then I worry about potential destinations of this "train to crazy town," such as: blowing up the universe, extreme moral offsetting by spreading certain invertebrate species across the universe, tiling the universe with hedonium, infinite ethics, and worries about Pascal's Mugging. Then I take a step back, compose myself, and get back to work: how should these ideas change how we ought to act, in terms of charitable donations or lines of research and work? What are some actionable points? Finally, I recommend some readings if you're interested in these topics, since there's a lot of research coming up in this area. I conclude that the future of normative ethics will be very weird.
    Impartialist Sentientism. Currently, the moral theory I place the highest credence on is total hedonist utilitarianism (what you could call classical, Singerite, or Benthamite utilitarianism), but I believe a lot of these arguments go through even if you aren't a utilitarian, because the amount of value posited by impartialist hedonist utilitarianism tends to trump other considerations, among other things. Just in case you're not up to speed with moral philosophy, let me give a broad background overview. If you're well acquainted, feel free to skip to the next section.
This theory breaks down into the following elements: By "pain" or "suffering" I mean "any negatively valenced experience", that is, any experience judged to be negative by the subject undergoing it. By "pleasure" or "happiness" I mean "any positively valenced ex...
    42 mins
