Rage of the Dog Star

  • About
  • ask me anything
  • rss
  • archive
  • (warning: philosophical noodling ahead)

    So the other day I had a minor medical procedure that required going under anesthesia. The procedure only took about 10 minutes, so they used a type of anesthesia that works very quickly and wears off very quickly.

    After the procedure was over, I regained consciousness in THE MIDDLE of a conversation with the doctor, holding a printout of my results (turns out I am fine). I had, apparently, been talking with him for the last five minutes about the results, what medications I should continue to take, how nice everyone in the office was, etc. I have NO memory of this whatsoever. My mom, who had driven me there and was sitting by my bed while this happened, told me that while I had been perfectly coherent, I was acting very weird (talking loudly and gesticulating), and she wondered why the doctor was completely unfazed by it.

    What I’ve been wondering is this - was I unconscious during this conversation, and just sort of sleep-talking? Or was I fully conscious and it just instantly faded from memory? How would I know the difference?

    I’d never thought about the relationship between consciousness and memory before, but now I can’t help but think they’re inextricably bound together. Also, it’s a shame people studying philosophy of mind can’t just do experiments with anesthesia all day.

    • 5 hours ago
    • 19 notes
    • #about me
  • xhxhxhx:

    rageofthedogstar:

    xhxhxhx:

    nostalgebraist:

    Commenter Jaskologist at SSC mentioned the “Murray Gell-Mann Amnesia Effect,” which is something I had heard about before but should really keep in mind as a thing with a name.

    Coined by Michael Crichton (…yes), it refers to the fact that a lot of people will notice giant errors in the newspaper when it happens to be talking about something they know a lot about, but will forget about this and trust the news on all other topics.

    I’m not sure how true this is in the case of the news media (though I’m sure there’s something to it), but I do think there’s a lot of value in the general principle of “pay attention to what people say when they’re talking about things you think you know a lot about.”

    This can be a force for trust as well as mistrust.  It’s one of the reasons I’ve generally tended to trust Cosma Shalizi – when I first started reading him, I noticed that he seemed accurate and perceptive whenever he was talking about something I knew about, yet also talked about many things I didn’t know about.  And on the other hand I’ve also encountered people who seemed very smart and authoritative, but kept making strangely egregious mistakes whenever they strayed into something I knew about – in which case the Gell-Mann Amnesia Effect should have immediately made me suspicious.

    Yes, but … I think it’s a mistake to carry that insight too far. 

    Bad methods can taint an enterprise. If someone is not rational, they will put out a lot of garbage. If the evidence suggests to you that the New York Times does not assign reliable and knowledgeable reporters, ignores evidence, or does not reason correctly from the evidence it does have, then you should trust its claims much less. 

    And you can’t really know whether the Times is or is not rational until they apply their methods to a domain you know well. That’s Murray Gell-Mann: If you can detect irrationality in a source, you should trust it less.

    But I don’t think you can reverse that rule. I don’t think “If a source is reliable in a domain you know well, you should trust it in other domains” is obviously true.

    Reliability in one domain doesn’t strongly predict reliability in another. (su3su2u1) That’s the reality of expertise: people won’t know much about things outside their area of expertise. 

    If reliability is some function of intelligence, knowledge, rationality, and effort, then any individual is going to run up against severe resource bottlenecks – you never have enough time and energy to be right about everything – and so you won’t be able to acquire more than a narrow, specialist body of knowledge. 

    So here’s what I’d say: I’d trust the Times less, but I wouldn’t trust Cosma Shalizi more.

    I’d have to check against domain-specific methods, foundations, or expert consensus before I could have any confidence in Shalizi’s reliability.

    I’m not sure whether that makes sense – if slatestarscratchpad is right, it doesn’t – but that’s my heuristic.

    Doesn’t this violate a basic rule of probability? If irrationality in a domain you know well would make you trust a source less, doesn’t rationality in a domain you know well FORCE you to trust it more (since there are now fewer ways it could be wrong)?

    Sure, the sign will be positive, but the effect will be small. Too small to consider (heuristically).

    So there’s A, which is our claim, and B, which is a method. If we know our source has not applied B, then it is no evidence for A.

    But B does not exhaust the set of methods, and ~B does not exhaust the possible errors. There’s C, D, E, … different component methods, and so countless different errors ~C, ~D, ~E.

    (I cannot assess all the ways something might be wrong; and I cannot confidently assess the reliability of claims outside a known expertise domain, because there are errors which I might not be aware of.)

    uhhh does this check out (it probably doesn’t)

    explain probability to me nonternary and rageofthedogstar I am real dumb

    (I don’t really know much probability, but here’s my stab anyway)

    If B reliably produces errors, and we know they didn’t use B (because they avoided a certain type of error), there are now the same number of ways they could have been right, but fewer ways they could have been wrong. So if there are methods B, C, D, E, and F, and only F produces the correct answer, I still don’t know if they used F or not. But F / (C + D + E + F) is higher than F / (B + C + D + E + F).
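
    (A minimal Python sketch of that arithmetic, in case it helps; the uniform prior over five methods is a toy assumption of mine, not something from the thread:)

    ```python
    # Toy sketch (my assumption, not the thread's): five equally likely
    # methods B..F, of which only F gives the right answer, and avoiding a
    # B-style error rules out method B.

    methods = ["B", "C", "D", "E", "F"]
    prior = {m: 1.0 / len(methods) for m in methods}  # uniform prior over methods

    print(prior["F"])  # P(they used the correct method F) before evidence: 0.2

    # Evidence: the source avoided the error characteristic of B, so we
    # condition on "not B" and renormalize over the surviving methods.
    posterior = {m: p for m, p in prior.items() if m != "B"}
    total = sum(posterior.values())
    posterior = {m: p / total for m, p in posterior.items()}

    print(posterior["F"])  # after evidence: 0.25, i.e. F/(C+D+E+F) > F/(B+C+D+E+F)
    ```

    (So the update here is from 0.2 to 0.25.)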

    As to whether this is too small to consider, I’m actually less sure? In general it seems pretty hard for a non-expert to be able to avoid expert-level errors, and so if someone does that seems like good evidence that their methods are generally useful. If, say, the New York Times is able to write an accurate article about bridge design (something I know about), that’s a reason to trust their articles about economics, even though I wouldn’t trust a random engineer’s articles about economics just because he also knew about bridge design. Non-expert methods of producing accurate claims seem like they probably generalize pretty well?

    (I should probably go back and read that whole discussion about the correct contrarian cluster now I guess, ugh).

    Source: nostalgebraist
    • 3 days ago
    • 28 notes
  • So here’s another equilibrium that online dating seems to fall into.

    Keep reading

    • 4 days ago
    • 1 note
    • #game theory blah blah blah
    • #rage of the datestar
  • It’s sort of a catch-22 to share an opinion like “I am bored by X”, since that opens you up to the exact boring conversation that you don’t want to have.

    So my first and last word is that there are some topics that I’m so viscerally uninterested in that I’d prefer looking at a blank screen to reading about them, and these endless Yudkowsky discussions fall into that category.

    • 4 days ago
  • A bit more on Better Angels.

    So based on this analysis from Holden of GiveWell, and some of the critical reviews I read earlier (including this very Taleby one by Taleb that I’m not quite sure I follow), it looks like the modern rate of death by war is as high as or higher than it’s ever been (fewer wars, each killing a lot more people), and that this offsets a lot of the decrease in other sorts of violence.

    This makes a certain amount of sense given my intuitions about the scale of destruction of modern war, the relatively short timeframe of the Long Peace (if you’re looking at the entire arc of human history, it seems off to call a 50-year chunk a “trend”), and Pinker’s sort of unconvincing explanations of it (large wars are rooted in ideologies such as communism or fascism which have gone out of style, whatever that means).

    Seems like this bumps up the importance of trying really hard to avoid war if at all possible.

    • 6 days ago
    • 1 note
    • #better angels
  • yudkowsky on free will

    ogingat:

    I want to start by noting that it has been hard to figure out exactly what view on free will the Sequences put forth, despite the fact that it is “one of the easiest” philosophical dilemmas, and is “fully and completely dissolved on LessWrong”; in fact, it is “about as easy as a philosophical problem in reductionism can get”, and is “a good exercise for aspiring reductionists”. I first looked in “Dissolving the Question”, but that was just a bunch of (ill-informed, in my view) blather about what philosophers tend to do. Then I looked in “Wrong Questions”, but there Yudkowsky seems to have assumed away the problem. I say this not just to express my frustration at navigating the site and my disapproval of offhand comments Yudkowsky makes but to demonstrate that I am reading things by Yudkowsky, which apparently was in doubt in some circles and puddles.

    Let me start, then, by explaining the traditional class of problems of free will the way I learned them, and you can judge for yourself what sort of job the Sequences do, and whether they are right to make the sorts of sweeping statements in which they persistently indulge.

    Keep reading

    Ok, so, in the example you give about whether it makes sense to feel guilty, can you explain philosophers’ penchant for trying to answer hard questions by appealing to other intuitions that don’t seem like they would hold up on examination? I don’t think my intuitions about what it means to feel guilty would hold up if I poked at them very hard, and it doesn’t seem especially revealing to use them in trying to figure out free will.

    The open question argument is another argument like this - the fact that a question always feels like it’s meaningful just seems like it means I have bad intuitions about what “meaningful” means. So instead of giving me any sort of clarity, you’ve just given me another thing I don’t really understand.

    Which would be fine if that were a starting point for exploring these bundles of intuitions and how they work, but instead it seems more like something’s being an open question, or the fact that I feel guilty, is treated as a Fact that any counterargument must deal with.

    (I’m not particularly well-read in philosophy, so apologies if I’m misinterpreting something)

    Source: ogingat
    • 1 week ago
    • 53 notes
  • The arguments are about arguments now. The discussion has meta-stasized.

    • 1 week ago
    • 2 notes
    • #shitposting
  • (Wonky game-theoretical look at online dating below the cut)

    Keep reading

    • 1 week ago
    • 5 notes
    • #game theory blah blah blah
    • #rage of the datestar
  • I’ve been using some dating apps lately, and have been wondering about the matching-market aspects of them.

    (note: I’m writing about online dating as it applies to monogamous heterosexual people, since that’s my experience, but I’m super interested in hearing how it works for gay or polyamorous people if you happen to know)

    Traditional online dating sites work by letting you see everyone’s profile and send a message to anyone you’re interested in. This often seems to fall into a particular state where men send messages to every woman they find acceptable, and women sort through all the messages and reply to the ones they find acceptable. This is a bad outcome for a lot of reasons:

    • It forces men to send a LOT of messages - you sort of have to assume that any given message has maybe a 1/20 chance of getting a response (maybe other men have better luck, but that’s roughly been my rate)

    • It involves a lot of unnecessary emotional turmoil as men hope for responses from women who might not be interested in them at all.

    • Because sending so many messages is required, the quality often suffers, and women complain about receiving a million messages of the form “hey gurl”.

    • More generally, women are forced to do A LOT of filtering, since they’re getting so many messages (women’s profiles often have a lot of rules of the form “if you confuse ‘your’ and ‘you’re’, don’t message me”, “if you’re drinking in every picture, don’t message me”, etc.)

    • Because men send messages to EVERY woman they’re interested in, a woman will often respond to a man who then doesn’t reply back, presumably because he’s already connected with a woman he likes better. This leads to a lot of frustration on the part of women.

    Newer dating apps, though, seem to work differently: men and women are given a series of profiles of the opposite sex that they can “like” or “pass” on, and ONLY when two people both like each other can messages be sent. It seems like the traditional dating sites are moving to add this sort of functionality as well. My experience so far is that this way works WAY better.

    My intuition is that the first situation persists because it’s an equilibrium even though it results in a lot of extra effort, and that the newer mechanism PROBABLY results in better pairings, but I’m not sure. Reading a little bit about matching markets to see if I can understand it any better.
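
    (Here’s a toy simulation of the two mechanisms, purely to make that intuition concrete; every number in it, like the acceptance rates, is made up, and it isn’t a model of any actual site:)

    ```python
    # Toy comparison under made-up numbers: each man finds a given woman
    # acceptable with probability LIKE_M (and messages her, on the traditional
    # site); each woman finds a given man acceptable with probability LIKE_W.
    # Both rates are hypothetical.
    import random

    random.seed(0)
    N_PAIRS = 100 * 100          # 100 men x 100 women
    LIKE_M, LIKE_W = 0.5, 0.1

    # Mechanism 1: open messaging. Men message everyone they find acceptable;
    # women then have to filter the whole pile.
    messages, replies = 0, 0
    for _ in range(N_PAIRS):
        if random.random() < LIKE_M:        # he messages her
            messages += 1
            if random.random() < LIKE_W:    # she's interested and replies
                replies += 1

    # Mechanism 2: mutual likes. Both swipe independently, and a conversation
    # only opens on a double like, so no message ever goes unanswered.
    matches = sum(
        1 for _ in range(N_PAIRS)
        if random.random() < LIKE_M and random.random() < LIKE_W
    )

    print(f"open messaging: {messages} messages for {replies} conversations")
    print(f"mutual likes:   {matches} conversations, no wasted messages")
    ```

    (Under these toy numbers both mechanisms open roughly the same number of conversations, about 500, so the gain is mostly the ~4,500 unanswered messages and all the filtering that never has to happen; whether the pairings themselves end up better is the part I’m still not sure about.)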

    • 1 week ago
    • 2 notes
© 2015 Rage of the Dog Star