Quoting Lorin Hochstein

It’s useful to compare Lewis’s book with two other recent ones about Silicon Valley executives: John Carreyrou’s Bad Blood and Sarah Wynn-Williams’s Careless People. Both books focus on the immorality of Silicon Valley executives (Elizabeth Holmes of Theranos in the first book, Mark Zuckerberg, Sheryl Sandberg, and Joel Kaplan of Facebook in the second). These are tales of ambition, hubris, and utter indifference to the human suffering left in their wake. Now, you could tell a similar story about Bankman-Fried. In fact, this is what Zeke Faux did in his book Number Go Up, but that’s not the story that Lewis told. Instead, Lewis told a very different kind of story. His book is more of a character study of a person with an extremely idiosyncratic view of risk. The story Lewis told about Bankman-Fried wasn’t the story that people wanted to hear. They wanted another Bad Blood, and that’s not the book he ended up writing. As a consequence, he told the wrong story.

Telling the wrong story is a particular risk when it comes to explaining a large-scale public incident. We’re inclined to believe that a big incident can only happen because of a big screw-up: that somebody must have done something wrong for that incident to happen. If, on the other hand, you tell a story about how the incident happened despite nobody doing anything wrong, then you are in essence telling an unbelievable story. And, by definition, people don’t believe unbelievable stories.

From Telling the wrong story on Lorin’s excellent blog, Surfing Complexity.

Quoting Nicholas Carlini

Because when the people training these models justify why they’re worth it, they appeal to pretty extreme outcomes. When Dario Amodei wrote his essay Machines of Loving Grace, he wrote that he sees the benefits as being extraordinary: “Reliable prevention and treatment of nearly all natural infectious disease … Elimination of most cancer … Prevention of Alzheimer’s … Improved treatment of most other ailments … Doubling of the human lifespan.” These are the benefits that the CEO of Anthropic uses to justify his belief that LLMs are worth it. If you think that these risks sound fanciful, then I might encourage you to consider what benefits you see LLMs as bringing, and then consider if you think the risks are worth it.

From Carlini’s recent talk/article on Are large language models worth it?

The entire article is well worth reading, but I was struck by this bit near the end. LLM researchers often dismiss (some of) the risks of these models as fanciful. But many of the benefits touted by the labs sound just as fanciful!

When we’re evaluating the worth of this research, it’s a good idea to be consistent about how realistic — or how “galaxy brain” — we want to be, with both risks and benefits.

why I continue to blog with WordPress

In theory, I’m generally a fan of static site generators like Hugo or Jekyll. I like the idea of a blog that is just a set of HTML pages, especially since I don’t like to enable blog comments. (Mostly for time management reasons.)

However… in my current life situation, I often find myself most able to write or edit blog entries from my phone. Usually on the couch, at the dog park, or otherwise away from a real keyboard. And I just can’t bring myself to manage a static site generator on a mobile device most of the time.

At the same time, I strongly prefer to have my blog hosted on a service instance I own and on a system I control, or at least one that I can control. I can deal with shared hosting, e.g. DreamHost-style, but I refuse to host my blog on a box where I can’t get a shell.

WordPress, for all its faults, has a usable mobile app that can connect to an instance hosted on my provider of choice. So it’s still the choice for the moment.

tailscale

Some discussion on bsky of the usefulness of Tailscale, and I’ll just note here how very handy it is for running a personal homelab that includes cloud instances. As well as just having lab connectivity from a laptop or phone on the go!

Services I run over Tailscale, just for myself, include:

  • An RSS feed reader
  • A personal git forge
  • An IRC bouncer
  • A (poorly maintained) wiki
  • JupyterLab
  • Open WebUI for playing with local LLMs on a GPU workstation
  • SSH to a powerful workstation, hosted at home but without complex configs

And probably a few things I’ve forgotten! It’s really just very neat. Sure, I could do it all with manual WireGuard configs, but Tailscale just makes the underlying primitive much more ergonomic.
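For a sense of what the manual alternative involves, here’s a rough sketch of one side of a single hand-written WireGuard peering (the keys, IPs, and endpoint below are all placeholders, not my actual setup). You’d need a config like this on every machine, for every peer it should reach, plus key generation and exchange — which is exactly the bookkeeping Tailscale automates.

```ini
# /etc/wireguard/wg0.conf (hypothetical example; all values are placeholders)
[Interface]
# This machine's private key, generated with `wg genkey`
PrivateKey = <this-machine-private-key>
# This machine's address inside the tunnel network
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
# The other machine's public key, derived with `wg pubkey`
PublicKey = <peer-public-key>
# Only route the peer's tunnel IP through this tunnel
AllowedIPs = 10.0.0.2/32
# At least one side needs a reachable endpoint (public IP or port forward)
Endpoint = peer.example.com:51820
# Keeps NAT mappings alive for peers behind NAT
PersistentKeepalive = 25
```

Bring it up with `wg-quick up wg0`, and repeat for each pair of machines. Multiply that by a phone, a laptop, and a handful of cloud instances, and the appeal of having a coordination layer handle it becomes obvious.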

baby monitor

As all paranoid parents are wont to do, my partner and I have set up a video baby monitor… to watch the puppy.

At 3 months old, we mostly use it to answer the questions “is she still asleep? can we still rest? dear God please let her still be asleep”. Which, to be fair, may not be too different for human children.

To her credit, Maddie has made pretty good progress at the whole sleeping thing. She typically only wakes us up once overnight, around 1 or 2 am depending, and is napping at a schedule of approx 2 hours awake to 1 hour asleep during the day. Which seems to be appropriate for her developmental level but is still nice to see.

Lowering the barrier to posting

Inspired somewhat by Simon Willison’s blog, I’m intentionally trying to lower my personal activation energy for posting to this blog. Hence the more frequent link and quote posts over the past few weeks!

This is both to make sure more of my thoughts are captured on a site I control, rather than just ending up on one social media platform or another, and to encourage myself to post more frequently and share my thoughts.

I’m also trying to remind myself to link to these entries on BlueSky, the Fediverse, and even LinkedIn. But also, if you want to follow my posts, the most reliable method is the RSS feed as the Internet intended!

Quoting antirez on AI

Anyway, back to programming. I have a single suggestion for you, my friend. Whatever you believe about what the Right Thing should be, you can't control it by refusing what is happening right now. Skipping AI is not going to help you or your career. Think about it. Test these new tools, with care, with weeks of work, not in a five minutes test where you can just reinforce your own beliefs. Find a way to multiply yourself, and if it does not work for you, try again every few months.

Yes, maybe you think that you worked so hard to learn coding, and now machines are doing it for you. But what was the fire inside you, when you coded till night to see your project working? It was building. And now you can build more and better, if you find your way to use AI effectively. The fun is still there, untouched.

From Don’t fall into the anti-AI hype

Benny enjoying his walk

The puppy has gotten featured a little more frequently lately, but it’s still important to highlight our big dude here.