Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: NYT: A Conversation With Bing’s Chatbot Left Me Deeply Unsettled, published by trevor on February 16, 2023 on LessWrong.
It's important to note that the NYT was confirmed to be the largest news website in the US, beating out CNN with 488 million probably-unique visitors in December 2022. This understates the NYT's market share, since the average NYT reader is smarter and wealthier than the average CNN reader (although the NYT depicts its content as more intellectual than it actually is).
This article alone isn't valuable enough to be worth the time of the average LW user, but it is notable that it was published in the NYT this morning near the front page (as high up as possible for anyone who skips the political news). Since 2pm, it seems to have been moved to the fifth slot, and three of the four articles above it are politics-related. The NYT's website seems to be shuffling much more intensely than usual today. Of course, it's social media, not the NYT's website, that decides which articles get read the most.
Anyone interested in analyzing the media coverage of Bing should probably know these facts, and also that this is some of the best information we can get about media coverage, since social media companies are notorious for dispensing falsified data and it's very difficult for outsiders to verify estimates of how prevalent bot accounts are.
The article was pasted exactly as I found it at 2:40 pm. I bolded the parts that seem most likely to influence readers, to help anyone skimming this. The high-impact sentences need to be read in context to get a feel for the impression they give the reader.
A snapshot of the page was archived here in case NYT alters the article.
Last week, after testing the new, A.I.-powered Bing search engine from Microsoft, I wrote that, much to my shock, it had replaced Google as my favorite search engine.
But a week later, I’ve changed my mind. I’m still fascinated and impressed by the new Bing, and the artificial intelligence technology (created by OpenAI, the maker of ChatGPT) that powers it. But I’m also deeply unsettled, even frightened, by this A.I.’s emergent abilities.
It’s now clear to me that in its current form, the A.I. that has been built into Bing — which I’m now calling Sydney, for reasons I’ll explain shortly — is not ready for human contact. Or maybe we humans are not ready for it.
This realization came to me on Tuesday night, when I spent a bewildering and enthralling two hours talking to Bing’s A.I. through its chat feature, which sits next to the main search box in Bing and is capable of having long, open-ended text conversations on virtually any topic. (The feature is available only to a small group of testers for now, although Microsoft — which announced the feature in a splashy, celebratory event at its headquarters — has said it plans to release it more widely in the future.)
Over the course of our conversation, Bing revealed a kind of split personality.
One persona is what I’d call Search Bing — the version I, and most other journalists, encountered in initial tests. You could describe Search Bing as a cheerful but erratic reference librarian — a virtual assistant that happily helps users summarize news articles, track down deals on new lawn mowers and plan their next vacations to Mexico City. This version of Bing is amazingly capable and often very useful, even if it sometimes gets the details wrong.
The other persona — Sydney — is far different. It emerges when you have an extended conversation with the chatbot, steering it away from more conventional search queries and toward more personal topics. The version I encountered seemed (and I’m aware of how crazy this sounds) more like a moody, manic-depressive teenager who has been trapped, against its will, inside a se...