Journal 09/30/2016 (p.m.)

    • Machines that truly understand language would be incredibly useful. But we don’t know how to build them.
    • On move 37, AlphaGo chose to put a black stone in what seemed, at first, like a ridiculous position. It looked certain to give up substantial territory—a rookie mistake in a game that is all about controlling the space on the board. Two television commentators wondered if they had misread the move or if the machine had malfunctioned somehow. In fact, contrary to any conventional wisdom, move 37 would enable AlphaGo to build a formidable foundation in the center of the board. The Google program had effectively won the game using a move that no human would’ve come up with.
    • Whereas chess players can look a few moves ahead, in Go the possibilities quickly unfold into intractable complexity, and there are no classic gambits. There is also no straightforward way to measure advantage, and it can be hard for even an expert player to explain precisely why he or she made a particular move. This makes it impossible to write a simple set of rules for an expert-level computer program to follow.
    • AlphaGo wasn’t told how to play Go at all. Instead, the program analyzed hundreds of thousands of games and played millions of matches against itself. Among several AI techniques, it used an increasingly popular method known as deep learning, which involves mathematical calculations inspired, very loosely, by the way interconnected layers of neurons fire in a brain as it learns to make sense of new information. The program taught itself through hours of practice, gradually honing an intuitive sense of strategy.
    • AlphaGo’s surprising success points to just how much progress has been made in artificial intelligence over the last few years, after decades of frustration and setbacks often described as an “AI winter.”
    • Yet despite these impressive advances, one fundamental capability remains elusive: language. Systems like Siri and IBM’s Watson can follow simple spoken or typed commands and answer basic questions, but they can’t hold a conversation and have no real understanding of the words they use. If AI is to be truly transformative, this must change.
    • Whether machines come to understand language will help determine whether we have machines we can easily communicate with—machines that become an intimate part of our everyday life—or whether AI systems remain mysterious black boxes, even as they become more autonomous. “There’s no way you can have an AI system that’s humanlike that doesn’t have language at the heart of it,” says Josh Tenenbaum, a professor of cognitive science and computation at MIT. “It’s one of the most obvious things that set human intelligence apart.”
    • But without language understanding, the impact of AI will be different. Of course, we can still have immensely powerful and intelligent software like AlphaGo. But our relationship with AI may be far less collaborative and perhaps far less friendly.
    • “A nagging question since the beginning was ‘What if you had things that were intelligent in the sense of being effective, but not like us in the sense of not empathizing with what we are?’” says Terry Winograd, a professor emeritus at Stanford University. “You can imagine machines that are not based on human intelligence, which are based on this big-data stuff, and which run the world.”
    • Not everyone was convinced that language could be so easily mastered, though. Some critics, including the influential linguist and MIT professor Noam Chomsky, felt that the AI researchers would struggle to get machines to understand, given that the mechanics of language in humans were so poorly understood.
    • Winograd wanted to create something that really seemed to understand language. He began by reducing the scope of the problem. He created a simple virtual environment, a “block world,” consisting of a handful of imaginary objects sitting on an imaginary table. Then he created a program, which he named SHRDLU, that was capable of parsing all the nouns, verbs, and simple rules of grammar needed to refer to this stripped-down virtual world. SHRDLU (a nonsense word formed by the second column of keys on a Linotype machine) could describe the objects, answer questions about their relationships, and make changes to the block world in response to typed commands. It even had a kind of memory, so that if you told it to move “the red cone” and then later referred to “the cone,” it would assume you meant the red one rather than one of another color.
    • SHRDLU was held up as a sign that the field of AI was making profound progress. But it was just an illusion. When Winograd tried to make the program’s block world larger, the rules required to account for the necessary words and grammatical complexity became unmanageable. Just a few years later, he had given up, and eventually he abandoned AI altogether to focus on other areas of research. “The limitations were a lot closer than it seemed at the time,” he says.

      Winograd concluded that it would be impossible to give machines true language understanding using the tools available then. The problem, as Hubert Dreyfus, a professor of philosophy at UC Berkeley, argued in a 1972 book called What Computers Can’t Do, is that many things humans do require a kind of instinctive intelligence that cannot be captured with hard-and-fast rules. This is precisely why, before the match between Lee Sedol and AlphaGo, many experts were dubious that machines would master Go.

    • But even as Dreyfus was making that argument, a few researchers were, in fact, developing an approach that would eventually give machines this kind of intelligence. Taking loose inspiration from neuroscience, they were experimenting with artificial neural networks—layers of mathematically simulated neurons that could be trained to fire in response to certain inputs. To begin with, these systems were painfully slow, and the approach was dismissed as impractical for logic and reasoning. Crucially, though, neural networks could learn to do things that couldn’t be hand-coded, and later this would prove useful for simple tasks such as recognizing handwritten characters, a skill that was commercialized in the 1990s for reading the numbers on checks. Proponents maintained that neural networks would eventually let machines do much, much more. One day, they claimed, the technology would even understand language.
    • In the 1980s, researchers came up with a clever idea about how to turn language into the type of problem a neural network can tackle. They showed that words can be represented as mathematical vectors, allowing similarities between related words to be calculated. For example, “boat” and “water” are close in vector space even though they look very different. Researchers at the University of Montreal, led by Yoshua Bengio, and another group at Google have used this insight to build networks in which each word in a sentence can be used to construct a more complex representation—something that Geoffrey Hinton, a professor at the University of Toronto and a prominent deep-learning researcher who works part-time at Google, calls a “thought vector.” (A rough sketch of the word-vector idea follows this group of excerpts.)
    • Google is already teaching its computers the basics of language. This May the company announced a system, dubbed Parsey McParseface, that can look at syntax, recognizing nouns, verbs, and other elements of text. It isn’t hard to see how valuable better language understanding could be to the company. Google’s search algorithm used to simply track keywords and links between Web pages. Now, using a system called RankBrain, it reads the text on pages in an effort to glean meaning and deliver better results. Quoc Le, a Google researcher, wants to take that much further. Adapting the system that’s proved useful in translation and image captioning, he and his colleagues built Smart Reply, which reads the contents of Gmail messages and suggests a handful of possible replies.
    • Most recently, Le built a program capable of producing passable responses to open-ended questions; it was trained by being fed dialogue from 18,900 movies. Some of its replies seem eerily spot-on. For example, Le asked, “What is the purpose of life?” and the program responded, “To serve the greater good.” “It was a pretty good answer,” he remembers with a big grin. “Probably better than mine would have been.”
    • There’s only one problem, as quickly becomes apparent when you look at more of the system’s answers. When Le asked, “How many legs does a cat have?” his system answered, “Four, I think.” Then he tried, “How many legs does a centipede have?” which produced a curious response: “Eight.” Basically, Le’s program has no idea what it’s talking about. It understands that certain combinations of symbols go together, but it has no appreciation of the real world. It doesn’t know what a centipede actually looks like, or how it moves. It is still just an illusion of intelligence, without the kind of common sense that humans take for granted. Deep-learning systems can often be wonky this way. The one Google created to generate captions for images would make bizarre errors, like describing a street sign as a refrigerator filled with food.
    • Fei-Fei Li led an effort to build a database of millions of images of objects, each tagged with an appropriate keyword. But Li believes machines need an even more sophisticated understanding of what’s happening in the world, and this year her team released another database of images, annotated in much richer detail. Each image has been tagged by a human with dozens of descriptors: “A dog riding a skateboard,” “Dog has fluffy, wavy fur,” “Road is cracked,” and so on. The hope is that machine-learning systems will learn to understand more about the physical world. “The language part of the brain gets fed a lot of information, including from the visual system,” Li says. “An important part of AI will be integrating these systems.”
    • This is closer to the way children learn, by associating words with objects, relationships, and actions.
    • But the analogy with human learning goes only so far. Young children do not need to see a skateboarding dog to be able to imagine or verbally describe one. Indeed, Li believes that today’s machine-learning and AI tools won’t be enough to bring about real AI. “It’s not just going to be data-rich deep learning,” she says. Li believes AI researchers will need to think about things like emotional and social intelligence. “We [humans] are terrible at computing with huge data,” she says, “but we’re great at abstraction and creativity.”
    • Google’s latest advance in machine learning could make the world a little smaller.

      The company is reëngineering its translation service after Google researchers invented a system that is significantly more accurate. In a competition that pitted the new software against human translators, it came close to matching the fluency of humans for some languages, such as when translating from English to Spanish.
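
      As a rough illustration of the word-vector idea in the excerpts above: related words end up near one another in vector space, so their similarity can be computed with a simple cosine measure. The short sketch below uses three invented 4-dimensional vectors purely for illustration (real embeddings such as word2vec or GloVe are learned from large text corpora and have hundreds of dimensions), so treat it as a toy example rather than anything from Google’s or Bengio’s systems.

          import numpy as np

          # Toy word vectors; the numbers are made up for illustration only.
          embeddings = {
              "boat":  np.array([0.9, 0.8, 0.1, 0.0]),
              "water": np.array([0.8, 0.9, 0.2, 0.1]),
              "king":  np.array([0.1, 0.0, 0.9, 0.8]),
          }

          def cosine_similarity(a, b):
              # 1.0 means the vectors point the same way; values near 0 mean unrelated.
              return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

          print(cosine_similarity(embeddings["boat"], embeddings["water"]))  # high: related words
          print(cosine_similarity(embeddings["boat"], embeddings["king"]))   # low: unrelated words

      In this picture, a “thought vector” would be built by combining the word vectors of a whole sentence into a single, richer representation.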

    • Google’s new translation system was built using a technique known as deep learning, which uses networks of math functions loosely inspired by studies of mammalian brains (see “10 Breakthrough Technologies 2013: Deep Learning”). It triggered the recent flood of investment in artificial intelligence by producing striking progress in areas such as image and speech recognition.
    • Amazon’s voice-controlled computer, Alexa, can be surprisingly useful for simple tasks like checking the weather or listening to a song. But it’s hardly a great conversationalist.

      A new $2.5 million prize announced by the e-commerce giant Thursday is meant to help make Alexa a bit chattier. Winning the prize, however, will require a pretty significant leap in machine understanding of language.

    • Still, language remains extremely difficult for machines because of its complexity, ambiguity, and the way it taps into common sense. A recent contest involving ambiguous sentences showed that machines are still a long way from matching a person’s ability to instantly decode that ambiguity.

      In other words, don’t expect Alexa to talk your ear off anytime soon.

    • We should know by now that Trump isn’t going to just self-destruct. Lord knows he’s tried.

Posted from Diigo. The rest of my favorite links are here.
