The Society of Mind

Paperback – Box set, March 15, 1988

Price
$22.00
Format
Paperback
Pages
336
Publisher
Simon & Schuster
Publication Date
March 15, 1988
ISBN-13
978-0671657130
Dimensions
8.5 x 0.7 x 11 inches
Weight
1.68 pounds

Description

For some artificial intelligence researchers, Minsky's book is too far removed from hard science to be useful. For others, the high-level approach of The Society of Mind makes it a gold mine of ideas waiting to be implemented. The author, one of the undisputed fathers of the discipline of AI, sets out to provide an abstract model of how the human mind really works. His thesis is that our minds consist of a huge aggregation of tiny mini-minds or agents that have evolved to perform highly specific tasks. Most of these agents lack the attributes we think of as intelligence and are severely limited in their ability to intercommunicate. Yet rational thought, feeling, and purposeful action result from the interaction of these basic components. Minsky's theory does not suggest a specific implementation for building intelligent machines. Still, this book may prove to be one of the most influential for the future of AI.

Praise

  • Isaac Asimov, Information Week: "270 brilliantly original essays on...how the mind works."
  • Douglas Hofstadter, author of Gödel, Escher, Bach and Metamagical Themas: "A stunning collage of staccato images, filled to the brim with witty insights and telling aphorisms."
  • The New York Times Book Review: "INGENIOUS...STIMULATING...crisp, packed with quips, aphorisms and homely illustrations. A pleasure to read...It will make you think. And that's what brains are for."
  • Michael Crichton, author of The Andromeda Strain: "PROVOCATIVE, DELIGHTFUL, CHALLENGING, a rich, funny and altogether fascinating book."
  • Martin Gardner, The Boston Sunday Globe: "SPARKLING WITH JOKES and apt quotations...and rich insights."
  • Gene Roddenberry, creator of Star Trek: "A REMARKABLE BOOK....I am grateful that Marvin Minsky was my tour guide on this journey in the realms of my own consciousness."
  • San Jose Mercury News: "SCATTERED WITH GEMS....Liable to be influential far beyond the narrow researches of artificial intelligence."
  • Professor Guy Cellerier, Genetic Artificial Intelligence and Epistemics Laboratory, University of Geneva: "A PROFOUND AND FASCINATING BOOK that lays down the foundations for the solution of one of the last great problems of modern science....Marks a new era."

About the Author

Marvin Minsky is Toshiba Professor of Media Arts and Sciences, and Professor of Electrical Engineering and Computer Science at the Massachusetts Institute of Technology. His research has led to many advances in artificial intelligence, psychology, physical optics, mathematics, and the theory of computation. He has made major contributions in the domains of computer graphics, knowledge and semantics, machine vision, and machine learning. He has also been involved with technologies for space exploration. Professor Minsky is one of the pioneers of intelligence-based robotics. He designed and built some of the first mechanical hands with tactile sensors, visual scanners, and their software and interfaces. In 1951 he built the first neural-network learning machine. With John McCarthy he founded the MIT Artificial Intelligence Laboratory in 1959. He has written seminal papers in the fields of artificial intelligence, perception, and language. His book The Society of Mind contains hundreds of ideas about the mind, many of which he continued to develop in later work.

Excerpt. © Reprinted by permission. All rights reserved.

CHAPTER 1: PROLOGUE

Everything should be made as simple as possible, but not simpler. -- Albert Einstein

This book tries to explain how minds work. How can intelligence emerge from nonintelligence?
To answer that, we'll show that you can build a mind from many little parts, each mindless by itself.

I'll call "Society of Mind" this scheme in which each mind is made of many smaller processes. These we'll call agents. Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies -- in certain very special ways -- this leads to true intelligence.

There's nothing very technical in this book. It, too, is a society -- of many small ideas. Each by itself is only common sense, yet when we join enough of them we can explain the strangest mysteries of mind.

One trouble is that these ideas have lots of cross-connections. My explanations rarely go in neat, straight lines from start to end. I wish I could have lined them up so that you could climb straight to the top, by mental stair-steps, one by one. Instead they're tied in tangled webs.

Perhaps the fault is actually mine, for failing to find a tidy base of neatly ordered principles. But I'm inclined to lay the blame upon the nature of the mind: much of its power seems to stem from just the messy ways its agents cross-connect. If so, that complication can't be helped; it's only what we must expect from evolution's countless tricks.

What can we do when things are hard to describe? We start by sketching out the roughest shapes to serve as scaffolds for the rest; it doesn't matter very much if some of those forms turn out partially wrong. Next, draw details to give these skeletons more lifelike flesh. Last, in the final filling-in, discard whichever first ideas no longer fit.

That's what we do in real life, with puzzles that seem very hard. It's much the same for shattered pots as for the cogs of great machines. Until you've seen some of the rest, you can't make sense of any part.

1.1 THE AGENTS OF THE MIND

Good theories of the mind must span at least three different scales of time: slow, for the billion years in which our brains have evolved; fast, for the fleeting weeks and months of infancy and childhood; and in between, the centuries of growth of our ideas through history.

To explain the mind, we have to show how minds are built from mindless stuff, from parts that are much smaller and simpler than anything we'd consider smart. Unless we can explain the mind in terms of things that have no thoughts or feelings of their own, we'll only have gone around in a circle. But what could those simpler particles be -- the "agents" that compose our minds?

This is the subject of our book, and knowing this, let's see our task. There are many questions to answer.

Function: How do agents work?
Embodiment: What are they made of?
Interaction: How do they communicate?
Origins: Where do the first agents come from?
Heredity: Are we all born with the same agents?
Learning: How do we make new agents and change old ones?
Character: What are the most important kinds of agents?
Authority: What happens when agents disagree?
Intention: How could such networks want or wish?
Competence: How can groups of agents do what separate agents cannot do?
Selfness: What gives them unity or personality?
Meaning: How could they understand anything?
Sensibility: How could they have feelings and emotions?
Awareness: How could they be conscious or self-aware?

How could a theory of the mind explain so many things, when every separate question seems too hard to answer by itself? These questions all seem difficult, indeed, when we sever each one's connections to the other ones.
But once we see the mind as a society of agents, each answer will illuminate the rest.

1.2 THE MIND AND THE BRAIN

It was never supposed [the poet Imlac said] that cogitation is inherent in matter, or that every particle is a thinking being. Yet if any part of matter be devoid of thought, what part can we suppose to think? Matter can differ from matter only in form, bulk, density, motion and direction of motion: to which of these, however varied or combined, can consciousness be annexed? To be round or square, to be solid or fluid, to be great or little, to be moved slowly, or swiftly one way or another, are modes of material existence, all equally alien from the nature of cogitation. If matter be once without thought, it can only be made to think by some new modification, but all the modifications which it can admit are equally unconnected with cogitative powers. -- Samuel Johnson

How could solid-seeming brains support such ghostly things as thoughts? This question troubled many thinkers of the past. The world of thoughts and the world of things appeared to be too far apart to interact in any way. So long as thoughts seemed so utterly different from everything else, there seemed to be no place to start.

A few centuries ago it seemed equally impossible to explain Life, because living things appeared to be so different from anything else. Plants seemed to grow from nothing. Animals could move and learn. Both could reproduce themselves -- while nothing else could do such things. But then that awesome gap began to close. Every living thing was found to be composed of smaller cells, and cells turned out to be composed of complex but comprehensible chemicals. Soon it was found that plants did not create any substance at all but simply extracted most of their material from gases in the air. Mysteriously pulsing hearts turned out to be no more than mechanical pumps, composed of networks of muscle cells. But it was not until the present century that John von Neumann showed theoretically how cell-machines could reproduce while, almost independently, James Watson and Francis Crick discovered how each cell actually makes copies of its own hereditary code. No longer does an educated person have to seek any special, vital force to animate each living thing.

Similarly, a century ago, we had essentially no way to start to explain how thinking works. Then psychologists like Sigmund Freud and Jean Piaget produced their theories about child development. Somewhat later, on the mechanical side, mathematicians like Kurt Gödel and Alan Turing began to reveal the hitherto unknown range of what machines could be made to do. These two streams of thought began to merge only in the 1940s, when Warren McCulloch and Walter Pitts began to show how machines might be made to see, reason, and remember. Research in the modern science of Artificial Intelligence started only in the 1950s, stimulated by the invention of modern computers. This inspired a flood of new ideas about how machines could do what only minds had done previously.

Most people still believe that no machine could ever be conscious, or feel ambition, jealousy, humor, or have any other mental life-experience. To be sure, we are still far from being able to create machines that do all the things people do. But this only means that we need better theories about how thinking works. This book will show how the tiny machines that we'll call "agents of the mind" could be the long sought "particles" that those theories need.
1.3 THE SOCIETY OF MIND

You know that everything you think and do is thought and done by you. But what's a "you"? What kinds of smaller entities cooperate inside your mind to do your work? To start to see how minds are like societies, try this: pick up a cup of tea!

Your GRASPING agents want to keep hold of the cup.
Your BALANCING agents want to keep the tea from spilling out.
Your THIRST agents want you to drink the tea.
Your MOVING agents want to get the cup to your lips.

Yet none of these consume your mind as you roam about the room talking to your friends. You scarcely think at all about Balance; Balance has no concern with Grasp; Grasp has no interest in Thirst; and Thirst is not involved with your social problems. Why not? Because they can depend on one another. If each does its own little job, the really big job will get done by all of them together: drinking tea.

How many processes are going on, to keep that teacup level in your grasp? There must be at least a hundred of them, just to shape your wrist and palm and hand. Another thousand muscle systems must work to manage all the moving bones and joints that make your body walk around. And to keep everything in balance, each of those processes has to communicate with some of the others. What if you stumble and start to fall? Then many other processes quickly try to get things straight. Some of them are concerned with how you lean and where you place your feet. Others are occupied with what to do about the tea: you wouldn't want to burn your own hand, but neither would you want to scald someone else. You need ways to make quick decisions.

All this happens while you talk, and none of it appears to need much thought. But when you come to think of it, neither does your talk itself. What kinds of agents choose your words so that you can express the things you mean? How do those words get arranged into phrases and sentences, each connected to the next? What agencies inside your mind keep track of all the things you've said -- and, also, whom you've said them to? How foolish it can make you feel when you repeat -- unless you're sure your audience is new.

We're always doing several things at once, like planning and walking and talking, and this all seems so natural that we take it for granted. But these processes actually involve more machinery than anyone can understand all at once. So, in the next few sections of this book, we'll focus on just one ordinary activity -- making things with children's building-blocks. First we'll break this process into smaller parts, and then we'll see how each of them relates to all the other parts.

In doing this, we'll try to imitate how Galileo and Newton learned so much by studying the simplest kinds of pendulums and weights, mirrors and prisms. Our study of how to build with blocks will be like focusing a microscope on the simplest objects we can find, to open up a great and unexpected universe. It is the same reason why so many biologists today devote more attention to tiny germs and viruses than to magnificent lions and tigers. For me and a whole generation of students, the world of work with children's blocks has been the prism and the pendulum for studying intelligence.

In science, one can learn the most by studying what seems the least.

1.4 THE WORLD OF BLOCKS

Imagine a child playing with blocks, and imagine that this child's mind contains a host of smaller minds. Call them mental agents. Right now, an agent called Builder is in control.
Builder's specialty is making towers from blocks.

Our child likes to watch a tower grow as each new block is placed on top. But building a tower is too complicated a job for any single, simple agent, so Builder has to ask for help from several other agents.

In fact, even to find another block and place it on the tower top is too big a job for any single agent. So Add, in turn, must call for other agents' help. Before we're done, we'll need more agents than would fit in any diagram.

Why break things into such small parts? Because minds, like towers, are made that way -- except that they're composed of processes instead of blocks. And if making stacks of blocks seems insignificant -- remember that you didn't always feel that way. When first you found some building toys in early childhood, you probably spent joyful weeks of learning what to do with them. If such toys now seem relatively dull, then you must ask yourself how you have changed. Before you turned to more ambitious things, it once seemed strange and wonderful to be able to build a tower or a house of blocks. Yet, though all grown-up persons know how to do such things, no one understands how we learn to do them! And that is what will concern us here.

To pile up blocks into heaps and rows: these are skills each of us learned so long ago that we can't remember learning them at all. Now they seem mere common sense -- and that's what makes psychology hard. This forgetfulness, the amnesia of infancy, makes us assume that all our wonderful abilities were always there inside our minds, and we never stop to ask ourselves how they began and grew.

1.5 COMMON SENSE

You cannot think about thinking, without thinking about thinking about something. -- Seymour Papert

We found a way to make our tower builder out of parts. But Builder is really far from done. To build a simple stack of blocks, our child's agents must accomplish all these other things.

See must recognize its blocks, whatever their color, size, and place -- in spite of different backgrounds, shades, and lights, and even when they're partially obscured by other things.

Then, once that's done, Move has to guide the arm and hand through complicated paths in space, yet never strike the tower's top or hit the child's face.

And think how foolish it would seem, if Find were to see, and Grasp were to grasp, a block supporting the tower top!

When we look closely at these requirements, we find a bewildering world of complicated questions. For example, how could Find determine which blocks are still available for use? It would have to "understand" the scene in terms of what it is trying to do. This means that we'll need theories both about what it means to understand and about how a machine could have a goal. Consider all the practical judgments that an actual Builder would have to make. It would have to decide whether there are enough blocks to accomplish its goal and whether they are strong and wide enough to support the others that will be placed on them.

What if the tower starts to sway? A real builder must guess the cause. Is it because some joint inside the column isn't square enough? Is the foundation insecure, or is the tower too tall for its width? Perhaps it is only because the last block was placed too roughly.

All children learn about such things, but we rarely ever think about them in our later years. By the time we are adults we regard all of this to be simple "common sense." But that deceptive pair of words conceals almost countless different skills. Common sense is not a simple thing.
Instead, it is an immense society of hard-earned practical ideas -- of multitudes of life-learned rules and exceptions, dispositions and tendencies, balances and checks.

If common sense is so diverse and intricate, what makes it seem so obvious and natural? This illusion of simplicity comes from losing touch with what happened during infancy, when we formed our first abilities. As each new group of skills matures, we build more layers on top of them. As time goes on, the layers below become increasingly remote until, when we try to speak of them in later life, we find ourselves with little more to say than "I don't know."

1.6 AGENTS AND AGENCIES

We want to explain intelligence as a combination of simpler things. This means that we must be sure to check, at every step, that none of our agents is, itself, intelligent. Otherwise, our theory would end up resembling the nineteenth-century "chess-playing machine" that was exposed by Edgar Allan Poe to actually conceal a human dwarf inside. Accordingly, whenever we find that an agent has to do anything complicated, we'll replace it with a subsociety of agents that do simpler things. Because of this, the reader must be prepared to feel a certain sense of loss. When we break things down to their smallest parts, they'll each seem dry as dust at first, as though some essence has been lost.

For example, we've seen how to construct a tower-building skill by making Builder from little parts like Find and Get. Now, where does its "knowing-how-to-build" reside when, clearly, it is not in any part -- and yet those parts are all that Builder is? The answer: It is not enough to explain only what each separate agent does. We must also understand how those parts are interrelated -- that is, how groups of agents can accomplish things.

Accordingly, each step in this book uses two different ways to think about agents. If you were to watch Builder work, from the outside, with no idea of how it works inside, you'd have the impression that it knows how to build towers. But if you could see Builder from the inside, you'd surely find no knowledge there. You would see nothing more than a few switches, arranged in various ways to turn each other on and off. Does Builder "really know" how to build towers? The answer depends on how you look at it. Let's use two different words, "agent" and "agency," to say why Builder seems to lead a double life. As agency, it seems to know its job. As agent, it cannot know anything at all.

When you drive a car, you regard the steering wheel as an agency that you can use to change the car's direction. You don't care how it works. But when something goes wrong with the steering, and you want to understand what's happening, it's better to regard the steering wheel as just one agent in a larger agency: it turns a shaft that turns a gear to pull a rod that shifts the axle of a wheel. Of course, one doesn't always want to take this microscopic view; if you kept all those details in mind while driving, you might crash because it took too long to figure out which way to turn the wheel. Knowing how is not the same as knowing why. In this book, we'll always be switching between agents and agencies because, depending on our purposes, we'll have to use different viewpoints and kinds of descriptions.

Copyright © 1985, 1986 by Marvin Minsky
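
Sections 1.4 through 1.6 of the excerpt describe Builder as an agency assembled from simpler agents such as Add, Find, and Get, none of them intelligent on its own. The short Python sketch below is only a minimal illustration of that decomposition, not anything the book proposes: Minsky explicitly offers no implementation, and the World class, the put step, and all other details here are assumptions invented for this example; only the agent names Builder, Add, Find, and Get come from the excerpt.

# A hypothetical sketch of the "society of agents" idea: a Builder agency
# that seems to "know how to build towers" from the outside, while inside
# it is only a few mindless parts calling one another.

class World:
    """Toy stand-in for the child's table: loose blocks and a growing tower."""
    def __init__(self, loose_blocks):
        self.loose_blocks = list(loose_blocks)   # blocks not yet in the tower
        self.tower = []                          # blocks stacked so far

def find(world):
    """FIND: notice some block that is still available for use."""
    return world.loose_blocks[0] if world.loose_blocks else None

def get(world, block):
    """GET: pick the block up (here, just remove it from the loose pile)."""
    world.loose_blocks.remove(block)
    return block

def put(world, block):
    """A hypothetical placing step: set the block on the tower top."""
    world.tower.append(block)

def add(world):
    """ADD: one round of find-get-put; knows nothing about towers as a whole."""
    block = find(world)
    if block is None:
        return False            # nothing left to add
    put(world, get(world, block))
    return True

def builder(world, height):
    """BUILDER: the agency; it only keeps asking ADD for help until the
    tower is tall enough."""
    while len(world.tower) < height and add(world):
        pass
    return world.tower

world = World(loose_blocks=["red", "blue", "green", "yellow"])
print(builder(world, height=3))   # ['red', 'blue', 'green']

Called from the outside, builder() looks as if it knows its job; read from the inside, it is nothing but simple parts switching one another on and off, which mirrors the agent/agency double view of section 1.6.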

Features & Highlights

  • Marvin Minsky -- one of the fathers of computer science and cofounder of the Artificial Intelligence Laboratory at MIT -- gives a revolutionary answer to the age-old question: "How does the mind work?" Minsky brilliantly portrays the mind as a "society" of tiny components that are themselves mindless. Mirroring his theory, Minsky boldly casts The Society of Mind as an intellectual puzzle whose pieces are assembled along the way. Each chapter -- on a self-contained page -- corresponds to a piece in the puzzle. As the pages turn, a unified theory of the mind emerges, like a mosaic. Ingenious, amusing, and easy to read, The Society of Mind is an adventure in imagination.

Customer Reviews

Rating Breakdown

★★★★★
60%
(121)
★★★★
25%
(51)
★★★
15%
(30)
★★
7%
(14)

Most Helpful Reviews

✓ Verified Purchase

Great Insight into possible mechanisms for the mind

This book is wonderfully readable! Each page is a chapter in itself describing possible mechanisms that cause and describe our being. If nothing else, this interesting read will make you think about thinking. Check it out!
9 people found this helpful
✓ Verified Purchase

Fantastic coffee table book

This isn't so much a book as it is a collection of essays. Each essay brings up a different idea of how the human brain completes tasks, however small they are. Each thing we do (breathe, walk, talk) is carried out by small actors in our brain, and those actors are controlled by even smaller processes and yadda yadda. It's hard to explain what this book talks about; you really have to study it. I've only read it once through, and I probably need to read it again to fully understand the big concepts.

But what I like about it is it's easy to walk away from for a while and pick it back up again. The essays are each only a few pages, so you can just read one a week, dwell on it, then read the next essay. It's a strange book, I wish I knew of more like it.
6 people found this helpful
✓ Verified Purchase

The Society of Mind

It has been said that Isaac Asimov once described Marvin Minsky as one of only two people in the world who were more intelligent than he. The other person was Carl Sagan.

Minsky, a pioneer of Artificial Intelligence, is a fascinating writer. This is a book that combines philosophy, educational and language theories, poetry and science into an elegant package.

"The Society of the Mind" is unlike any other book I have read about thinking, learning, or the brain, and I like it. No appearance is made in this book by the amygdala, frontal lobe, cerebral cortex or thalamus. Minsky's view is that thinking is made of many smaller processes which he calls "agents."

Each mental agent by itself can only do some simple thing that needs no mind or thought at all. Yet when we join these agents in societies--in certain very special ways--this leads to true intelligence.

The format of this book beautifully complements Minsky's style of writing, with each chapter functioning as a building block leading towards greater understanding.

In many ways the size, shape, layout and design of "The Society of Mind" remind me of the history books by Edwin Tunis that gave me endless hours of pleasure as a child.

Like Tunis's books, Minsky's one or two page long chapters, black and white illustrations and use of fascinating quotations to introduce each new concept, gently lead the reader forward to greater comprehension of a highly complex subject.

Although this is a book for a sophisticated audience, it is one that could easily be given as a gift to an inquisitive child or young adult who will ponder over it and learn to think big thoughts.

Kim Burdick
Stanton, Delaware
4 people found this helpful
✓ Verified Purchase

Cornucopia of Ideas

This book is different from other books on many different levels.

For instance, its organization parallels its subject. The main theme of the book is that the mind consists of a network of non-intelligent agents organized into more and more highly complex agencies, and that the function of these agencies, taken together, is to perform the mind (mind is thought of as a process and not a thing, the same way that in chemistry fire was eventually recognized as a process of oxidizing rather than an element). In the same way, each page of the book has a single concrete point to make or an idea to present. These ideas reference each other in a networked fashion. As a result, the meaning of the book emerges as you consider the networks of these ideas. This concept for organizing a book is as aesthetically pleasing as it is informative.

The book introduces a lot of different ideas to its readers - such as the currencies that the mental agencies use to measure the importance of various tasks or views. It talks about the educational value of humor - in a way I never considered. It talks about organization of memories.

One thing that I thought was oversimplified was Marvin Minsky's description of how genes affect the development of the brain. He seems to treat the subject as though the environment has nothing to do with it. For example, we know that a mother's consumption of alcohol can drastically affect the development of the fetal brain. What a mother eats during pregnancy, and even before, can also affect the development of the brain; consumption of Omega-3 fats, for example, seems to have a very strong effect on brain development. Brain development is also shaped by the experiences that the person has: vision will not develop in an infant that has not been exposed to light in its first weeks of life. I think that this is a rather weak point of the book.
4 people found this helpful
✓ Verified Purchase

Better than Freud

A clean basis for a new platform for understanding human psychology.
This is the way that computer science is also evolving. What Windows 98 and Linux lack is a personality to weave all the processes together.
The integration is the hard part.
4 people found this helpful
✓ Verified Purchase

Five Stars

Marvin Minsky, no comment needed
3 people found this helpful
✓ Verified Purchase

A font of meta-wisdom for anyone interested in artificial intelligence

If intelligent machines exist someday, no doubt they will think of Marvin Minsky, a pioneer at MIT's Artificial Intelligence Lab, as one of their progenitors.

This book reads as a collection of foundational ponderings from a luminary in the field, distilled to their instructive essence. Minsky assumes that a "mind" is not a single, efficient decision-making machine, but a collection of many such machines, which he calls agents. Each agent, on its own, is simple and specialized, but through a process of cooperation and competition with one another, the agents form a collective understanding of the world. This gives rise to an emergent intelligence guided by preprogrammed assumptions, but not constrained by them.

Minsky writes with a disarming simplicity, using the whimsical but versatile example of a child playing with building blocks to illustrate his points. How does a child's brain manage the many parallel, interrelated tasks inherent in such play, from fine motor control, to hand-eye coordination, to physical intuition, to forming structural knowledge from experience, to remaining motivated amid competing desires, to deciding when to knock the block towers down? Each chapter is a self-contained one or two page essay, exploring a different facet of how a network of separate, co-evolved agents might implement decision-making, pattern-recognition, task delegation, memory, learning, and problem-solving.

This is not a practical book to help you write AI software, to teach you about A*, Bayesian networks, or other relatively concrete techniques. It's a book to get you to think about what intelligence really is. But I think that any software developer with a genuine interest in the possibility of machine learning and machine thought will find it both illuminating and stimulating. While Minsky's zenlike thoughts are focused more on abstract ideas than technical details, they offer valuable starting points for breaking the densely tangled problem of "intelligence" down into more manageable layers and processes.

The essays that comprise each chapter build on each other in a somewhat linear order, but their shortness invites non-linear contemplation, which seems an appropriate way to explore the nature of thought. I think that some of Minsky's more advanced concepts amount to speculative castles in the air -- ideas that seem only loosely derived from neuroscience and might not pan out in real AI frameworks. Still, this 1988 book deals with questions so far ahead of the 2011 state of the art that its value lies less in the end product of its thought experiments and more in the experiments themselves. This isn't a work about artificial intelligence, but about how to think about artificial intelligence, and it's a defining classic. As a software developer, I look forward to proceeding into the world he envisioned.
3 people found this helpful
✓ Verified Purchase

The closest theory about the mind

I think this book is the closest and most encompassing theory about the workings of the mind, compared to other approaches in the field of AI like neural networks and expert systems. The book is full of ideas with an underlying general theory about the mind. This book makes me feel that the brain is like a huge organisation with different areas being specialised and communicating with each other, and resolving differences, while meeting its objectives.
I wish there were a sequel explaining an implementation model of the Society of Mind.
3 people found this helpful
✓ Verified Purchase

The original

Minsky is the first person to really lay out clearly that it is possible to develop a synthetic intelligence. He explains more or less how he would build a brain, without worrying too much about how ours actually works, but shows that his model fits well with casual observation.

From a technical perspective his ideas have been to some extent superseded by others like Kurzweil and Hawkins, but he was really the first to lay out that this can be done, and that our minds truly do function as a society of individuals rather than a single cohesive whole.
2 people found this helpful
✓ Verified Purchase

Reasonable instead of AI

Marvin Minsky is a great scientist, but he and the crowd are mistaken.
Intelligence can't be artificial; only a system can be.
My definition of a reasonable system is:
"A system is reasonable if it can define its own behavior by its own representation of the world accessible to it."
Cognition is an illusion. No entity can be called "conscious"; that term can serve only as a tag for systems satisfying the definition given above. My E-mail: [email protected]
2 people found this helpful