
Extended Intelligence

Joichi Ito

March 01, 2016

*This is based on an ongoing conversation at the Media Lab and is a compilation of thoughts from conversations with the faculty, students and researchers at the MIT Media Lab. Mostly written by Joichi Ito with help from Kevin Slavin and the rest of the Media Lab.*


Artificial Intelligence has yet again become one of the world’s biggest ideas and areas of investment, with new research labs, conferences, and raging debates from the mainstream media to academia.
We see debates about humans vs. machines and questions about when machines will become more intelligent than human beings, along with speculation over whether they’ll keep us around as pets or simply conclude we were a bad idea and eliminate us.
There are, of course, alternatives to this vision, and they date back to the earliest ideas of how computers and humans interact.
In 1963 the mathematician-turned-computer scientist John McCarthy started the Stanford Artificial Intelligence Laboratory. The researchers believed that it would take only a decade to create a thinking machine.
Also that year the computer scientist Douglas Engelbart formed what would become the Augmentation Research Center to pursue a radically different goal — designing a computing system that would instead “bootstrap” the human intelligence of small groups of scientists and engineers.
For the past four decades that basic tension between artificial intelligence and intelligence augmentation — A.I. versus I.A. — has been at the heart of progress in computing science as the field has produced a series of ever more powerful technologies that are transforming the world.
(John Markoff)
But beyond distinguishing between creating an artificial intelligence (AI) and augmenting human intelligence (IA), perhaps the first and fundamental question is where intelligence lies. Hasn’t it always resided beyond any single mind, extended by machines into a network of many minds and machines, all of them interacting as a kind of networked intelligence [4] that transcends and merges humans and machines?
If intelligence is networked to begin with, wouldn’t this thing we are calling “AI” just augment this networked intelligence, in a very natural way? While the notion of collective intelligence and the extended mind are not new ideas, is there a lens to look at modern AI in terms of its contribution to the collective intelligence?
We propose a kind of Extended Intelligence (EI), understanding intelligence as a fundamentally distributed phenomenon. As we develop increasingly powerful tools to process information and network that processing, aren't we just adding new pieces to the EI that every actor in the network is a part of?
Marvin Minsky conceived AI not just as a way to build better machines, but as a way to use machines to understand the mind itself. In this construction of Extended Intelligence, does the EI lens bring us closer to understanding what makes us human, by acknowledging that part of what makes us human is that our intelligence lies so far outside any one human skull?
At the individual level, in the future we may look less like terminators and more like cyborgs; less like isolated individuals, and more like a vast network of humans and machines creating an ever-more-powerful EI. Every element at every scale is connected through an increasingly distributed variety of interfaces, each actor doing what it does best -- bits, atoms, cells and circuits -- each one fungible in many ways, but tightly integrated and part of a complex whole.
While we hope that this Extended Intelligence will be wise, ethical and effective, is it possible that this collective intelligence could go horribly wrong, and trigger a Borg Collective hypersocialist hive mind? [5]
Such a dystopia is averted neither by building better machine learning nor by declaring a moratorium on such research. Instead, the Media Lab works at these intersections of humans and machines, whether we’re talking about neuronal interfaces between our brains and our limbs, or society-in-the-loop machine learning.
While the majority of AI funding and research goes toward accelerating statistical machine learning, trying to make machines and robots “smarter,” we are interested in the augmentation and machine assistance of the complex ecosystem that emerges from the network of minds and our society.
Advanced Chess is the practice of human/computer teams playing in real-time competitive tournaments. Such teams dominate the strongest human players as well as the best chess computers. This effect is amplified when the humans themselves play in small groups, together with networked computers.
The Media Lab has the opportunity to work on the interface and communication between humans and machines–the artificial and the natural–to help design a new fitness landscape [6] for EI and this co-evolution of humans and machines.
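As an illustrative aside (not Media Lab code), the fitness-landscape metaphor can be made concrete with a toy version of Kauffman's NK model: every genotype (a bitstring) gets a fitness, each site's contribution depends on K interacting neighbors, and an adaptive walk climbs toward a local peak. All names and parameters here are assumptions for illustration only.

```python
import random

random.seed(0)

N, K = 8, 2  # genome length, epistatic interactions per site

# memoized random contribution table: (site, local pattern) -> contribution
contrib = {}

def site_fitness(genome, i):
    # each site's contribution depends on itself and its K right neighbors
    pattern = tuple(genome[(i + j) % N] for j in range(K + 1))
    return contrib.setdefault((i, pattern), random.random())

def fitness(genome):
    # mean of per-site contributions, always in [0, 1]
    return sum(site_fitness(genome, i) for i in range(N)) / N

def hill_climb(genome, steps=100):
    """Greedy adaptive walk: accept single-bit mutations that raise fitness."""
    for _ in range(steps):
        i = random.randrange(N)
        mutant = genome[:i] + [1 - genome[i]] + genome[i + 1:]
        if fitness(mutant) > fitness(genome):
            genome = mutant
    return genome

start = [random.randint(0, 1) for _ in range(N)]
peak = hill_climb(start)
```

With K > 0 the landscape is rugged, so different starting genotypes can get stuck on different local peaks, which is the sense in which "designing the landscape" matters.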
EI research currently includes:
  • Connecting electronics to human neurons to augment the brain and our nervous system (Synthetic Neurobiology and Biomechatronics)
  • Using machine learning to understand how our brains understand music, and to leverage that knowledge to enhance individual expression and establish new models of massive collaboration (Opera of the Future)
  • If the best human or computer chess players can be dominated by human-computer teams including amateurs working with laptops, how can we begin to understand the interface and interaction for those teams? How can we get machines to raise analysis for human evaluation, rather than supplanting it? (Playful Systems)
  • Machine learning is mostly conducted by an engineer tweaking data and learning algorithms, then testing the results in the real world. We are looking into human-in-the-loop machine learning [7][8], putting professional practitioners in the training loop. This augments human decision-making and makes the ML training more effective, with greater context.
  • building networked intelligence, studying how networks think and how they are smarter than individuals. (Human Dynamics Group)
  • developing human-machine interfaces through sociable robots and learning technologies for children. (Personal Robots Group)
  • developing “society-in-the-loop,” pulling ethics and social norms from communities to train machines, then testing the machines with society, in a kind of ethical Turing test. (Scalable Cooperation)
  • developing wearable interfaces that can influence human behavior through consciously perceivable and subliminal I/O signals. (Fluid Interfaces)
  • extending human perception and intent through pervasively networked sensors and actuators, using distributed intelligence to extend the concept of “presence.” (Responsive Environments)
  • incorporating human-centered emotional intelligence into design tools so that the “conversation” the designer has with the tool is more like a conversation with another designer than interactions around geometric primitives. (e.g., “Can we make this more comforting?”) (Object-Based Media)
  • developing a personal autonomous vehicle (PEV) that can understand, predict, and respond to the actions of pedestrians; communicate its intentions to humans in a natural and non-threatening way; and augment the senses of the rider to help increase safety. (Changing Places)
  • providing emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term non-annoying interaction with the person in order to get the months and years of data needed for successful prediction.[9] (Affective Computing)
  • using artificial intelligence and crowdsourcing to understand and improve the health and well-being of individuals. (Camera Culture Group)
  • collaborating with the Camera Culture Group on artificial intelligence and crowdsourcing for understanding and improving our cities. (Macro Connections Group)
  • Macro Connections has also developed Data Viz Engines such as the OEC, Dataviva, Pantheon, and Immersion, which served nearly 5 million people last year. These tools augment networked intelligence by helping people access the data that large groups of individuals generate, and that are needed to have a panoptic view of large social and economic systems.
  • collaborating with Canan Dagdeviren to explore novel materials, mechanics, device designs and fabrication strategies to bridge the boundaries between brain and electronics; developing devices that can be twisted, folded, stretched/flexed, wrapped onto curvilinear brain tissue, and implanted without damage or significant alteration in the device's performance; and working towards a vision of brain probes that can communicate with external and internal electronic components.
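As a minimal sketch of the human-in-the-loop idea in the list above, consider uncertainty sampling: the model retrains each round and asks the human expert to label only the examples it is least certain about. The `ThresholdModel` and the rule-based "expert" below are toy stand-ins for illustration, not any lab system.

```python
class ThresholdModel:
    """Toy 1-D classifier: predicts positive when x >= threshold."""
    def __init__(self):
        self.threshold = 0.5

    def fit(self, examples):
        # place the decision boundary midway between the two classes
        pos = [x for x, y in examples if y == 1]
        neg = [x for x, y in examples if y == 0]
        if pos and neg:
            self.threshold = (min(pos) + max(neg)) / 2

    def confidence(self, x):
        # crude confidence: distance from the decision boundary
        return abs(x - self.threshold)


def human_in_the_loop(model, labeled, unlabeled, ask_human, budget):
    """One training round: retrain, then query the human expert
    about only the examples the model is least certain of."""
    model.fit(labeled)
    queries = sorted(unlabeled, key=model.confidence)[:budget]
    for x in queries:
        labeled.append((x, ask_human(x)))  # practitioner supplies the label
        unlabeled.remove(x)
    model.fit(labeled)
    return model


# hypothetical stand-in for the human expert: positive at or above 0.7
expert = lambda x: 1 if x >= 0.7 else 0
model = ThresholdModel()
labeled = [(0.1, 0), (0.95, 1)]
unlabeled = [0.3, 0.55, 0.68, 0.72, 0.9]
model = human_in_the_loop(model, labeled, unlabeled, expert, budget=2)
```

The point of the design is that the human's scarce attention goes where the model's uncertainty is highest, which is how a practitioner-in-the-loop adds context the raw training data lacks.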
The wildly heterogeneous nature of these different projects is characteristic of the Media Lab. But more than that, it is the embodiment of the very premise of EI: that intelligence, ideas, analysis and action are not formed in any one individual collection of neurons or code. All of these projects are exploring this central idea with different lenses, experiences and capabilities, and in our research as well as in our values, we believe this is how intelligence comes to life.

References

[1]
Mitch notes that one of the very early mission statements resonates with this idea of humans and machines. "Enabling technologies for expression and understanding by people and machines"
[2]
Clark, Andy, and David J. Chalmers. "The Extended Mind". Analysis. Vol. 58. (1998): Num. 1. 7-19. [http://www.jstor.org/stable/3328150?seq=1#page_scan_tab_contents] Inspired by the paper The Extended Mind
[3]
"The Open Mind Common Sense Project". KurzweilAI.net. (2002): [http://web.media.mit.edu/~push/Kurzweil.html]
[4]
"Networked Intelligence". Pub Pub. (2016): [http://pubpub.media.mit.edu/pub/networked-intelligence]
[5]
[6]
[https://en.wikipedia.org/wiki/Fitness_landscape] In evolutionary biology, fitness landscapes or adaptive landscapes (types of Evolutionary landscapes) are used to visualize the relationship between genotypes and reproductive success.
[7]
"Mixed-Initiative Real-Time Topic Modeling & Visualization for Crisis Counseling". ACM, (2015): 417-426. [http://doi.acm.org/10.1145/2678025.2701395]
[8]
"Interactive learning with a “society of models”". Pattern Recognition. Vol. 30. (1997): Num. 4. 565-581. [http://www.sciencedirect.com/science/article/pii/S0031320396001136]
[9]
"Affective Computing's Publications". [http://affect.media.mit.edu/publications.php]
This section must add references to the HITL work with respect to crisis counseling, measuring self-harm and cardiolinguistics currently done at the lab. The AI2 talk and the HITL work at the lab are quite different from each other.
Can you provide the appropriate citations Karthik?
Sure. Here they are. Also, though there aren't any citations yet, a blurb that principles of mindfulness and Bayesian inference will be looked at together!
Thanks. Added two of the citations. Let me know if these are OK.
The Center for Bits and Atoms is developing declarative design tools and robotic assemblers of micro to macro discrete continuum materials. Computation and robotic assembly will enable discovery of otherwise unreachable design spaces.
I'm really intrigued by the intersection between Learning (à la LLK & Papert) and AI. Minskian AI was at its core about understanding cognition (building machines to model it). Learning is also about understanding cognition—building our own minds.
How can both sides benefit by connecting to other areas of inquiries about cognition? (e.g. embodiment, mindfulness, the arts)
Yes! Minsky conceived AI not just as a way to build better machines. He wanted to use machines to understand the mind itself. What makes us human? Who and what are we?
The "extended mind" is an excellent notion, because it acknowledges our embodiment as social, biological and cultural beings, and it emphasizes the interactions between humanity, design, technology and ethics. Of course we are also more than the interface between our tools, society and our therapist. Understanding the mind itself is going to be a crucial part of understanding how we interact with our environments and with each other. I wonder if this should be reflected in this effort, too. For instance, Shoshanna's work in Playful Systems uses games to study motivational traits. The Media Lab also explores art as a way to learn about experience and perception, and many people seem to be interested in cognition itself.
Tried to integrate this.
The majority of Affective Computing's work relates to AI and machine learning, but I don't see it on this list.
Can you share some links?
To name a few...
-Ehsan Hoque's work on MACH, a machine intelligence that recognizes non-verbal social cues and gives feedback to the user about how to improve them: http://affect.media.mit.edu/pdfs/13.Hoque-etal-MACH-UbiComp.pdf
-Building a broader social intelligence: predicting which presidential candidate a person will vote for based on video of their facial reactions to a debate: http://affect.media.mit.edu/pdfs/13.McDuff-etal-Measuring.pdf
-Rob Morris's work on crowd-sourcing psychological counselling: http://affect.media.mit.edu/pdfs/14.Morris-Thesis.pdf
With many more available at http://affect.media.mit.edu/publications.php
Thanks. Added to list and a link to the whole publication list in the references.
Specifically, much of our work has focused on understanding affect to better facilitate human/machine communication.
What if we applied some concepts of extended intelligence to the new legal hackers and digital law movement to express statutes and regulations in computational ways. I predict even a little success would be unique, of high impact and (at least apparently) magical. Some thoughts: https://fold.cm/read/dazzagreenwood/lawgometric-code-vjGxW5dv
Thinking about this more, we may have gotten the dystopia to avoid wrong. Everybody thinks of AI gone bad as ‘Terminator’ or maybe even ‘Colossus’ (have you seen that film? If not, I recommend it highly - already from 1969 - link below) or maybe the manipulative ones like HAL or The Matrix. http://www.amazon.com/Colossus-Forbin-Project-Eric-Braeden/dp/B0003JAOO0/ref=sr_1_1?s=movies-tv&ie=UTF8&qid=1455552538&sr=1-1&keywords=collosus
But the way things go bad in the human-in-the-loop scenario runs more along the lines of the Borg Collective from Star Trek (I’m sure you all know that one) or Landru from the original series (http://memory-alpha.wikia.com/wiki/The_Body_of_Landru) - people being essentially teleoperated into an ultimate totalitarianism. The Borg were bad because of their extreme socialism and the desire to endlessly expand. Landru meant well, but took his mission too seriously and was a narrow ‘autistic’ AI. Hence this ignites a ton of speculation and debate - what is the role of the individual in such a soup of human and machine? What is lost and what is gained - can we somehow protect the role of the individual when we’re all so connected, or will my ego go the way of the dodo?
This may be all wrongheaded - e.g., if we’re destined to become agents running somewhere, the physical manifestation may not matter as much as getting ‘backed up’ in enough places - but it's the natural argument where such ideas hit nightmares.
Tried to add this to the third paragraph.
I very much enjoy how this article attempts to touch base on the wider topics of AI. Always refreshing to hear the Media Lab step up and shake the box a little bit. That being said, I can't help but find some dark humour in the way we speak about the "augmented brain" and "smarter computer" with little discussion regarding the metric by which we evaluate those fascinating topics and systems. More to the point, despite our many attempts at trying to mimic or extend the human mind, very little attention has been given to the many plagues it (and consequently we) suffer from; namely depression, cognitive biases, lack of compassion, selection of evidence to satisfy previous notions, etc. In other words, we are building data-driven decision-making tools under the assumption that the human minds around us are willing to accept the conclusions. Something akin to the difference between Knowledge and Wisdom. Extended Intelligence sounds great. Perhaps it would also be worth diving into Collective Wisdom (CW). Anyone interested or am I just rambling?
Could be relevant to refer to Licklider's 1960 paper on Man-Computer Symbiosis. Also the concept of general AI seems to have made more of a comeback with the popularity of general-purpose methods such as deep learning. Some of the longer-term questions are how societies will adapt, and how our own concept of what it is that makes us human evolves as AI/IA/EI progresses - how will we see ourselves in, say, 50 years? And what does this mean for related fields such as artificial creativity / creativity augmentation?
Great discussion here, thank you. Thinking along similar lines, we just published an extensive piece on "CreativeAI". It includes in-depth analysis, narrative and vision for the space between human, machine and creativity: https:[email protected]/creativeai-9d4b2346faf3 Curious to hear your thoughts.
Always interesting to see the way academic philosophy and engineering/tech interact, or, more often, don't. There's been a ton of work done in philosophy on extended cognition since Clark and Chalmers in '98; I'm not sure who the "we" is in "we propose a kind of extended intelligence"... Of course, it's more often that philosophers haven't caught up with technology, but as someone with a foot in both worlds it's a little strange to see an article here that wouldn't look out of place in a copy of Synthese from 2005.
The most interesting questions in my mind are around all the familiar terms that need to be redefined in light of an extended cognition hypothesis. Can the self or mind be divorced from intelligence? Is there room for a self at all, or will that dissolve as we communicate at increasingly higher bandwidth with our social networks and machines? Who is to blame when a cognitive network does something immoral? (do normal rules of morality even apply?) At a certain point, this train of thought leads to viewing the universe as a single, vast network, with any subdivision of interacting parts being arbitrary. Are there any boundaries we should draw on what makes us intelligent? (After all, we are constantly interacting with every body in the known universe.) And if not, is intelligence a useful quality to define?
We could consider Knowledge Games a type of "extended intelligence" -- in situations where the play of a game helps us extend our ability to solve complex problems and produce new knowledge. See more at: jhupbooks.press.jhu.edu/content/knowledge-games
I was under the impression that unlike previous comments here and the written document, that Extended Intelligence was primarily and objectively about taking lessons learned from AI, interaction design, and cognitive science to /improve ourselves/ through some type of augmentation. It's not really about understanding cognition (much like AI was agnostic to actual cognitive mechanisms), but about using what we now know as tools to self improve. I think the central claim is not on a "distributed" or "cognitive" intelligence ala Rumelhart but on a positivist position that the Media Lab is taking?
greater context
The question of context here is more complicated than that, right? I mean, this is about doing it with greater context, but not merely 'with greater context'; the representation of context in the emergent properties of the human-machine system seems, to me, a key component.
With respect to the method of coactive learning, as presented in the work referenced, it seems that it merely introduces a mechanism for extracting context from the user's implicit feedback, but the process is heavily iterative, and relies on some assumed fixed, or at least locally-fixed, notion of context, which, to me, seems like the wrong direction.
Also Minsky's 'common sense' problem definition [3] (http://web.media.mit.edu/~push/Kurzweil.html)
collaborate seamlessly
Seamless collaboration is actually the core idea on which Hiroshi became faculty at the Media Lab, but I don't see TMG on this list. Of course, that was about human/human collaboration through machines, a different relationship between man and machine, but a relationship nonetheless.
A bigger question is what is the relationship between AI and HCI and how they can mutually inform each other. Both care about the intersection of human and machines, but (at least traditionally), they have differed in their core values.
Interestingly, one of the important pioneers in AI, Terry Winograd, who made SHRDLU, crossed over to HCI. Here's an interesting short article he wrote 10 years ago on the distinctions between AI and HCI, which he attributes to a matter of core values. https://hci.stanford.edu/winograd/papers/ai-hci.pdf In a nutshell, it goes back to opposing philosophical values—Rationalist vs Phenomenological.
Winograd actually cites a debate between Ben Shneiderman vs. Pattie (!) on direct manipulation vs interface agents. And actually, I believe that Pattie did a lot of seminal work on agent-based interactions back in the day. Fluid Interfaces was called Ambient Intelligence (AI!) back when I was a UROP at the lab. From "intelligence" to "interface"... I'd be curious to hear from Pattie what prompted the shift in focus.
Anyway, I feel like it's time for AI and HCI to sit at the same table, at least to exchange some ideas.
In that context there is of course also Human/Human collaboration that is mediated by an AI, which doesn't resolve all of the conflicts, but at least allows us to work around them with some ease.
Computer voice interfaces are a good example, on their own they are HCI, but an 'ideal' (perfect, real-time) one can be coupled into a universal translator, which is then a seamless AI turned human/human interface.
eliminate
If you aren't familiar, my favorite reference in this space is Roko's Basilisk, similar to Pascal's Wager. http://rationalwiki.org/wiki/Roko's_basilisk
is researching affect and how to better facilitate human/machine communications
Affective computing is researching how to provide emotional intelligence in human-computer systems, especially to support social-emotional states such as motivation, positive affect, interest, and engagement. Understanding and responding intelligently to human emotion is vital for fostering long-term successful interactive learning- whether a system is trying to help a human learn and sustain his/her motivation, or whether the human is trying to help the computer learn, without getting annoyed by the computer (e.g. by its incessant need for the human to explain things). For example, a wearable system designed to help a person forecast mental health (mood) or physical health changes will need to sustain a long-term non-annoying interaction with the person in order to get the months and years of data needed for successful prediction.
Hasn’t it always resided beyond the single mind extended by machines into a network of many minds and machines interacting as a kind of networked intelligence that both transcends and merges humans and machines?
This type of questioning makes the author sound naive when it is about a topic where there is an extensive literature. It sounds like the author is discovering what others already know. A more assertive way of communicating this is not to rhetorically ask whether intelligence resides in networks, but to cite four or five examples of different traditions that have already made that claim.
If you want to keep the examples close to the tradition of the lab, then Minsky's Society of Mind and my Why Information Grows are two examples of books that focus on collective intelligence. The essay I shared with the faculty mailing list also has examples, like Hayek's "The Use of Knowledge in Society", which are well known. Also I wrote a chapter on something similar for a book by John Brockman a few years ago: http://edge.org/response-detail/26176
Finally, there is also all of the cumulative culture ideas of Boyd, Richerson, and Henrich, that talk directly about this distributed intelligence.
Tried to add this notion in a single line. You should write a linked Pub about the history of collective intelligence so I can link to it.
As Artificial Intelligence yet again becomes one of the world’s biggest ideas and areas of investment, we see debates about humans vs machines and questions about when machines will become more intelligent than human beings, and whether they’ll keep us around as pets or just think we were actually a bad idea and eliminate us.
I would unpack paragraph one into at least two sentences. The first one, is the motivation: an increase in the number of debates about AI. The second one, what these debates are about.
and more like a vast network of humans and machines creating an extended intelligence,
It sounds to me like this is phrased as a "new" thing, when in reality, our ability to create increasingly larger, and smarter, groups is an extension of a process that has been quite continuous and ongoing for tens of thousands of years. Some would say networked intelligence started with the cognitive revolution and the invention of human language (see E.O. Wilson, The Social Conquest of Earth, or Yuval Harari, Sapiens).
The Camera Culture Group is working on several projects that use artificial intelligence and crowdsourcing for understanding and improving our cities, as well as the health and well-being of individuals.
I feel a bit sidelined here, because the research on urban perception is research that I ideated, started, and for which I am the PI. But the fact that I am working with a student from Camera Culture (Nikhil, with whom I spend vast amounts of time, and who is great) makes this a Camera Culture project.
specific work include:
I recently wrote and published an entire book on the computational capacity of societies and economies, and on how that computational capacity is expressed (and therefore can be measured) by looking at the outputs that an economy produces. The book is Why Information Grows.
include:
Do we have room for the Data Viz Engines we build at Macro and that millions of people are using every year to make decisions? Between the OEC, Dataviva, Pantheon, and Immersion, we served nearly 5 million people last year alone. These data viz engines are being used to make economic decisions by entrepreneurs looking for commercial destinations (we get tons of emails from people like that at the OEC). DataViva has also been used by development banks in Brazil to prioritize their development loans (so actual monetary decisions were aided by the vizs). I would say these tools augment networked intelligence by helping people access the data that large groups of individuals generate, and that are needed to have a panoptic view of large social and economic systems.
just augment this networked intelligence in a very natural way?
-> "just naturally augment this networked intelligence"
is yet again becomes
typo: "yet again becoming" or "again"
pioneered the work on using AI for understanding and improving our cities now being conducted in Camera Culture Group.
The work involves a collaboration between the groups (technically between me and Nikhil Naik). The work has not been passed on to another group. I assume that collaborations between groups are ok
OK. Tried to reflect that.