Here's a brainstorming technique based on deconstruction (see DeconstructionExample).
The goals of the brainstorm are to uncover hidden assumptions and devise new ways to move forward. The starting assumption is that you're bothered by the status quo or existing solutions, but you can't quite articulate what's wrong.
- Explain deconstruction via an example. I use Agre's argument about AI in Computation and Human Experience. (See below.) DeconstructionExample might also work for a patterns audience. Then explain the steps of the brainstorm.
- In one big group, get people started listing privileged/marginalized pairs. Stop after filling one sheet of flipchart paper.
- Break into groups of two. Each group keeps generating pairs until they find one they want to focus on.
- For that opposition, they ask these questions:
- What could a world be like in which the marginalized element were privileged?
- Are there ways in which the marginalized element keeps popping up into view, no matter what effort is made to disregard it? Should that be "the privileged element"? (No. For instance, you might discover that the privileged concept works because it depends on qualities that supposedly belong to its opposite. By way of illustration: in DeconstructionExample's last paragraph, I sketch how Morningstar's arguments against deconstruction rely on the very properties he objects to.)
- After the groups begin to wind down, have the groups of two coalesce into groups of four. Each half explains their idea to the other. Each half helps the other expand on their idea.
- The whole group reconvenes. Volunteers present ideas they're fond of.
Agre's argument (very condensed - see the book):
Conventional AI research is based on some binary oppositions. One is mind/body. The mind is where the action is; the body is just meat that carries out the instructions of the mind. Another is planning/acting. What's important is planning a path to a solution. This entails building a model of the world in the mind. After the model is manipulated to uncover a solution, the body acts to carry the solution out. The actions are rudimentary; at its most intelligent, the "execution unit" is only allowed to notice that the plan has gone wrong, whereupon the planner kicks back in to correct the situation. So routine reactions to events are marginalized.
Agre inverted the hierarchy. What if most of what happens were routine and reaction? What if planning were only a last resort, used when more efficient reactions fail? What if there were no world model, or at least as rudimentary a world model as could be gotten away with?
Agre also became more aware of how the difficulties of executing plans complicated planning research. The body will not be denied; the complications of the world intrude. But their marginalization forces AI to locate all solutions in the mind. The feel is of the marginalized continually pushing its way into view, only to be swept aside as quickly as possible.
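The contrast between the two architectures can be sketched in code. This is a hypothetical toy of my own, not Agre's system or anything from the book: the two functions and the number-line world where the goal moves mid-task are invented purely for illustration.

```python
# Toy world: an agent at a position on a number line. goals[t] is where the
# goal actually sits at time step t - the world can change while the agent
# moves. (This toy assumes the goal only ever moves forward.)

def plan_then_execute(start, goals):
    """Conventional architecture: snapshot the world into a model, plan the
    complete route, then have the 'body' execute it blindly. The execution
    unit is only smart enough to notice, at the end, that the plan failed,
    whereupon the planner kicks back in."""
    pos, t, replans = start, 0, 0
    while True:
        target = goals[min(t, len(goals) - 1)]   # world-model snapshot
        plan = [1] * (target - pos)              # whole route, computed up front
        for step in plan:                        # blind, rudimentary execution
            pos += step
            t += 1
        if pos == goals[min(t, len(goals) - 1)]:
            return pos, replans                  # position reached, replan count
        replans += 1                             # plan went wrong: plan again

def reactive(start, goals):
    """Inverted architecture: no stored plan, no world model. Each tick,
    look at the current situation and apply a routine reaction."""
    pos, t = start, 0
    while pos != goals[min(t, len(goals) - 1)]:
        pos += 1 if pos < goals[min(t, len(goals) - 1)] else -1
        t += 1
    return pos, t                                # position reached, ticks taken

# The goal sits at 5, then jumps to 8 while the agent is underway.
goals = [5] * 5 + [8] * 20
print(plan_then_execute(0, goals))
print(reactive(0, goals))
```

In the planner, the changing world shows up only as a failure count that forces replanning - the marginalized pushing into view, swept aside as quickly as possible. In the reactive version the change is handled by the same routine reaction as everything else.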
This way of thinking gave Agre's research a new direction.
Of course he could be post-rationalising his own process too. Perhaps he looked at the real world just once (unusual for an academic) and realised that the mind-body division wasn't there. Deconstruction is useless without this "reality check", and is made redundant by it. My niece does it just great by asking "why?" about everything. She has no preconceptions, so doesn't need to invent "deconstruction" to externalise her mistakes.