Category Archives: Physical Computing

Final Project BOM and Schedule

[Image: bill of materials screenshot]

So now the work really begins.

I have finally settled on a design for my synthesizer (and I still have to come up with a cool name, though “clusterf#@kaphone” springs immediately to mind). It will be an electric guitar body (already in disrepair, I feel it’s bad mojo to injure a working instrument), driving five walkmen playing looped cassettes, outputting to five sets of speakers. The default loops will be recordings of the notes of the bottom five open guitar strings – E, A, D, G and B. The tone will be modulated by the player touching a linear potentiometer mounted to the neck, one for each “string”/walkman, and the on/off/volume will be manipulated with a force-sensitive button (again, one for each “string”, hopefully harvested from an old touch-tone telephone) in the right hand position (where someone playing a real guitar would pick or strum). Additionally, if it is at all feasible, I will add a toggle switch for each “string” that allows the player to loop his or her own input – i.e. to record tone and volume instructions that can then be looped while other “strings” are being actively played. I feel like this should be easy enough to accomplish, given that while the output is purely analog (tape in walkman), all the parameters (tape speed/tone, volume) are being processed in the Arduino.

[Image: instrument diagram]
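Just to convince myself the control flow is sane, here is a rough sketch of what one “string” channel might look like on the Arduino. Everything in it is provisional – the pin numbers, the PWM-through-a-transistor motor drive, and the idea of handling volume elsewhere are assumptions at this stage, not decisions:

// Hypothetical sketch for ONE "string" channel of the cassette synth.
// Assumes: a soft pot on A0 for tone, an FSR "button" on A1 for on/off,
// and a transistor-driven walkman motor on PWM pin 9. Pins are placeholders.

const int tonePot = A0;     // linear (soft) pot on the neck
const int pressPad = A1;    // force-sensitive "string" button
const int motorPin = 9;     // PWM out -> transistor -> walkman motor

void setup() {
  pinMode(motorPin, OUTPUT);
  Serial.begin(9600);
}

void loop() {
  int toneVal  = analogRead(tonePot);    // 0-1023
  int pressVal = analogRead(pressPad);   // 0-1023

  if (pressVal > 50) {                   // finger is on the "string"
    // map finger position to a usable motor speed range (tape speed = pitch)
    int motorSpeed = map(toneVal, 0, 1023, 80, 255);
    analogWrite(motorPin, motorSpeed);
    // volume would be handled separately in the audio path, scaled from
    // pressVal -- omitted here because that part isn't designed yet
  } else {
    analogWrite(motorPin, 0);            // no touch, no sound
  }

  Serial.print(toneVal);
  Serial.print(" ");
  Serial.println(pressVal);
  delay(10);
}

If that holds up, the record-and-loop toggle should mostly be a matter of time-stamping and replaying those two readings, since, as noted above, all the parameters live in the Arduino even though the sound itself stays on tape.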

As far as I can tell, the workflow will be something like this:

  1. Obtain parts
  2. Conduct final tests of controls and outputs (especially regarding motor control of walkmen and durability of tape loops)
  3. Remove frets, bridge and pickups from guitar and begin to mount sensors; determine where wires go
  4. Perform surgery on walkmen and make tape loops; design and build cabinet for walkmen and speakers
  5. Write code
  6. Assemble instrument
  7. Paint and finish instrument

This coming week should yield steps 1-3; the following week, steps 4 and 5; and steps 6 and 7 the week after that.

I can think of many, many things that could go wrong, most notably that I might have to shape a new neck if there’s not enough room on the real one, or (much worse) that I might not be able to make the motors in the walkmen behave civilly enough to output actual tones you would actually want to hear. But time and work will tell, and I remain optimistic.

Mostly, I’m hoping to invite some of the very talented musicians I’m friends with to make music with this thing when it’s ready. By using cassette loops, there’s a degree of versatility that lets the player set five sounds he or she wants to work with, which don’t need to be the notes of a standard tuning, or even notes at all. I can imagine drums, found sound, voices talking, voices singing…

But first, there’s work to do.

Midterm Project Proposal

For my “stupid pet trick,” I would like to build further into my last mini-project, my not-quite meditation device. I am really intrigued by the idea of user-supplied inputs that the user may or may not understand, but over which he or she has some degree of actual and ultimate control.

As we discussed in class, breathing is maybe the best example of this, since it can be either conscious or unconscious. The most data-rich and least intrusive way I can think to measure this is by means of a magnetometer and a strong magnet, worn on the body in such a way that breathing moves them in relation to each other.
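I haven’t chosen a sensor yet, so the following is only a sketch of the logic, with a placeholder readFieldMagnitude() standing in for whatever read function the eventual magnetometer library provides; the idea is simply to smooth the reading and watch it rise and fall with the breath:

// Sketch of the breathing-measurement idea. readFieldMagnitude() is a
// placeholder for an actual magnetometer read (sensor and library not chosen yet).

float readFieldMagnitude() {
  // Placeholder: would return the field strength from the magnetometer,
  // which should rise and fall as the body-worn magnet moves with the breath.
  return analogRead(A0);  // stand-in so the sketch compiles
}

float smoothed = 0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  float raw = readFieldMagnitude();
  smoothed = 0.9 * smoothed + 0.1 * raw;   // simple low-pass to tame jitter

  Serial.println(smoothed);  // the slow rise and fall is the "breath" signal
  delay(50);
}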

Other possible factors to measure might be the user’s position in relation to the device, perhaps some intentional act (the squeeze ball, for example) or…

In playing around with sensors this week, trying to figure out what reads what (and how), and what if any “off-label” uses each might have, I noticed some interesting interference when I held my piezo element in hand – a very rapid cycle of noise that went away when I put it down. I thought for a moment it might be my heartbeat, accidentally being captured, but it was way too fast. I then thought, perhaps it’s not the piezo that’s reading this but the wires themselves – I remembered my best friend’s TV when we were teenagers, which exploded in snow if you stood in a certain place, but calmed down when you put your hand on it. I unplugged the piezo, held the alligator clips in my hand, and the noise was still there – I placed them on the table, same distance apart, and nothing. So I’m assuming that’s my electrical field it’s reading, which would definitely be worth playing with further, though I’m not sure how individuated it is, and very unsure if it can be controlled.

But the idea would then be to take two or three of these factors (breath, field, position, for instance) and feed them into a program that creates an animation based on these, developing over time. The goal would be to somehow express something about the data themselves – not chaos but some (even very abstract) statement. Things like more agitated inputs warming the color palette or multiplying points of movement spring to mind, but I think most of the pleasure will be in experimenting to see what feels right.

And then a second, and much more interesting, goal would be to create it for two people – a nonverbal interaction that takes contributions from each to create a shared or partially-shared experience. But that is probably beyond the scope of the pet trick.

Not Quite a Meditation Device

So I’m stressed. Really really stressed. Schoolwork, work-work and all the noise of life.

After our Synthesis session on Friday, I began trying to devise my next little project for Physical Computing, and I realized that what I really want is a machine that will help me calm down. Remembering that meditation devices were among Tom Igoe’s greatest physical computing hits, I set about imagining what meditation device I would make for myself. I didn’t want it to measure something invasive like breath or sweat; I wanted it to be something a person could interact with voluntarily that would soothe through the interaction.

I thought about the stress relief balls that used to be on everyone’s office desk, and started thinking about using one to control a visual program. My thought was that I might squeeze a ball (which intrinsically feels good and relieves some stress) to drive a calming visualization.

But what aspect of the squeeze and what aspect of the visualization? It occurred to me that one of the things that helps me to calm down is deliberate, repetitive motion. I decided that the program would have the user set a rhythm (or rather, a time interval), then place a cue for the smoothest, most regular interpretation of that directly in the middle of the user’s field of vision. The goal would be to squeeze smoothly and make the cue disappear by matching one’s own actions to the computer’s ideal.

(And let me say here that I should have talked this out to myself – any place where there are words like “goal” and “ideal” is a terrible place to begin thinking about mindfulness… but I digress.)

I considered using a Hall Effect Sensor on one side of the ball and a magnet on the other to measure the “squeezedness” of the ball, but Tinkersphere was out of analog Hall sensors, and I realized an FSR taped to the surface might do the trick just fine, with maybe even more flexibility in terms of ball choice. I then went to the 99-cent store and bought a selection of squeezy balls. After a market survey of one potential user (my girlfriend), I decided that the “Guardians of the Galaxy” ball was best for size, firmness and overall feel.

I mounted the FSR to the outside with tape, wrote a program for the Arduino to gather pressure data, then wrote a program in P5 to pull that data and analyze the pressure of the squeeze and the period of fluctuation. It then calculated the average time between peaks, and set that as its “ideal” rhythm.
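The actual analysis lives in the P5 sketch, but the core of it – finding pressure peaks and averaging the time between them – amounts to something like this (written here in Arduino-style C for consistency; the threshold numbers are made up):

// Rough equivalent of the P5 analysis: find pressure peaks and average the
// time between them to set the "ideal" rhythm. Threshold values are arbitrary.

const int PEAK_THRESHOLD = 600;   // reading that counts as "a squeeze"
unsigned long lastPeak = 0;
unsigned long intervalSum = 0;
int peakCount = 0;
bool inPeak = false;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int pressure = analogRead(A0);

  if (pressure > PEAK_THRESHOLD && !inPeak) {      // rising edge of a squeeze
    inPeak = true;
    unsigned long now = millis();
    if (lastPeak > 0) {
      intervalSum += now - lastPeak;
      peakCount++;
      // the "ideal" rhythm is just the average gap between squeezes
      unsigned long idealPeriod = intervalSum / peakCount;
      Serial.println(idealPeriod);
    }
    lastPeak = now;
  }
  if (pressure < PEAK_THRESHOLD - 50) {            // falling edge, with hysteresis
    inPeak = false;
  }
  delay(5);
}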

The program works with the user by coloring the background to match the squeeze – white for no pressure, black for max pressure. It then projects the “ideal” pressure at any given moment in a sort of sphere-like object in the center of the screen.

And I tried using it and realized that I had created the exact opposite effect from what I had intended. Far from it being easy to set my own rhythm to the computer’s, it was intensely difficult – I had made for myself a demanding and unsatisfiable taskmaster.

But at the same time, it was an oddly compelling interaction. I discarded it after the first evening as digital fascism (or something very like it) and then returned to try it again the next evening, just for the hell of it. Miraculously it was much easier, and still much more engaging than two shades of gray on a computer screen ought to be.

The problem, though, is the falsehood of the central premise, namely that there’s something to be gained by syncing one’s own motions to a computer’s suggestions. The thoughts I have (and they are many) center around two lines of inquiry. First, it would be much more interesting to use the same program to have two users sync their motions to each other, remotely – a weird wordless way for two friends or separated lovers to connect, or two strangers to meet.

The other is that it’s a great input – definable but essentially unpredictable – in that the user has full control but not necessarily much understanding of that control. It would be fascinating to drive an animation based on parameters (pressure, period, regularity) supplied somewhat thoughtlessly by one (or, better, two) users, where nothing was clarified (i.e. “by doing thing X to the ball, thing Y will happen on screen”) except through iteration.

Thoughts, thoughts… more to say coming soon!

Observation: Subway Kiosks

[Image: subway information kiosk]

The MTA’s subway information system (the “On the Go Travel Station”) tests the negative limit of what might be described as interactive. I hadn’t actually tried to use one before doing this observation, but in the interest of science, I walked up to one and touched its enormous and rather lovely screen.

The available options are very few, especially when compared with how the machine looks. It’s new and nicely built, with good graphics; you expect it to be able to tell you anything you might want to know about the subway system, perhaps the contents of the MTA’s website (full train schedules, bus info, upcoming planned service interruptions, etc.) retooled for the kiosk interface. But no. The kiosks (at Union Square, at least) give you precisely three possibilities: a map that shows you the quickest train route to your destination (either another subway station or one of a preset menu of points of interest):

[Image: kiosk route map screen]

a list of current service interruptions affecting the lines running through this particular station (and, as far as I could tell, only those lines):

[Image: kiosk service change screen]

and a guide to elevators and escalators in the station. Additionally, there’s a pretty good local street map that opens if you tap the “i” icon over Union Square on the route map, but nothing tells you that will happen, and nothing points you toward it if you’re looking for it.

[Image: kiosk street map screen]

That’s very limited functionality for what these machines must have cost, and it showed on the faces of most of the people I saw using one. They would walk up, start pressing buttons, stand there certain they must be missing something, and then walk away.

When the machine is resting normally (i.e. nobody is using it) it displays advertising, and a helpful fourth function, a list of arrival times for the trains in the station. I believe it takes schedule data (the header of the list says “scheduled arrival times”) rather than real-time information, but that makes sense since relatively few lines have that capability so far.

Unfortunately there’s no way to actually call that screen up if you want to see it. Normally that’s not a problem, since it displays in rotation with the ads, but when there’s a service interruption, the resting rotation is replaced by a single service advisory screen:

[Image: kiosk service advisory screen]

with the consequence that now there’s no way to see the scheduled trains for the station. And the four functions become three once again…

But it was very interesting to watch actual humans try to interact with the machine. I noticed that a few people pulled up information they wanted and then pulled out their phones to take pictures. Mostly, though, it was just people scrolling through again and again, looking for something that wasn’t there.

I’m not sure if this limitation is just the first phase and they have plans and capabilities to add more features as the program rolls out, but the kiosks have been up for nearly two years now. It’s mystifying that they give less information than the platform bulletin boards do, but they are pretty to look at.

 

Small-Game Hunting in the Technological Junkyard

Or, I fought the machine, and the machine (has, temporarily) won.

In casting about for a good project using the Arduino’s analog outputs, I started letting my mind wander to one of its favorite pseudo-Luddite dwelling places, the thought that there are now generations upon generations of technology that we have abandoned as obsolete, but that have some unique traits that might be worth resurrecting, or, less pompously, that might be fun to play with. I decided the thing I wanted to do was to find an old machine, open it up, tweak it with the Arduino and make new life in the old shell.

And almost immediately the idea of the cassette deck became lodged in my head – an iconic piece of cultural technology that iterated through dictation, to “home taping is killing music,” to the Walkman, then to nothingness, replaced at every stage by more efficient digital methods.

I started thinking what I might do with a cassette deck. My mind wandered back to the awful racket of our Arduino sound experiment, with square waves of pure tone, and I started to think that one of the nice things about cassettes was that though they were noisy (hiss, flutter, wow) and not always very faithful, it was a smelly, dirty, human sort of noise, not the coldly rational death-blare of unmodified PWM. And I thought, let me make an analog synthesizer out of a cassette deck.

I looked around for an old Walkman, but after visiting several thrift stores, junk shops and antique stores, I couldn’t for the life of me find one. And then I walked into a 99 cent store, and sitting up at the top of the display behind the cash register was a Coby microcassette recorder in its original, now very yellowed, blister pack. I bought it (to the store owner’s tremendous surprise) and took it home.

My plan was fairly simple – I would record a simple tone using an online tone generator and the onboard microphone, and then modify the pitch by changing the tape speed, using the Arduino as power supply at as non-invasive a stage as would work.

And then I ran into my first snag. I popped in some batteries, put in the microcassette, hit play and record, and spoke into the microphone – “testing, testing.” And when I played it back, I got garbled nonsense.

I checked the batteries – 1.61 volts each. And then I plugged in a universal AC adapter, and got the same results as before – inconsistent speed, occasionally stopping completely, then going again.

But at this point it was too late to get another tape player through eBay, and besides, I had even less hesitation now about busting the thing open to get at its guts. I unscrewed the five visible screws, used my wire cutters to shear away the pieces of the plastic housing that were fixed to the board, and pulled out the circuitry and mechanism.

[Images: microcassette recorder mechanism and circuit board]

I wondered if there was perhaps something funky between the batteries and the motor making the motor run oddly, so I found the motor’s leads and put 3V across them (the lowest voltage my power supply could deliver) – it sounded very smooth. I turned it over, and realized that though the motor was turning fine, the spindle was not, even though I had the “play” button (or what was left of it) engaged. I looked closer, and saw that the tiny drive belt running from the motor to the spindles was moving chaotically, and that seemed to be the problem. If I had to guess, I would say that whatever processes of heat and moisture, up by the ceiling of the 99-cent store, had yellowed the packaging had also degraded the plastic in the drive belt and/or the spindle’s gears to the point that it no longer worked. Or perhaps it was just a lemon to begin with.

But I also noticed that “rewind” worked fine. And by a lovely engineering quirk of the microcassette recorder genre, it was theoretically still possible to play back in rewind mode. I decided to go ahead with the project and just give it a shot.

My plan then was to disconnect the motor from the rest of the circuit board and farm out control of that to the analog outputs of the Arduino. At the same time, I would power the rest of the circuitry (sound output) using the digital outputs of the Arduino. I would change the tone by modifying the speed of the motor (first experimenting to see what output number corresponded with what pitch) and allow rhythmic elements in by turning the circuitry on and off. Of course this posited that I would be able to run it well enough to record some tone onto the tape, but that seemed a secondary concern.
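The Arduino side of that plan is tiny – something like the sketch below, where the PWM value sets the tape speed (and so the pitch) and a digital pin switches the playback circuitry on and off for rhythm. The pin numbers and speed values are placeholders I would have had to calibrate by ear:

// Sketch of the planned control scheme (never fully tested, since the drive
// mechanism failed). Pin numbers and speed values are placeholders.

const int motorPin = 9;      // PWM -> microcassette motor
const int circuitPin = 7;    // digital on/off for the playback circuitry

// a few motor speeds that would have been matched to pitches by ear
int speeds[] = {120, 150, 180, 210};

void setup() {
  pinMode(motorPin, OUTPUT);
  pinMode(circuitPin, OUTPUT);
}

void loop() {
  for (int i = 0; i < 4; i++) {
    digitalWrite(circuitPin, HIGH);      // "note on": power the playback circuit
    analogWrite(motorPin, speeds[i]);    // tape speed sets the pitch
    delay(400);

    digitalWrite(circuitPin, LOW);       // "note off": silence between notes
    analogWrite(motorPin, 0);
    delay(100);
  }
}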

I got out my soldering iron, disconnected the motor, and soldered some non-tiny wires onto the ends of its leads. I then measured the resistance across the motor with a multimeter and soldered in a fixed resistor of roughly the same value to stand in for the load the motor would have presented to the circuit. I soldered the speaker wires back to the internal speaker (they had come loose in the great breaking-apart) and then ran 3V across the battery leads. A faint hum came through the speaker. I engaged the record head. The on-board LED came on!

Then I put the microcassette into its place, took my voltage and put it across the motor. The motor turned. I pressed “play.” Nothing happened. The drive mechanism had completely failed.

So I have essentially nothing to show for this project, though I think I am going to go ahead and order a couple of Walkmen off eBay, as they’re pretty inexpensive at this point, and I still think the idea is worth exploring. My hope is to begin by making a simple monotone synthesizer from one machine, and then try to gang together several to make a small “orchestra”. And of course, if I can get it to work correctly, the possibilities for both input (sensors, knobs, human interactions in general, as opposed to lines of code) and output (speech, ambient noise, instrument sounds as opposed to pure tone) are pretty exciting.

So watch this space!

Test Your Self-Awareness!

My project is a machine that tests not your strength per se, but your strength compared to how strong you think you are, hence the accuracy of your self-image.

It consists of three parts – an input device that you grip with your hand:

[Photo: grip input device]

A dial (potentiometer) to set the sensitivity of the machine:

[Photo: sensitivity dial]

And a display to show you how well you did:

[Photo: LED display]

I began work by just plugging in the flex sensor (in series with a 10-kilohm fixed resistor) and bending it as far as I could to see the range of values it returned (having written a program to send those values to the serial monitor). I then taped it to the grip, and made sure that I really could translate the action of the grip into a reliable-ish reading from the flex sensor. Blessedly, it worked. The only issue was a little bit of noise – the numbers would fluctuate a few points seemingly without much of a physical change to the sensor.
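That first test program was nothing more exotic than reading the voltage divider and printing the result, roughly:

// First test: read the flex sensor (in series with a 10k resistor) and
// print the raw values, to see what range the grip actually produces.

const int flexSensor = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int flex = analogRead(flexSensor);
  Serial.println(flex);
  delay(50);
}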

I then tried to write a program that would smooth out the noise, recognize an actual squeeze to the device, and hold the most extreme value for a short while. I realized that I had to measure the current reading against the last (a moment ago) reading, then set a timer when numbers started going back up (more than the few points of the noise, i.e. the squeeze was over), then reset all values for the next attempt.

I set a range of values going from what I considered a modest squeeze up to the strongest I could muster (interestingly, the weak squeeze was still more than halfway from no-squeeze to total compression). I then assigned a different-colored LED to each band of values, along with the RGB LED that came with the Arduino kit to light up the “title” of the display.

[Photo: LED display in progress]

Next I added the potentiometer to the mix. My first inclination was to simply multiply the flex sensor input by a fixed fraction plus some minute fraction of the reading from the potentiometer (e.g. potValue/5120, to make something between 0 and 0.2), but when I tried it, the results were very, very strange. I opened up the serial monitor, looked at all my numbers, and quickly realized that the figures relying on the pot were not moving according to any discernible logic. I surmised (and this is still just a guess) that I was getting outside the edges of the Arduino’s math abilities, and that it didn’t like trying to shoehorn values around 1000 into a range of less than 0.2. I changed the numbers to multiply the sensor input by 10 and add the pot value more or less straight, and that worked like a charm.
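One plausible culprit (consistent with the guess above, though I haven’t verified it) is integer division: on the Arduino, dividing an int by an int truncates, so an expression like potValue/5120 comes out as 0 and the fraction never survives unless the math is done in floats. For example:

// Illustration of integer vs. float division on the Arduino.
void setup() {
  Serial.begin(9600);
  int potValue = 1000;
  Serial.println(potValue / 5120);     // prints 0 -- int / int truncates
  Serial.println(potValue / 5120.0);   // prints 0.20 -- float math keeps the fraction
}

void loop() {}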

I rummaged through the cardboard and paper recycling for housing that might fit my components, and was very happy to find the box from a bar of soap for the dial (not Dial soap, sadly) and an LU cookie box for the display.

[Photo: cardboard housings]

And lastly I created the skins for the dial and the readout in Photoshop, printed them out, and taped them in the appropriate places.

And then I tested my self-awareness!

Code:

// Pin assignments
const int flexSensor = A0;   // flex sensor (in series with a 10k resistor) on the grip
const int potInput = A5;     // sensitivity dial
const int blueRGB = 3;       // RGB "title" LED
const int greenRGB = 5;
const int redRGB = 6;
const int blueLED = 8;       // the five result LEDs on the display
const int greenLED = 9;
const int whiteLED = 10;
const int yellowLED = 12;
const int redLED = 13;

int flex = 0;       // current flex-sensor reading
int a = 1000;       // lowest reading seen this attempt (squeezing drives the reading down)
int b = 1000;       // tracked reading; freezes once the squeeze is judged over
int c = 1000;       // resting baseline used to place the threshold bands
int base = 1;       // threshold bands for the five LEDs
int r = 1;
int y = 1;
int w = 1;
int g = 1;
int bl = 1;

int timer = 1;      // counts loop passes after the squeeze ends, to hold the result
int smoother = 0;   // counts readings that have climbed back above the minimum
int potVal = 0;     // sensitivity dial reading
int meas = 0;       // unused

void setup() {
  Serial.begin(9600);
  pinMode(redRGB, OUTPUT);
  pinMode(blueRGB, OUTPUT);
  pinMode(greenRGB, OUTPUT);
  pinMode(blueLED, OUTPUT);
  pinMode(greenLED, OUTPUT);
  pinMode(whiteLED, OUTPUT);
  pinMode(yellowLED, OUTPUT);
  pinMode(redLED, OUTPUT);
}

void loop() {
  flex = analogRead(flexSensor);
  delay(1);
  potVal = analogRead(potInput);
  delay(1);

  // Pulse the RGB "title" LED. (millis()/1000 is integer seconds, and sin()
  // returns -1..1, so these values step once a second and can fall outside 0-255.)
  analogWrite(redRGB, (sin(millis() / 1000) * 255));
  analogWrite(blueRGB, (sin(millis() / 1000 + 2) * 255));
  analogWrite(greenRGB, (sin(millis() / 1000 - 2) * 255));

  // During the first 100 ms, seed the baseline from the readings so far.
  if (millis() <= 100) {
    c = a;
  }

  // Threshold bands for the five LEDs, set from the baseline and the sensitivity dial.
  base = c * 10 + 50 + potVal;
  r = c * 10 + 40 + potVal * .8;
  y = c * 10 + 30 + potVal * .6;
  w = c * 10 + 20 + potVal * .4;
  g = c * 10 + 10 + potVal * .2;
  bl = c * 10;

  // Follow the live reading until the squeeze is judged to be over.
  if (smoother < 10) {
    b = flex;
  }

  // Hold on to the lowest (hardest-squeeze) reading seen so far.
  if (b < a) {
    a = b;
  }

  // Once readings climb more than 20 points back above that minimum
  // (i.e. beyond the noise), count toward "squeeze over".
  if (b > (a + 20)) {
    smoother = smoother + 1;
  }

  // Light the LED whose band the peak reading falls into.
  if (a * 20 > r && a * 20 <= base) {
    digitalWrite(redLED, HIGH);
  } else {
    digitalWrite(redLED, LOW);
  }
  delay(1);

  if (a * 20 > y && a * 20 <= r) {
    digitalWrite(yellowLED, HIGH);
  } else {
    digitalWrite(yellowLED, LOW);
  }
  delay(1);

  if (a * 20 > w && a * 20 <= y) {
    digitalWrite(whiteLED, HIGH);
  } else {
    digitalWrite(whiteLED, LOW);
  }
  delay(1);

  if (a * 20 > g && a * 20 <= w) {
    digitalWrite(greenLED, HIGH);
  } else {
    digitalWrite(greenLED, LOW);
  }
  delay(1);

  if (a * 20 < g) {
    digitalWrite(blueLED, HIGH);
  } else {
    digitalWrite(blueLED, LOW);
  }
  delay(1);

  // Hold the result for a while after the squeeze ends...
  if (smoother >= 10) {
    timer = timer + 1;
  }

  // ...then reset everything for the next attempt, taking the current
  // reading as the new baseline.
  if (timer >= 300) {
    a = 1000;
    timer = 1;
    smoother = 0;
    b = flex;
    c = b;
  }

  // Debug output.
  Serial.print(smoother);
  Serial.print(" ");
  Serial.print(potVal);
  Serial.print(" ");
  Serial.print(base);
  Serial.print(" ");
  Serial.print(r);
  Serial.print(" ");
  Serial.print(y);
  Serial.print(" ");
  Serial.print(w);
  Serial.print(" ");
  Serial.print(g);
  Serial.print(" ");
  Serial.print(bl);
  Serial.print(" ");
  Serial.print(c);
  Serial.print(" ");
  Serial.print(b);
  Serial.print(" ");
  Serial.println(a);
}

Magic Box

Errr… not really quite so magic, but what can you do? In the immortal words of Bob Dylan, there’s no success like failure.

My first impulse for this project was to use magnets to activate other magnets. For some reason, I have long been fascinated by magnetism, and the possibilities of relationships between permanent magnets and electromagnets. I suppose it has something to do with powerful invisible forces, and the possibility of setting up relationships that are reliable and predictable but unforeseeable to an audience.

It struck me that stage magic was a good context for playing around with this. I decided to build a very crude reed switch, activated by a “magic wand,” and use it to turn on an electromagnet that would then cause something to jump up, and sit back down when the switch was interrupted. But I found that the strongest electromagnets I could build with the resources at hand (after a couple of nights of trying and a nastily burnt power supply) were woefully inadequate to the task of making anything move in an impressive way, so I abandoned that idea.

[Photos: magic wand and its internals]

My next thought was to use fans to blow confetti around. Not quite the nice rhyme of using a magnet to set off a magnet, but still something that would look like a fun cheesy magic trick – stillness, then action, then stillness again. I also decided to enhance the effect by turning on “stage lights” along with the fans. So I picked up a couple of small 12V fans and a new power supply, and grabbed some rainbow mylar foil to cut into confetti.

I wired everything in a big cardboard box using this schematic:

[Image: magic box schematic]

[Photo: wiring inside the box]

using bright white LEDs I had lying around from earlier tinkerings. After a little trial and error and sparring with another very dodgy power supply, I got everything working!

[Photo: magnetic switch]

Unfortunately for some reason it hadn’t occurred to me that my tiny fans might not be very powerful. So the lights turned on, but the little handful of confetti I put in to test it went nowhere. It was the dullest magic trick in human history.

Panicked, I searched around for anything else I might use to do something (anything!!), and found a small 6V DC motor. I wired it through a 5V voltage regulator (since I had already wired the LEDs to run off 12V, I needed to stick with that as the overall voltage of the circuit), built a “propeller” from camera tape folded laterally in half, mounted it on the bottom of an old paper coffee cup, and made a new and much more powerful fan.

But still it didn’t really do very much moving of the air in the box, and I realized my confetti idea wouldn’t fly, literally or figuratively. I did still have my rainbow mylar though, so I cut a strip of it, mounted it to the “fan,” pinned it to the back wall of the box, and used that as the “act.”

I will say that the switch itself worked beautifully. The failure I experienced was really mostly a failure of imagination – that I found it hard to abandon the ideas that weren’t working and come up with others that would be meaningful and yet achievable with what I had at hand. At the end of the day, I would have been better off taking a step back and looking at the possibilities, rather than chasing one thought to the exclusion of all others.

 

Thoughts on Interactivity and Unknown Knowns

As I’ve gone through my first week at ITP, I keep returning to Slavoj Zizek’s critique of the philosophy of Donald Rumsfeld. Rumsfeld states that, in the lead-up to the war, we were faced with known knowns (things we know we know), known unknowns (things we know we don’t know) and unknown unknowns (things we don’t even know that we don’t know), where presumably the real danger lies. Zizek points out that he misses the logical fourth state, which would be “unknown knowns.”

In Zizek’s reading, the “unknown knowns” are suppositions, tendencies, reactions and practices that are not acknowledged or scrutinized (hence “unknown”) but which are integrally part of our operations (“known” in the sense of fixed/assured and not subject to change). It creates a picture of an actor who presumes he is working from a defined set of parameters to create an ostensibly predictable outcome, while he is in fact working with some other parameters which he neither acknowledges nor understands, but which nevertheless affect the outcome and render it unpredictable, in Rumsfeld’s case catastrophically so.

I feel like this dovetails quite handily with discussions of interactivity. As a complete amateur, I see building interactivity as the process of determining how a person enters parameters to affect the outcome of an event, which is then in turn edifying, entertaining or in some sense meaningful to the person. Our job as designers is both to come up with the skeleton of an interaction (desire -> process -> result) and figure out how to allow the participant to “create” that interaction for him- or herself.

Code on a screen is terribly appealing because, in its stripped-down form, it ostensibly allows the programmer total control over what goes into an event, the processes that take place within it, and the output/result. By limiting operations to strictly defined and digitally enforced mathematical operations, it presumes to get rid of both the unknown unknowns and unknown knowns (though of course, that’s an illusion too, as anyone who has ever used buggy software can attest).

But when we begin to deal with devices that will accept input from the physical world, we encounter these other problems, a giant host of unknown unknowns and unknown knowns on every level, which are paradoxically the most difficult and the most exciting aspect of interactivity. From the environment (ambient light, temperature, background noise/vibration) to the psychology of the user (what does he expect from this machine? what does she want that tool to do? is that a natural motion for someone who wants that outcome?), devices that draw on physical cues need to be intuitive and flexible to process both things that the machine is not expecting and things that the user doesn’t realize are happening.

And they can also play with this background of unconscious operation to illuminate things that have never been shown before. Ten years ago, how many people knew how many steps they took in a day, or what their resting heart rate was, compared with how many know now?

But at the same time, how does a human use this information? Ten years ago, I never had to worry over how many steps I took in a day – what does it mean to me? What should it mean to me? I was always taking steps and my heart was always beating at some non-zero rate – what good does it do me to know the figures?

As we progress into a world of intensive data surrounding every aspect of our lives, we will begin to need mental/spiritual cleaning systems that order and organize the cacophony and help us to forget things. In some sense, it’s a shift from addressing diseases of privation to diseases of plenty.

Turning to the readings, I was struck by how different the world is now from the world Crawford describes in his article – amazing to see how through-the-looking-glass we’ve gone in only a dozen years. He speaks dismissively of the unthinkable case of “Nintendo Refrigerators” and yet we now have refrigerators with screens in them, refrigerators that can become freezers at the touch of a button, ovens that refrigerate your food all day until they start baking it in the afternoon…

I am intrigued by his model of a device acting as a participant, which listens, thinks and returns a new stimulus to the other participant, as the definition of interaction. It’s a good one, though in contrasting it with a non-interactive book or painting, Crawford locates the “thinking” in the program/process/device itself (which can listen and respond in the moment) rather than in the human creator or creators of the program (who can’t), which is sort of a dangerous assumption. I suppose on a deeper level of interactivity (and one that we’re coming ever-closer to) the “thinking” can itself be defined by the user/participant, though that risks becoming something like a funhouse mirror.

But I guess I’ve also been thinking about a technological interaction as exclusively being between a single human participant and a nonhuman device or series of devices, which is flawed. There are of course interactions with multiple human participants, in which the device is there to modulate the contact between them. And this is obviously the lion’s share of the interactivity we see today.

The Rant is brilliant, so brilliant that I don’t have much to say about it. I think that “things your hands know” (as opposed to “things your eyes know”) defines a huge and meaningful subset of the “unknown knowns,” and probably the ones that are the most fun and rewarding to dig out and play with. Of course, the challenge it posits is how to create physical, hand-knowable inputs that are as mutable as points of light on a screen, but that’s the challenge we need to take up. And maybe mutability isn’t the be-all and end-all of design. Maybe there is a place for old-fashioned permanence in the world of Things.