The modern performance company Pilobolus and MIT’s Distributed Robotics Laboratory teamed up at PopTech 2012, along with several hundred volunteers, for a collaborative art exercise. Guided by a camera mounted on a towering crane, the volunteers moved around holding umbrellas fitted with LED lights, spontaneously creating dramatic, colorful formations in a darkened outdoor amphitheater. The results are stunning.
“Although Bush set up the legal argument for autopen bill signing, he never used the device to enact legislation. Obama was the first to do so, signing an extension of the PATRIOT Act via autopen while in Europe. (Kind of fitting that a robot re-signed into law an act that represents the tenuous nature of technology, privacy, and the role of government.) Some lawmakers objected to the move, but no serious legal challenge to auto-signing bills has ever surfaced.”
That makes us think of…the human touch of a robot’s hand.
Patrick Tresset, a painter who lost his passion for art after undergoing treatment for a mental health issue, sought to recapture his creativity by creating a robot that could draw in his style.
“When we draw, the difficulty is not in making the lines,” Tresset explains. “The difficulty is in the perception of the subject and the perception of the drawing in progress.” But sometimes, it may help to make it seem that the robot has difficulty making the lines—Tresset has found that people feel more empathy for the machines when they make humanlike mistakes such as crooked or tilted lines. (He calls this “clumsy robotics.”) Humans are inclined to identify with robots, especially those with faces: Give a person a bot, and he or she will probably name it. But why is that connection important in robots that draw? Tresset believes that if the person being sketched feels something for the machine wielding the pen, he or she will find the 30-minute sketching process “more touching.” Plus, if the sitter assigns a personality to the robot, it might alter the human’s emotional response to the final product.
The robot creates an interesting feedback loop: mechanically induced faults and artificial humanity evoke empathy in the subject, and that genuine emotion is in turn captured by the robot in the sketch.
Patrick Tresset (PopTech 2011) and Frederic Fol Leymarie (PopTech 2011) direct the Aikon-II project, which uses computational modeling and robotics to replicate the sketching performed by a human hand.
How do we decide whether to trust somebody?
An unusual new study of college students’ interactions with a robot has shed light on why we intuitively trust some people and distrust others. While many people assume that behaviors like avoiding eye contact and fidgeting are signals that a person is being dishonest, scientists have found that no single gesture or expression consistently predicts trustworthiness.
But researchers from Northeastern University, the Massachusetts Institute of Technology and Cornell recently identified four distinct behaviors that, together, appear to warn our brains that a person can’t be trusted.
The findings, to be published this month in the journal Psychological Science, may help explain why we are sometimes quick to like or dislike a person we have just met. More important, the research could one day be used to develop computer programs that can rapidly assess behavior in airports or elsewhere to flag security risks.
Image: Stuart Bradford
Watch now: Eben Upton, founder of the Raspberry Pi Foundation, shows how he is hooking a new generation of kids on computer programming. “I remember sitting down with my wife for dinner…and we had this sudden, appalling realization that we had promised 600,000 people that we would build them a $25 computer.”
What does this mean, apart from awesome? It means you can get a free iPhone app to follow these (up to 6m+) babies around.
Sharks in your pocket.
Way better than Polly Pocket.
For decades, academic and industry researchers have been working on control algorithms for autonomous helicopters — robotic helicopters that pilot themselves, rather than requiring remote human guidance. Dozens of research teams have competed in a series of autonomous-helicopter challenges posed by the Association for Unmanned Vehicle Systems International (AUVSI); progress has been so rapid that the last two challenges have involved indoor navigation without the use of GPS.
But MIT’s Robust Robotics Group — which fielded the team that won the last AUVSI contest — has set itself an even tougher challenge: developing autonomous-control algorithms for the indoor flight of GPS-denied airplanes. At the 2011 International Conference on Robotics and Automation (ICRA), a team of researchers from the group described an algorithm for calculating a plane’s trajectory; in 2012, at the same conference, they presented an algorithm for determining its “state” — its location, physical orientation, velocity and acceleration. Now, the MIT researchers have completed a series of flight tests in which an autonomous robotic plane running their state-estimation algorithm successfully threaded its way among pillars in the parking garage under MIT’s Stata Center.
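That “state” — position, orientation, velocity, acceleration — is the kind of quantity engineers typically track with a recursive estimator such as a Kalman filter. The MIT team’s actual algorithm isn’t detailed here, so what follows is purely an illustrative sketch: a minimal one-dimensional constant-velocity Kalman filter in plain Python, where the motion model, noise values, and the simulated moving target are all assumptions for the example, not the group’s method.

```python
import random

def kalman_step(x, P, z, dt=0.1, q=0.01, r=0.25):
    """One predict/update cycle of a 1-D constant-velocity Kalman filter.

    x -- [position, velocity] estimate
    P -- 2x2 covariance matrix as nested lists
    z -- noisy position measurement
    q, r -- process / measurement noise (illustrative values)
    """
    # Predict: propagate state and covariance through the
    # constant-velocity motion model F = [[1, dt], [0, 1]]
    p, v = x[0] + dt * x[1], x[1]
    p00 = P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q
    p01 = P[0][1] + dt * P[1][1]
    p10 = P[1][0] + dt * P[1][1]
    p11 = P[1][1] + q

    # Update: blend the prediction with the position measurement (H = [1, 0])
    s = p00 + r                   # innovation covariance
    k0, k1 = p00 / s, p10 / s     # Kalman gain
    y = z - p                     # innovation (measurement residual)
    x = [p + k0 * y, v + k1 * y]
    P = [[(1 - k0) * p00, (1 - k0) * p01],
         [p10 - k1 * p00, p11 - k1 * p01]]
    return x, P

# Simulated demo: track a target moving at 1 m/s from noisy readings
random.seed(1)
x, P = [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]
for step in range(1, 201):
    true_pos = step * 0.1 * 1.0           # dt = 0.1 s, velocity = 1 m/s
    z = true_pos + random.gauss(0, 0.5)   # noisy position sensor
    x, P = kalman_step(x, P, z)
```

After 200 noisy measurements, the estimated velocity `x[1]` settles near the true 1 m/s even though only position is ever measured — the same basic idea, scaled up to full 3-D pose and fused with onboard sensors, underlies state estimation on GPS-denied vehicles.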