History Fragments: Don’t Ever Believe What You See

Written on July 16, 2007 | History Fragments

Whether adding people or objects to a photo, or filling holes in an edited photo, the systems automatically find images that match the context of the original photo so they blend realistically. Unlike traditional photo editing, these results can be achieved rapidly by users with minimal skills.

“We are able to leverage the huge amounts of visual information available on the Internet to find images that make the best fit,” said Alexei A. Efros, assistant professor of computer science and robotics. “It’s not applicable for all photo editing, such as when an image of a specific object or person is added to a photo. But it’s good enough in many cases,” he added. “Why Photoshop if you can ‘photoswap’ instead?”

Carnegie Mellon researchers use Web images to add realism to edited photos, 10th July 2007, Eurekanet (via Technovelgy)
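For the technically curious, the retrieve-then-composite idea is simple enough to sketch. The fragment below is my own toy illustration, not the researchers’ code: it stands in a crude colour-layout descriptor and a hard paste for the richer scene descriptors, web-scale photo collections, and seamless blending the real system presumably relies on, and it assumes all images are same-sized RGB arrays.

```python
import numpy as np

def scene_descriptor(img, grid=8):
    """Very coarse scene descriptor: mean RGB colour over a grid x grid tiling."""
    h, w, _ = img.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            tile = img[i * h // grid:(i + 1) * h // grid,
                       j * w // grid:(j + 1) * w // grid]
            desc.append(tile.reshape(-1, 3).mean(axis=0))
    return np.concatenate(desc)

def fill_hole(query, mask, candidates):
    """Fill the masked region of `query` (mask True = missing) with pixels from
    the candidate image whose overall scene layout best matches the query."""
    q = scene_descriptor(query)
    distances = [np.linalg.norm(q - scene_descriptor(c)) for c in candidates]
    best = candidates[int(np.argmin(distances))]
    result = query.copy()
    result[mask] = best[mask]  # a real system would blend along a seam, not hard-paste
    return result
```

The real work searches a far larger collection of photographs and blends the chosen match in along an unobtrusive seam, but the shape of the problem, retrieve a context-matching image and composite it into the hole, is the same.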


History Fragments: The World’s First Remote Air Squadron

Written on July 16, 2007 | History Fragments

AI-guided military aircraft and vehicles have muddied the waters even further when it comes to responsibility for civilian deaths in war. Whereas before the fear was that a shift towards “video game” warfare would result in a detachment from killing, now the fear is that automated AI killing machines show no remorse on the battlefield, no human judgement, no crisis of conscience, and no ability to disobey orders that are clearly against accepted rules of warfare.

Who, then, accepts responsibility when the deaths of one hundred civilians are blamed on a glitch in the machine, or on incorrect parameters? Is there a way to even tell the difference between an error and an AI that is doing exactly what it was designed to do?

From an Amnesty International Report, 2018, The Automation of War.

At five tons gross weight, the Reaper is four times heavier than the Predator … It can fly twice as fast and twice as high as the Predator. Most significantly, it carries many more weapons. […] “It’s not a recon squadron,” Col. Joe Guasella, operations chief for the Central Command’s air component, said of the Reapers. “It’s an attack squadron, with a lot more kinetic ability.”

“Kinetic”, Pentagon argot for destructive power, is what the Air Force had in mind when it christened its newest robot plane with a name associated with death.

“The name Reaper captures the lethal nature of this new weapon system,” Gen. T. Michael Moseley, Air Force chief of staff, said in announcing the name last September. […] The Reaper is expected to be flown as the Predator is: by a two-member team of pilot and sensor operator who work at computer control stations and video screens that display what the UAV “sees.” Teams at Balad, housed in a hangar beside the runways, perform the takeoffs and landings, and similar teams at Nevada’s Creech Air Force Base, linked to the aircraft via satellite, take over for the long hours of overflying the Iraqi landscape.

Robot Air Attack Squadron Bound for Iraq, 15th July 2007, Associated Press

History Fragments: Brain Scans, Law, and Crime

Written on July 13, 2007 | History Fragments

Pauline Newman: But one of the more chilling uses of brain scans could be outside the courts and even before people have committed a crime. When it comes to protecting vulnerable populations like children from possible perpetrators, can we ever dare take chances? Emily Murphy.

Emily Murphy: I think that research in that area, if it continues to be pushed forward, might be one particular area where that’s really taken up in the public. Anybody who works with children, even in a volunteer capacity in hospital, is screened for any sort of strange behaviour, but particularly criminal backgrounds. And the screening is very intense; it’s not necessarily a big step then to add a scan in.

Pauline Newman: But a scan would be predictive and someone might not have ever done anything wrong in their lives.

Emily Murphy: That’s entirely true and that’s going to be a very difficult issue to deal with.

Pauline Newman: But then the possibility arises of maybe changing people’s brains if we know they have tendencies.

Emily Murphy: Absolutely, there are technologies and drugs out there that are already able to do that. The forced use of those technologies is something which we haven’t yet encountered to my knowledge but is definitely on the agenda.

Pauline Newman: Horrifying as that scenario sounds, could the tools and techniques of neuroscience prevent crimes before they occur? Could it make our streets safer, and even empty the jails? Gary Marchant.

Gary Marchant: That’s one thing these kinds of technologies could lead to; maybe a happier outcome is basically to change our criminal system from being a legal and a punishment system to more of a medical system. If we can either treat people who have committed crimes or, even better, anticipate people who have these problems and treat them ahead of time to avoid these terrible tragedies – for primarily the victims but also the people who commit them – that would be a really positive result. But of course it raises all kinds of difficult issues of privacy, of free will, and free choice if we’re going to start intervening before people commit crimes. And that raises a lot of sort of science-fiction scenarios that may not be that far away.

Mind Reading: Neuroscience in the Witness Stand, All In The Mind, 23rd June 2007

History Fragments: Quote of the Day

Written on July 3, 2007 | History Fragments

“So with the robot, you give it an instruction like: ‘Clear the building – anybody pointing a weapon at you should be killed’. Robots are infinitely brave. They have no hesitation in killing and feel no remorse. And the great thing is you don’t have to send condolence letters to their families if you put them in harm’s way.”

— John Pike, director of GlobalSecurity.org, from Robot Cop: Coming To A City Near You Soon, The Guardian, 30th June 2007

History Fragments: Swarm Theory

Written on July 3, 2007 | History Fragments

Here’s a great look at one of the major inspirations behind today’s iBrain network. It’s sometimes hard to imagine that it was the study of bees and other natural swarms and hives that led to things like today’s craze of “swarming”, whereby groups of people interlink each other’s minds in order to process vast amounts of data. Of course, the military have been exploring this concept for some time, running tightly coordinated, self-contained military units using the same process, but it should come as no surprise that nature already knew best.

The bees’ rules for decision-making (seek a diversity of options, encourage a free competition among ideas, and use an effective mechanism to narrow choices) so impressed Seeley that he now uses them at Cornell as chairman of his department.

“I’ve applied what I’ve learned from the bees to run faculty meetings,” he says. To avoid going into a meeting with his mind made up, hearing only what he wants to hear, and pressuring people to conform, Seeley asks his group to identify all the possibilities, kick their ideas around for a while, then vote by secret ballot. “It’s exactly what the swarm bees do, which gives a group time to let the best ideas emerge and win. People are usually quite amenable to that.”

From Swarm Behaviour, National Geographic Magazine, July 2007