Posts Tagged ‘Implants & Interfaces’


-Software that Learns by Watching:

Overworked and much in demand, IT support staff can’t be in two places at once. But software designed to watch and learn as they carry out common tasks could soon help, by automatically performing the same jobs across different computers.

The new software system, called KarDo, was developed by researchers at MIT. It can automatically configure an e-mail account, install a virus scanner, or set up access to a virtual private network, says Dina Katabi, an associate professor at MIT.
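The excerpt does not spell out how KarDo represents what it watches. Purely as a guess at the general idea, the sketch below records a technician’s configuration steps as a parameterised trace and replays it on another machine; every name and step in it is invented.

```python
# Conceptual sketch only; KarDo's actual approach is MIT's, and the steps below are invented.
from dataclasses import dataclass

@dataclass
class Step:
    action: str       # e.g. "open", "set", "click"
    target: str       # e.g. "Mail > Accounts"
    value: str = ""   # machine-specific values are left as placeholders

# A trace recorded while a technician configures e-mail on one machine.
recorded = [
    Step("open", "Mail > Accounts"),
    Step("set", "IMAP server", "imap.example.org"),
    Step("set", "Username", "{user}"),   # generalised: filled in per machine
    Step("click", "Save"),
]

def replay(trace, user):
    """Re-run the learned steps on another computer, substituting per-machine values."""
    for step in trace:
        print(step.action, step.target, step.value.format(user=user))

replay(recorded, user="alice")
```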

-Researcher’s Robots Learn From Environment, Not Programming:

Ian Fasel, an assistant research professor, recently received two grants to fund research and design projects toward creating highly intelligent robots.

Humans have trained robots to build vehicles, fly airplanes, automatically test blood pressure in hospital patients and even play table tennis.

But robots have no concept of self, nor do they truly understand what it is they are programmed to do, said Ian Fasel, an assistant research professor of computer science at the University of Arizona.

-What Is I.B.M.’s Watson?

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

-AI That Picks Stocks Better Than the Pros:

A computer science professor uses textual analysis of articles to beat the market.

It’s called the Arizona Financial Text system, or AZFinText, and it works by ingesting large quantities of financial news stories (in initial tests, from Yahoo Finance) along with minute-by-minute stock price data, and then using the former to figure out how to predict the latter. Then it buys, or shorts, every stock it believes will move more than 1% of its current price in the next 20 minutes – and it never holds a stock for longer.
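A rough sketch of that trading rule, for illustration only: the toy pipeline below learns to predict a quote 20 minutes after an article appears and trades only when the predicted move exceeds 1%. The data, the model choice (TF-IDF plus ridge regression) and all names are invented here, not taken from Schumaker and Chen’s system.

```python
# Illustrative sketch only: not AZFinText itself; data, model and names are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy training set: article text and the quote 20 minutes after publication (the target).
articles = ["Acme Corp beats earnings estimates", "Widget Inc announces product recall"]
price_in_20min = [10.30, 24.60]

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(articles, price_in_20min)

def trade_decision(article_text, current_price, threshold=0.01):
    """Buy or short only if the predicted 20-minute move exceeds 1%; close the position after 20 minutes."""
    predicted = model.predict([article_text])[0]
    move = (predicted - current_price) / current_price
    if move > threshold:
        return "buy"
    if move < -threshold:
        return "short"
    return "no trade"

print(trade_decision("Acme Corp beats earnings estimates", 10.00))
```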

The system was developed by Robert P. Schumaker of Iona College in New Rochelle and Hsinchun Chen of the University of Arizona, and was first described in a paper published early this year. Both researchers continue to experiment with and enhance the system – more on that below.

-Surveillance Software Knows What a Camera Sees:

A prototype computer vision system can generate a live text description of what’s happening in a feed from a surveillance camera. Although not yet ready for commercial use, the system demonstrates how software could make it easier to skim or search through video or image collections. It was developed by researchers at the University of California, Los Angeles, in collaboration with ObjectVideo of Reston, VA.

“You can see from the existence of YouTube and all the other growing sources of video around us that being able to search video is a major problem,” says Song-Chun Zhu, lead researcher and professor of statistics and computer science at UCLA.

“Almost all search for images or video is still done using the surrounding text,” he says. Zhu and UCLA colleagues Benjamin Yao and Haifeng Gong developed a new system, called I2T (Image to Text), which is intended to change that.

-Using Neural Networks to Classify Music:

Neural networks built for image recognition are well-suited for “seeing” sound.

New work from students at the University of Hong Kong describes a novel use of neural networks (collections of artificial neurons, or nodes, that can be trained to accomplish a wide variety of tasks) of a kind previously used only in image recognition. The students used a convolutional network to “learn” features, such as tempo and harmony, from a database of songs spanning 10 genres. The result was a set of trained neural networks that could correctly identify the genre of a song, considered a very hard problem in computer science, with greater than 87 percent accuracy. In March the group won an award for best paper at the International Multiconference of Engineers and Computer Scientists.

What made this feat possible was the depth of the students’ convolutional neural network. Conventional “kernel machine” neural networks are, as Yoshua Bengio of the University of Montreal has put it, shallow. These networks have too few layers of nodes–analogous to the layers of neurons in your cerebral cortex–to extract useful amounts of information from complex natural patterns.

In their experiments, the students, led by professor Tom Li, discovered that the optimal number of layers for musical genre recognition was three convolutional (or “thinking”) layers, with the first layer taking in the raw input data and the third layer outputting the genre data.
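For readers who want to see the shape of such a network, here is a minimal three-convolutional-layer classifier written with PyTorch. It is only a sketch: the layer widths, spectrogram size and pooling scheme are arbitrary and are not taken from the students’ paper.

```python
# Toy sketch, not the students' architecture: a three-convolutional-layer network mapping
# a spectrogram of a song clip to one of 10 genre labels. Layer sizes are arbitrary.
import torch
import torch.nn as nn

genre_net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # layer 1: raw spectrogram in
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # layer 2: mid-level features
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),                   # layer 3: genre-level features
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(64, 10),  # scores for 10 genres
)

spectrogram = torch.randn(1, 1, 128, 256)   # fake batch: 128 frequency bins x 256 time frames
genre_scores = genre_net(spectrogram)
print(genre_scores.argmax(dim=1))           # predicted genre index
```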

-Simple way to create nanocircuitry on graphene developed:

Scientists have made a breakthrough toward creating nanocircuitry on graphene, widely regarded as the most promising candidate to replace silicon as the building block of transistors. They have devised a simple and quick one-step process based on thermochemical nanolithography (TCNL) for creating nanowires, tuning the electronic properties of reduced graphene oxide on the nanoscale and thereby allowing it to switch from being an insulating material to a conducting material.
The technique works with multiple forms of graphene and is poised to become an important finding for the development of graphene electronics. The research appears in the June 11, 2010, issue of the journal Science.

-DNA logic gates herald injectable computers:

DNA-based logic gates that could carry out calculations inside the body have been constructed for the first time. The work brings the prospect of injectable biocomputers programmed to target diseases as they arise.

“The biocomputer would sense biomarkers and immediately react by releasing counter-agents for the disease,” says Itamar Willner of the Hebrew University of Jerusalem, Israel, who led the work.

The new logic gates are formed from short strands of DNA and their complementary strands, which in conjunction with some simple molecular machinery mimic their electronic equivalent. Two strands act as the input: each represents a 1 when present or a 0 when absent. The response to their presence or absence represents the output, which can also be a 1 or 0.
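To make the encoding concrete, the toy function below mimics the logic of such a gate in software only; the chemistry is Willner’s, while this AND-gate truth table is just an illustration in which the output bit stands in for the measurable response.

```python
# Conceptual illustration only (not the actual chemistry): inputs are the presence (1) or
# absence (0) of two DNA strands; the gate's measurable response is the output bit.
def dna_and_gate(strand_a_present: bool, strand_b_present: bool) -> int:
    """Output 1 (e.g. a detectable signal) only when both input strands are present."""
    return int(strand_a_present and strand_b_present)

for a in (False, True):
    for b in (False, True):
        print(f"A={int(a)} B={int(b)} -> output {dna_and_gate(a, b)}")
```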

-Part-human, part-machine transistor devised:

Man and machine can now be linked more intimately than ever, according to a new article in the journal ACS Nano Letters. Scientists have embedded a nano-sized transistor inside a cell-like membrane and powered it using the cell’s own fuel.

The research could lead to new types of man-machine interactions where embedded devices could relay information about the inner workings of disease-related proteins inside the cell membrane, and eventually lead to new ways to read, and even influence, brain or nerve cells.

“This device is as close to the seamless marriage of biological and electronic structures as anything else that people did before,” said Aleksandr Noy, a scientist at the University of California, Merced, who is a co-author on the recent ACS Nano Letters paper. “We can take proteins, real biological machines, and make them part of a working microelectronic circuit.”

-Molecular Computations: Single Molecule Can Calculate Thousands of Times Faster Than a PC:

An experimental demonstration of a quantum calculation has shown that a single molecule can perform operations thousands of times faster than any conventional computer.

In a paper published in the May 3 issue of Physical Review Letters, researchers in Japan describe a proof-of-principle calculation they performed with an iodine molecule. The calculation involved the computation of a discrete Fourier transform, a common algorithm that’s particularly handy for analyzing certain types of signals.
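For context, this is what a discrete Fourier transform computes, evaluated here conventionally with NumPy rather than with an iodine molecule; the example signal is arbitrary.

```python
# A conventional (non-molecular) discrete Fourier transform, just to show what is being computed.
import numpy as np

t = np.arange(8)
signal = np.sin(2 * np.pi * t / 8) + 0.5 * np.sin(2 * np.pi * 3 * t / 8)  # two tones, bins 1 and 3
spectrum = np.fft.fft(signal)          # 8-point discrete Fourier transform
print(np.round(np.abs(spectrum), 3))   # magnitude peaks at the two component frequencies
```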

Although the calculation was extraordinarily swift, the methods for handling and manipulating the iodine molecule are complex and challenging. In addition, it’s not entirely clear how such computational components would have to be connected to make something resembling a conventional PC.

-Army of smartphone chips could emulate the human brain:

IF YOU have a smartphone, you probably have a slice of Steve Furber’s brain in your pocket. By the time you read this, his 1-billion-neuron silicon brain will be in production at a microchip plant in Taiwan.

Computer engineers have long wanted to copy the compact power of biological brains. But the best mimics so far have been impractical, being simulations running on supercomputers.

Furber, a computer scientist at the University of Manchester, UK, says that if we want to use computers with even a fraction of a brain’s flexibility, we need to start with affordable, practical, low-power components.

“We’re using bog-standard, off-the-shelf processors of fairly modest performance,” he says.

Furber won’t come close to copying every property of real neurons, says Henry Markram, head of the Blue Brain project, an attempt to simulate a brain with unsurpassed accuracy on an IBM Blue Gene supercomputer …

-Nanotechnology’s road to artificial brains:

“In a mammalian brain the computing units, neurons, are connected to each other through programmable junctions called synapses,” Wei Lu, an assistant professor in the Department of Electrical Engineering and Computer Science, explains to Nanowerk. “The synaptic weight modulates how signals are transmitted between neurons and can in turn be precisely adjusted by the ionic flow through the synapse. A memristor by definition is a resistive device with inherent memory. It is in fact very similar to a synapse – they are both two-terminal devices whose conductance can be modulated by external stimuli with the ability to store (memorize) the new information.” Reporting their findings in a recent issue of Nano Letters (“Nanoscale Memristor Device as Synapse in Neuromorphic Systems”), Lu and his group fabricated a nanoscale silicon-based memristor to mimic a synapse.
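As a purely numerical illustration of that analogy (and not Lu’s device model), the toy code below treats a synaptic weight as a bounded conductance that drifts with each applied voltage pulse and persists between pulses.

```python
# Toy numerical sketch (not the paper's device physics): a memristive "synapse" whose
# conductance changes with each voltage pulse and persists afterwards, i.e. it remembers.
g_min, g_max = 0.1, 1.0      # conductance bounds (arbitrary units)
g = 0.5                      # current synaptic weight / conductance

def apply_pulse(g, voltage, rate=0.05):
    """Potentiate on positive pulses, depress on negative ones; clip to the physical range."""
    g = g + rate * voltage
    return max(g_min, min(g_max, g))

for v in (+1, +1, +1, -1, 0, +1):   # a short train of stimuli; 0 = no pulse, state persists
    g = apply_pulse(g, v)
    print(f"pulse {v:+d} -> conductance {g:.2f}")
```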

-Computers Learn to Listen, and Some Talk Back:

“Our young children and grandchildren will think it is completely natural to talk to machines that look at them and understand them,” said Eric Horvitz, a computer scientist at Microsoft’s research laboratory who led the medical avatar project, one of several intended to show how people and computers may communicate before long.

For decades, computer scientists have been pursuing artificial intelligence — the use of computers to simulate human thinking. But in recent years, rapid progress has been made in machines that can listen, speak, see, reason and learn, in their way. The prospect, according to scientists and economists, is not only that artificial intelligence will transform the way humans and machines communicate and collaborate, but will also eliminate millions of jobs, create many others and change the nature of work and daily routines.

The artificial intelligence technology that has moved furthest into the mainstream is computer understanding of what humans are saying. People increasingly talk to their cellphones to find things, instead of typing. Both Google’s and Microsoft’s search services now respond to voice commands. More drivers are asking their cars to do things like find directions or play music.

-Seeing is understanding: using artificial intelligence to analyse multimedia content:

The media produce a glut of material daily. Refining that ore into the gold of useful information requires new approaches. European researchers have now made automated multimedia analysis much smarter.

Picture a few seconds of coverage from a sporting event, say the Wimbledon finals. Your television might show a snippet of action plus the players’ names, scores, and other text scrolling across the screen, while the audio feed might feature expert commentary.

Multiply that multimedia feed by every sporting event being broadcast anywhere in the world. Then toss in all the other activities covered by the media – news, politics, pop culture, not to mention YouTube and other social media. And finally, imagine trying to make sense of this torrent of information so that it can be categorised, labelled, indexed, searched and retrieved as needed.

That’s the challenge that the EU-funded research project BOEMIE (for Bootstrapping Ontology Evolution with Multimedia Information Extraction) accepted in 2006. Its researchers have now shown that by using state-of-the-art artificial intelligence (AI) techniques to build and then refine highly structured knowledge bases, they can automatically or semi-automatically identify, analyse and index almost any multimedia content.

BOEMIE’s smart toolkit has significant commercial and research potential in any kind of multimedia annotation and retrieval. “Without semantic indexing, it’s very difficult to retrieve multimedia content,” says George Paliouras, BOEMIE’s technical manager. “BOEMIE offers a new approach to do this at a large scale and with high precision.”

-DARPA, Northrop taking chip speed into terahertz range:

The military could soon be taking a giant leap forward with its communications networks.

Northrop Grumman, under a contract with the Defense Advanced Research Projects Agency’s Terahertz Electronics program, has developed a new Terahertz Monolithic Integrated Circuit that more than doubles the frequency of the previously reported fastest integrated circuit.

Speaking at the recent Institute of Electrical and Electronics Engineers’ (IEEE) International Microwave Symposium in Anaheim, Calif., William Deal, THz Electronics program manager for Northrop Grumman’s Aerospace Systems sector, said that “a variety of applications exist at these frequencies. These devices could double the bandwidth, or information carrying capacity, for future military communications networks. TMIC amplifiers will enable more sensitive radar and produce sensors with highly improved resolution.”

Deal said the TMIC amplifier, developed at the company’s Simon Ramo Microelectronics Center, is the first of its kind operating at 0.67 THz, or 670 billion cycles per second.

The circuit was developed under the auspices of DARPA’s Terahertz Electronics program, whose goal is to develop device and integration technologies for electronic circuits operating at frequencies exceeding 1.0 THz. Managed by DARPA’s Microsystems Technology Office, the program focuses on terahertz high-power amplifier modules and terahertz transistor electronics.

“The success of the THz Electronics program will lead to revolutionary applications such as THz imaging systems, sub-mm-wave ultra-wideband ultra-high-capacity communication links, and sub-mm-wave single-chip widely tunable synthesizers for explosive detection spectroscopy,” said John Albrecht, THz Electronics program manager for DARPA.

Read Write Web:

When explaining the concept of augmented reality to someone who has never heard of it, I find myself going through a series of common real-life and pop-culture examples to help them understand. Aside from explaining that the “1st and Ten Line” in football games and the computer vision of the Terminator are indeed forms of augmented reality, I often use examples from the military – the fighter pilot’s heads-up display, for example – as well. In fact, the military has played a significant role in the early development of AR, and one company is attempting to make sure it is a large factor in the future of the technology as well.

A Chicago-based company called Tanagram Partners is currently developing military-grade augmented reality technology that – if developed to the full potential of its prototypes – would completely change the face of military combat as we know it. Tanagram CEO Joseph Juhnke presented the technology last week at the Augmented Reality Event in Santa Clara, California, and wowed the audience with his presentation.

[Image: tgram1_jun10.jpg]

Illustrations from Juhnke’s presentation tell the company’s story of how its technology could give American troops the upper-hand in hostile situations. First of all, the company is developing a system of lightweight sensors and displays that collect and provide data from and to each individual soldier in the field. This includes a computer, a 360-degree camera, UV and infrared sensors, stereoscopic cameras and OLED translucent display goggles.

With this technology – all housed within the helmet – soldiers will be able to communicate with a massive “home base” server that collects and renders 3D information onto the wearer’s goggles in real time. With the company’s “painting” technology, various objects and people will be outlined in a specific color to warn soldiers of things like friendly forces, potential danger spots, impending air-raid locations, rendezvous points and much more.

[Image: tan2_jun10.jpg]

In the above image, a spotter on a roof paints an area near his squad-mates in a red color, marking the area as a danger spot. The ability to virtually communicate the location of hostile forces to fellow soldiers is an invaluable technology to troops fighting in unfamiliar urban environments. The local fighters have a home field advantage because they are fighting in their back yards, in a way. Tanagram hopes to level the playing field – and then some – in an effort to help troops better understand their surroundings.

All of this technology can also be monitored from a central base location by military leaders. They can gather around a virtual map of the battlefield with live location data for their troops. Best of all, the system has a memory for the information put into it – which means soldiers new to an area that has been fought in before will have the benefit of knowing where previous danger spots were.

[Image: tan3_jun10.jpg]

As futuristic and far-fetched as this seems, Tanagram is actually in the process of building this technology right now. The company is funded by a grant from DARPA (Defense Advanced Research Projects Agency), and plans on having a working proof-of-concept that runs on an iPhone by the first quarter of next year. Tanagram also hopes to have the server and client system operational as early as Q2 2011 as well as an open-source head-mounted display (HMD) client by the end of next year.

DANGER ROOM: