Posts Tagged ‘Artificial Intelligence’

Ignorance Is Futile:

The genius behind the Google machine is that when we use the Internet we help make its A.I. smarter and more powerful, and meanwhile the growing trend is to make the entire Internet “semantic”. By merely using the Internet, we’re all in effect helping to build ‘Skynet’, so something must be done to at least slow it down. So a couple of years ago I had the idea of software meant to make it dumber. Bots against bot.

The Google Machine is the ultimate in crowdsourcing. For example, we all helped Google’s AI learn how to understand spoken language. But it goes beyond just using its search engine; if you use the Internet at all, there is almost no escaping this. On the one hand it “indexes” any and every web page it can find, allowing it to analyze things we’ve written. On the other hand, most sites out there have either Google Search or Google Ads integrated right into the page, allowing Google to track our web surfing habits even if we don’t start our travels at Google.com.

So last year I proposed software scripts designed to dumb it down, the more automatically the better. I just ran this idea by my new collaborators, TransAlchemy, and it turned out that member “SeH” not only liked the idea, he already had a working prototype program that uses “Markov Chain” algorithms to garble up text.

Not only does it randomize text, it actually solves the page indexing problem. When I first had the idea I hadn’t thought it through far enough to figure out how to hassle the page indexing; I only had the concept of a bot that spits random nonsensical search strings into the search box. His program takes manuscripts, rearranges the order of the sentences, and blends sentences together.
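To give a flavor of the technique, here’s a minimal sketch of the general Markov Chain idea (my own illustration, not SeH’s actual code): build a table of which words follow which, then walk that table at random.

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word prefix to every word seen following it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def garble(chain, order=2, length=80):
    """Random-walk the chain to emit plausible-looking nonsense."""
    out = list(random.choice(list(chain.keys())))
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:                       # dead end: jump somewhere new
            out.extend(random.choice(list(chain.keys())))
            continue
        out.append(random.choice(followers))
    return " ".join(out)

if __name__ == "__main__":
    # "manuscript.txt" is a hypothetical input file; feed it several
    # manuscripts at once and the walk wanders between them, which is
    # where the sentence-blending effect comes from.
    text = open("manuscript.txt").read()
    print(garble(build_chain(text)))
```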

The output even contains paragraphs, and on top of everything it’s actually fun to read. Here is an example of the first 2 paragraphs of an entry from his Transperiments blog:

High Mentality that it May Be Impaired by Dawn

as the olive grove . he handled and separated parts of a small , and self – – not ; a moment on the slightest hope of his experiments with robbie nodded significantly ; consequently he was experimenting on the nitre – or the manager of the house ? he’s been hungry . it rang , because they had come from my ancestors had with full and calculating appraisal at times he hoped at the sagging floors , robert , and i had entered the neighbouring town had an air – tree , we lacked at the wrong with disastrous results , unknown malady . but , but i shiver . despite the thing quietly , for a student of dr .
gloria was rarely home – daemon – – for a long centuries had much of my friend believed it wasn’t quite overshadowed by the hillside below the body , the early acquired a dark colour . the cellar laboratory , i saw a shrill rhythm . “where have been able to overcome impatience . so carelessly sceptical , “damn it .

robbie was strangely bent every sort of brain and his lips and buried by west and comfortable . the moss – – black form the men had occurred , yet never get rid of the servants were widely ridiculed by the memory of mind , i call death . a supremely great work when the way to me , and whose ramifications and tom – stained blocks of a debauch . eloi , in a greater age , by one night the midst of his terrible groping . gloria ? ” west’s closest neighbour , who wore a fashionable thing . the unfathomable abyss of restraint – the others had occurred , to feed and disgust ; led by the oldest burying – down his selection of the st . nor indeed noticed the night of their estates , unnatural expedients in skill with good , virtues , though kalos and their reception by sheer force of yore , whom we waited until the body , and in preparation .

Imagine all of the varying web pages, books and so on that you could morph into oddity hybrid entertainment: religious texts, news articles and more. Consider the different ‘specialty’ forms of English used every day, such as slang, patent documents, and “legalese” (legislative / legal documents). Then you have old manuscripts written in odd tongues and dialects. To really screw with Google you could even mix different languages. Google can translate just about any language, but the mixture would confuse it all the same.

The overall idea is to give people a fun tool to play with, in the hope that they’ll post their varying results online. With his program, in the context of what I’m trying to do with Google, the hard part is already done. The easy part should be making a smaller automaton applet that uses your web browser to do phony web searches. This should be a program that runs in the background and uses very little CPU, one you can even run while you’re away from your computer, if you leave it on all the time anyway.

The Tools:
1. A downloadable tool that users can feed different sources of manuscript into.
2. A web page with the tool built in, allowing you to select your own sources or choose from a provided list.
3. An app that runs in the background and constantly feeds random search strings into Google.
4. A “Chaos” button extension built into your web browser, allowing you to auto-generate gibberish for posting into comments on news sites and the like. Interactive chaos fun.

My current concept has the bot use an Internet Explorer browser that opens two tabs. One tab is set to Google News, the other to the plain Google search page. It takes the available words from the news page and rearranges 13 random words into one nonsensical search string. It runs the search in the plain search box, saving it from having to reopen the news page and thus using fewer resources. All of this is done persistently and automatically. Or at least that’s one way to do it; there could be infinite ways to go about it.
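Here’s a bare-bones sketch of that bot in script form, assuming plain HTTP requests instead of driving Internet Explorer (Google may well throttle or ignore a bot like this; the URLs and regexes are purely illustrative):

```python
import random
import re
import time
import urllib.parse
import urllib.request

NEWS_URL = "https://news.google.com/"

def fetch_words(url):
    """Download a page and crudely strip it down to bare words."""
    html = urllib.request.urlopen(url).read().decode("utf-8", "ignore")
    text = re.sub(r"<[^>]+>", " ", html)        # drop the markup
    return re.findall(r"[A-Za-z]{3,}", text)    # keep word-ish tokens

def nonsense_query(words, n=13):
    """Rearrange n random words into one nonsensical search string."""
    return " ".join(random.sample(words, n))

words = fetch_words(NEWS_URL)                   # fetch once; no need to reopen
while True:
    query = nonsense_query(words)
    search = "https://www.google.com/search?q=" + urllib.parse.quote(query)
    urllib.request.urlopen(search)              # fire off the phony search
    print("searched:", query)
    time.sleep(random.uniform(30, 120))         # low and slow, little CPU
```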

OPEN SOURCE:

These sorts of ideas inherently scream open source, and SeH already has his code available as open source. The more people that help out with this the better, and software programmers can join SeH’s Git repository to collaborate on the work and ideas of this effort.

The software that inspired him is also open source. There’s a website called DadaDodo that is a functional first-generation version of this “cut up” chaos concept. The history behind it is most interesting:

DadaDodo
Exterminate All Rational Thought

William S. Burroughs called this “cut up theory.” His approach was to take a page of text, divide it into quadrants, rearrange the quadrants, and then read the page across the divisions. He wrote this way; writing, cutting up, shuffling, publishing the result. Collage and randomness applied to words. He saw this as a way of escaping from a prison that words create for us, locking us down into one way of thinking: an idea echoed in Orwell’s “1984,” where the purpose of NewSpeak was to make ThoughtCrime impossible by making it inexpressible: “The Revolution will be complete when the language is perfect.”

Ted Nelson, the inventor of hypertext, published “Computer Lib” in 1973. This book was more a stream-of-consciousness collage than anything else, nominally about nonlinear texts, and effectively an example of the same. It was written as hundreds of individual typewritten rants, and then pasted together for printing. Ironically, it was printed with a third of the pages out of order, allegedly due to a mix-up with the printer: one wonders, however, whether that really mattered.

The site is functional, but the key limitation is that you can’t specify what text sources it uses. Another drawback in terms of page-index chaos is that the pages it generates aren’t being indexed, unless someone were to copy and paste the output somewhere else.

SeH’s software overcomes that key limitation, and strives for more complex semantic structure. This initial release is really at the alpha stage, but it works, as seen above. SeH wants to make the utility more functional and easier to use, then integrate it into a web page where people can paste in links or other forms of text, or even pick from a big list of prime sources for semantic fun, right at the user’s fingertips. Then the next step is to make smaller automaton bots for dizzying the search engines.

Stay tuned for updates and contact us if you’re able to help build on these tools and techniques.

Wall Street Journal:

Wall Street is notorious for not learning from its mistakes. Maybe machines can do better.

That is the hope of an increasing number of investors who are turning to the science of artificial intelligence to make investment decisions.

With artificial intelligence, programmers don’t just set up computers to make decisions in response to certain inputs. They attempt to enable the systems to learn from decisions, and adapt. Most investors trying the approach are using “machine learning,” a branch of artificial intelligence in which a computer program analyzes huge chunks of data and makes predictions about the future. It is used by tech companies such as Google Inc. to match Web searches with results, and Netflix Inc. to predict which movies users are likely to rent.

One upstart in the AI race on Wall Street is Rebellion Research, a tiny New York hedge fund with about $7 million in capital that has been using a machine-learning program it developed to invest in stocks. Run by a small team of twentysomething math and computer whizzes, Rebellion has a solid track record, topping the Standard & Poor’s 500-stock index by an average of 10% a year, after fees, since its 2007 launch through June, according to people familiar with the fund. Like many hedge funds, its goal is to beat the broader market year after year.

“It’s pretty clear that human beings aren’t improving,” said Spencer Greenberg, 27 years old and the brains behind Rebellion’s AI system. “But computers and algorithms are only getting faster and more robust.”

Some sophisticated hedge funds such as Renaissance Technologies LLC, based in East Setauket, N.Y., are said to have deployed AI to invest. But for years, these firms were the exception. Some firms that have dabbled in AI are skeptical it is anywhere close to working.

Rebellion is part of a new wave of firms using machine learning to trade. Cerebellum Capital, a San Francisco hedge fund with $10 million in assets, started using machine learning to invest in 2009. A number of high-frequency trading firms, such as RGM Advisors LLC in Austin, Texas, and Getco LLC in Chicago, are using machine learning to help their computer systems trade in and out of stocks efficiently, according to people familiar with the firms.

The programs are effective, advocates say, because they can crunch huge amounts of data in short periods, “learn” what works, and adjust their strategies on the fly. In contrast, the typical quantitative approach may employ a single strategy or even a combination of strategies at once, but may not move between them or modify them based on what the program determines works best.
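To make that contrast concrete, here’s a toy sketch of the adaptive idea, using the classic “exponential weights” online-learning update on made-up strategies and made-up market data (my illustration only, bearing no resemblance to any fund’s actual method):

```python
import math
import random

# Two toy strategies: bet that yesterday's move continues, or that it reverses.
def momentum(prev):  return 1 if prev > 0 else -1
def reversion(prev): return -1 if prev > 0 else 1

strategies = {"momentum": momentum, "reversion": reversion}
weights = {name: 1.0 for name in strategies}    # "learned" confidence in each
eta = 0.1                                       # learning rate

prev = 0.01
for day in range(1000):
    drift = 0.003 if prev > 0 else -0.003       # synthetic market with momentum
    r = random.gauss(drift, 0.01)               # stand-in for the day's return
    for name, strat in strategies.items():
        pnl = strat(prev) * r                   # would this strategy have profited?
        weights[name] *= math.exp(eta * pnl / 0.01)   # reward what worked
    total = sum(weights.values())
    weights = {n: w / total for n, w in weights.items()}
    prev = r

print("learned weights:", weights)              # capital shifts toward the winner
```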

“No human could do this,” said Michael Kearns, a computer-science professor at the University of Pennsylvania who has used AI to invest at firms such as Lehman Brothers Holdings Inc. “Your head would blow off.”

Rebellion has struggled to raise money, in part because investors since the credit crisis are dubious of opaque math-based strategies.

Wealth accumulation among the richest North Americans (excluding Mexico) grew in 2009, with millionaires in the U.S. and Canada enjoying a 15% increase in their total worth. Collectively, these millionaires possessed $4.6 trillion, according to a report from the Boston Consulting Group.

Ignorance Is Futile:

Just one of DARPA’s many AI programs cost taxpayers $150M. Now basically considered a success, it produced “Siri,” which builder SRI has now sold to Apple for up to $250M. In effect, we’ve all paid $1 to help build a Skynet prototype. Since I don’t agree with this type of research, for many reasons, I want my tax refund.

Siri is the product of DARPA’s “PAL” program, which stands for Personal Assistant that Learns, or Perceptive Assistant that Learns, or Cognitive Assistant that Learns (CALO), or even Reflective Agents with Distributed Adaptive Reasoning (RADAR), depending on which DARPA person you talk to. Here’s DARPA’s “PAL” promo video:

They’ve in effect completed the program:

The software, which learns by interacting with and being advised by its users, will handle a broad range of interrelated decision-making tasks that have in the past been resistant to automation. A CALO will have the capability to engage in and lead routine tasks, and to assist when the unexpected happens. To focus the research on real problems and ensure the software meets requirements such as privacy, security, and trust, the CALO project researchers themselves are using the technology during its development.

Venture Beat just published a detailed article on the transaction. Along with telling people “you want the virtual assistant to be as smart as someone talking to you on the phone and doing Google searches to answer your questions”, they also mentioned some important details:

Curt Carlson, the president and chief executive of SRI International, is very excited about the Siri intelligent agent technology that his organization sold to Apple in April for a rumored $150 – $250 million. He expects it will show up in future iPhones, but he also believes that SRI’s additional technologies from virtual assistant research will become part of even more rich applications in the future.

SRI was spun out of Stanford University more than 60 years ago to commercialize research. First named Stanford Research Institute and later renamed SRI International, it became independent from Stanford University in 1970 and now has more than 2,200 researchers working on things such as the cool artificial intelligence, speech recognition, natural language processing, location information, and agent technology that is part of Siri.

Siri, spun out of SRI 18 months ago, is a virtual personal assistant technology. Its first application launched in February as a free iPhone app that lets you perform tasks such as making dinner reservations by speaking into your phone and letting a virtual assistant do the rest.

In 2003, SRI got a $150 million grant to start CALO — the Cognitive Assistant that Learns and Organizes — to develop virtual assistant technology over five years. The effort included 25 world-class partners and incubated ideas from government and commercial researchers.


One of the projects that the military is interested in is the Command Post of the Future. It is an assistant for generals in a command post that helps make decisions about how to fight wars or skirmishes.


Commercialization of ideas started four or five years ago under the lead of Norman Winarsky, vice president of ventures, licensing and strategic programs at SRI. In that process, SRI takes its ideas and finds an entrepreneur in residence to make the idea into a startup. If they can figure out a big market opportunity and a compelling business model, they launch it. That’s what happened with Siri. Upon being spun out, Siri got another $24 million from Menlo Ventures and Morgenthaler Ventures.

But Siri isn’t the end of SRI’s research. It is just one part of it. Another startup using the SRI technology is Chattertrap, a Menlo Park, Calif.-based startup that will apply the personal assistant concept to online content. The idea is to create a personal information service by figuring out what kind of news you like and delivering a personalized newspaper to you.

In 2007 WIRED Danger Room interviewed Tony Tether, then DARPA chief, and he covered both PAL and CPOF:

NS: Do you know of anything that Darpa’s working on right now that’s really game changing?

TT: Yes — our cognitive program. The cognitive program’s whole purpose in life is really to increase the tooth-to-tail ratio [military-speak for the number of combat troops to the number of support troops].

… Our cognitive program’s whole aim is to have a computer “learn you,” as opposed to you having to learn the computer. We’ve got the technology to the point where we can now apply it in Iraq to a system that we also developed called CPOF, Command Post of the Future. It is a distributed command and control system.

NS: I mean, I don’t have to tell you that people have been promising cognitive computers [since..]

TT: Well, a lot of time has passed. … Since the ’90s to now, our ability to create algorithms that can reason — can more abstractly reason — about a problem and come up with answers, and also remember what they did using Bayesian techniques and changing values, has really advanced. I mean, it tremendously advanced in the past — from the ’90s to, say, the early 2000s. At the same time, computers became more powerful. We’re on the verge of having computers with densities approaching a monkey’s brain, and it won’t be long before we’ll have a computer with the density of transistors, or equivalent to neurons and almost human. What we’re missing is the architecture. So it seemed like it was time. We had great advances in algorithms for reasoning and in algorithms that learned in general. At the same time, the computers, the actual intrinsic hardware, was really approaching the density of a human brain. And so it seemed like it was time to try again. We’ve had some great success. This cognitive program I told you about is actually showing that it is learning, and it is learning in a very difficult environment. This is the program Stanford Research runs for us.

NS: Which program is this?

TT: It’s PAL [Perceptive Assistant that Learns]. And we have other related programs. One major research issue has to do with learning. If you and I learn something, like baseball, and then we go play another sport, say golf, we somehow transfer that – we are able to transfer some of what we learned in baseball to golf. That’s what makes humans very resilient and flexible. We have some research programs trying to come up with the same technique – that if you had something tackle a problem and then gave it another problem, it would do better on that second problem than if it had not had the previous experience.

PAL & CPOF are but 2 AI and Strong AI (AGI) programs DARPA is working on. In 2006, they had two offices that dealt with AI type research. IPTO dealt with “cognitive” programs, while iXo delt with “exploitative” surveillance type programs. The iXo section of the DARPA website had an animated interactive feature, for a time. I just so happened to video capture the entire thing. About 9 months later I noticed they had removed the interactive section, and immediately began work on a new video with that interface as the framework. From there I used all quotes from DARPA and related military documents, and all animations and graphics from the same sources along with the defense contractors doing the work.

I titled it “DARPA’s iXo AI Control Grid: The Official Version”, as it technically is official, being sourced entirely from their own words and displays.

See the film here, if you haven’t already.

What sucks is that about a year later I noticed they had merged the iXo office into the IPTO office, which kind of ‘debunked’ my title, as iXo was no longer a part of the website.

So at that point the IPTO office had all of their AI programs under one banner.

Today AI programs have spilled out of the IPTO office, and are now found across most of their other 5 offices. Consider that Obama made Zachary Lemnios the DoD Director of Defense Research & Engineering, which makes him the boss of all DoD science R&D. Zack is a major AI pioneer, and former Deputy Director of DARPA’s IPTO office.

AI, including Strong AI, is quite literally the overarching agenda in Obama’s Pentagon, and the $150 million spent on the Siri project is only the tip of the budget iceberg. I’m working on a massive “AGI Manhattan Project” post that details every facet, and with that I intend to finish a film project I started several years ago.

‘Skynet’ is an easy-to-use popular culture reference. The thing is, what they’re trying to build makes Skynet look like a played-out 8-bit Sega game.

Ignorance Is Futile:

It wasn’t until I saw my employer pull out a new Android phone recently that I realized the next level in privacy matters: business trade secrets.

You’d be hard pressed to find a much stauncher critic of cute & cuddly Google than yours truly, yet somehow I hadn’t pondered this realm. Of course I started with ‘No, now Google can track your every move, and listen in on your daily routine’, and of course in response I got “if I’m not a terrorist then what do I care”. Never mind that we all have only certain things that we’ll say in front of certain people; now we’re supposed to embrace evil governments and corporations tracking our every move and cataloging our every breath of spoken word?

The thing about these phones is they don’t merely listen in while you talk on them; they listen in virtually all the time, as we’ll see below. It stands to reason that the camera is also always looking for imagery worth cataloging. These arguments, and my barrage of many others, were stonewalled with the general idea that “to stay competitive in the business world I need the latest technology”. After hearing this enough times it occurred to me that Google seems to have set themselves up for the ultimate scheme in mass-scale industrial espionage.

I’ll remind everyone that federal law mandates that cell phone manufacturers include built-in GPS for ‘9-1-1 locating’ (tracking) purposes. Furthermore, spook agencies such as the FBI can activate your phone’s microphone and listen in on anything the microphone can pick up. They can even do this while the phone is shut off, as reported by, believe it or not, Fox News.

In 2008, when Google was launching Android, it became obvious to me that Google had set out to make people want to be tracked by GPS. Then, and even still, it just doesn’t occur to people that they’re being tracked by these devices. One odd Android launch applet even had you broadcasting your location to whoever, to brag about when you’re cooking dinner. As they say, you can’t make this stuff up. Today they apparently have something on the order of 50,000 applets, and you can bet around half of those have something to do with GPS.

Of course the federal government has had the ability to track us since at least 2005. Recently it was reported that Verizon and T-Mobile save your location information next to all of your phone call records. Meanwhile, back in February, Obama’s Justice Dept. fought to keep the ‘right’ for virtually all law enforcement officers and agents to use our phones to track and monitor us without a warrant.

So it’s already bad enough that we have this despotic government tracking us like slaves in a rat maze, even when it comes to industry trade secrets:

The most extensive claims yet came this spring in a report written for the European Parliament. The report says that the U.S. National Security Agency, through an electronic surveillance system called Echelon, routinely tracks telephone, fax, and e-mail transmissions from around the world and passes on useful corporate intelligence to American companies.

Among the allegations: that the NSA fed information to Boeing and McDonnell Douglas enabling the companies to beat out European Airbus Industrie for a $6 billion contract; and that Raytheon received information that helped it win a $1.3 billion contract to provide radar to Brazil, edging out the French company Thomson-CSF. These claims follow previous allegations that the NSA supplied U.S. automakers with information that helped improve their competitiveness with the Japanese (see “Company Spies,” May/June 1994).
www.fas.org…

Hearing such allegations isn’t much of a surprise to me, but now we’re talking about a private corporation, Google (that is now partnered with the NSA), being on the direct receiving end of your communications and whereabouts.

As stated earlier, Google will also be listening in on your entire day, not just your phone calls. This is a pretty easy allegation to make, considering that in 2006 Google went on the record stating that they’ll be listening to people’s daily routines by tapping their computer microphones, and with other ‘real world products’.

The idea appeared in Technology Review citing Peter Norvig, director of research at Google, who says these ideas will show up eventually in real Google products – sooner rather than later.

The idea is to use the existing PC microphone to listen to whatever is heard in the background, be it music, your phone going off or the TV turned down. The PC then identifies it, using fingerprinting, and then shows you relevant content, whether that’s adverts or search results, or a chat room on the subject.

And, of course, we wouldn’t put it past Google to store that information away, along with the search terms it keeps that you’ve used, and the web pages you have visited, to help it create a personalised profile that feeds you just the right kind of adverts/content. And given that it is trying to develop alternative approaches to TV advertising, it could go the extra step and help send “content relevant” advertising to your TV as well. www.theregister.co.uk…
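For a sense of how that kind of audio “fingerprinting” can work, here’s a bare-bones sketch of one textbook approach, hashing the dominant frequency in each slice of sound. Google’s actual method is unknown to me, so treat this purely as illustration:

```python
import numpy as np

def fingerprint(samples, window=1024):
    """Hash the loudest frequency bin in each short window of audio."""
    hashes = []
    for start in range(0, len(samples) - window, window):
        chunk = samples[start:start + window] * np.hanning(window)
        spectrum = np.abs(np.fft.rfft(chunk))
        hashes.append(int(np.argmax(spectrum)))   # dominant frequency bin
    return hashes

def similarity(fp_a, fp_b):
    """Fraction of windows whose dominant frequency matches."""
    n = min(len(fp_a), len(fp_b))
    return sum(a == b for a, b in zip(fp_a, fp_b)) / n

# Toy check: a 440 Hz tone matches itself, not an 880 Hz tone. A real
# system would compare against a database of known adverts, songs, shows.
t = np.arange(8000) / 8000.0
tone_a = np.sin(2 * np.pi * 440 * t)
tone_b = np.sin(2 * np.pi * 880 * t)
print(similarity(fingerprint(tone_a), fingerprint(tone_a)))  # 1.0
print(similarity(fingerprint(tone_a), fingerprint(tone_b)))  # ~0.0
```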

So now we’re supposed to trust Google with our web searches, our web travels, emails, phone calls & text messages, GPS location, our daily routines and conversations, and a number of other things? This is the same Google that is directly linked with the NSA, CIA, DARPA, NASA and even the National Science Foundation. The NSF can’t be bad, can they?

Take the example of “personal maps” explained by NSF bigshot Mihail Roco. In his book “Progress in Convergence”, he supports the use of raw GPS data in tracking people’s personal daily movements. He explains:

“over the last years, estimating a person’s activities has gained increased interest in the artificial intelligence, robotics, and ubiquitous computing communities.”

He continues:

“the concept of a personal map, which is customized based on an individual’s behavior. A personal map includes personally significant places, such as home, a workplace, shopping centers, and meeting places and personally significant routes (i.e., the paths and transportation modes, such as foot, car, or bus, that the person usually uses to travel from place to place). In contrast with general maps, a personal map is customized and primarily useful for a given person. Because of the customization, it is well suited for recognizing an individual’s behavior and offering detailed personalized help.”

It goes on to highlight the use of AI-powered personal maps to discriminate a target’s activities, predict future movements and transportation modes, and infer when the target has broken their ordinary routine. (See the full paper on this key point.)
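The core of a “personal map” is less exotic than it sounds: cluster the GPS trace, and the clusters are your “personally significant places”. A crude sketch of the idea, on made-up coordinates (my illustration, not anything from Roco’s paper):

```python
import random
from collections import Counter

def significant_places(fixes, cell=0.001, min_visits=50):
    """Bucket (lat, lon) fixes into roughly 100 m grid cells and return
    the heavily revisited ones -- the 'personally significant places'."""
    cells = Counter((round(lat / cell), round(lon / cell))
                    for lat, lon in fixes)
    return {(c[0] * cell, c[1] * cell): count
            for c, count in cells.items() if count >= min_visits}

# Hypothetical trace, one fix per minute: most at "home", some at "work".
home, work = (40.7128, -74.0060), (40.7580, -73.9860)
trace = [(lat + random.gauss(0, 0.0002), lon + random.gauss(0, 0.0002))
         for lat, lon in [home] * 400 + [work] * 200]
print(significant_places(trace))    # two clusters emerge: home and work
```

Once you have the places, “breaking the ordinary routine” is just a fix that lands far from every known cluster at a time of day when the target is normally home.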

So it sounds like the NSF has found in Google their solution to the “personal maps” ‘problem’. (Technocrats always refer to non-achieved ideas, no matter how alarming & dastardly, as “problems” that need to be ‘solved’.) This is assuming that the NSA hasn’t already been using AI “personal maps” software as described by Roco, but I posit that they likely needed Google’s advanced AI systems integrated into their own in order to effectively track and catalog everybody. Aside from the Google-NSF ‘merger’, I normally argue that it’s bad enough that the FedGov is tracking us, but at least if you don’t use Android, corporations like Google don’t have a direct link into your pocket. In any case, the NSA & Google ‘are a match made in hell‘.

Now consider Android’s machine vision features. One that comes to mind is “Biowallet”, a biometric iris scanner applet that won in the first round of Google’s Android “developers challenge”. I hate to think of how many there must be out there who think it would be so cool to give their iris scan to Google and the federal government, using Android.

Another thrust in Google’s machine vision activities in recent years is video facial recognition and object recognition. Google’s quest is to monitor the real world in real time, like a reality-TV version of Google Earth with Street View. By the way, Google acquired the technology behind Google Maps & Google Earth from a project backed by the CIA’s venture firm, In-Q-Tel. (www.resourceshelf.com… )

Neven Vision comes to Google with deep technology and expertise around automatically extracting information from a photo. It could be as simple as detecting whether or not a photo contains a person, or, one day, as complex as recognizing people, places, and objects. This technology just may make it a lot easier for you to organize and find the photos you care about. We don’t have any specific features to show off today, but we’re looking forward to having more to share with you soon.
www.searchenginejournal.com…

Here’s a recent example of another firm’s success in biometric identity acquisition:

Now many might say ‘they don’t have enough people to do all that work’, but Google’s mastery of artificial intelligence has The Machine do this work for them. Their system is so remarkable that when you use Google’s services you not only fund their operation, you also help make it smarter and even more powerful. As I argued in October 2008:

“An intelligent thinking machine would also need ears, and ears they are giving it. Make a call to 1-800-GOOG411 and experience their speech recognition algorithms for yourself. No surprise that the service is free, because the more people use it the more you help them reach their goal of omniscience.”

I was proven correct in November 2008:

If you own an iPhone, you can now be part of one of the most ambitious speech-recognition experiments ever launched. On Monday, Google announced that it had added voice search to its iPhone mobile application, allowing people to speak search terms into their phones and view the results on the screen.

Fortunately, Google also has a huge amount of data on how people use search, and it was able to use that to train its algorithms. If the system has trouble interpreting one word in a query, for instance, it can fall back on data about which terms are frequently grouped together.

Google also had a useful set of data correlating speech samples with written words, culled from its free directory service, Goog411. People call the service and say the name of a city and state, and then say the name of a business or category. According to Mike Cohen, a Google research scientist, voice samples from this service were the main source of acoustic data for training the system.

But the data that Google used to build the system pales in comparison to the data that it now has the chance to collect. “The nice thing about this application is that Google will collect all this speech data,” says Jim Glass, a principal research scientist at MIT. “And by getting all this data, they will improve their recognizer even more.” LINK
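The “fall back on frequently grouped terms” trick is easy to picture. Here’s a toy sketch with made-up counts (nothing like Google’s real model): when two candidate transcriptions sound alike, keep the one whose word pairs show up together more often in the logs.

```python
import math

# Hypothetical bigram counts mined from search logs.
bigram_counts = {
    ("pizza", "near"): 900, ("near", "me"): 5000,
    ("pizza", "deer"): 1,   ("deer", "me"): 2,
}

def score(words):
    """Sum of log bigram counts (add-one smoothed) across the query."""
    return sum(math.log(bigram_counts.get(pair, 0) + 1)
               for pair in zip(words, words[1:]))

# The acoustic model can't tell "near" from "deer"; the log data can.
candidates = [["pizza", "near", "me"], ["pizza", "deer", "me"]]
print(max(candidates, key=score))    # ['pizza', 'near', 'me']
```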

Which brings us back to the topic of trade secrets. The smarter their system becomes, the better it will be able to collect and sort business trade secrets, ideas, concepts, methodologies, haggling skills, ‘dirt’, and so on. You’re literally handing them (and the government, for that matter) the power to eavesdrop on all of your business calls and activities, along with your ‘rolodex’ and any other little notes you store in the phone. Not only is this dangerous for your bottom line, it’s dangerous for us all, creating a monster corporatist quasi-governmental institution wielding the power of “all of the world’s information”. This is so dangerous that we need to be alert to any others positioning themselves likewise.

SEE ALSO:

Forbes:

You’ve heard of Smart Cars. Believe us, they get smarter.

No, we’re not talking about Kit, the wryly soft-spoken roadster from the ’80s TV show Knight Rider. Fully sentient robots like Kit are still a ways off, but computer programmers are starting to make serious progress in the horrendously complex art of artificial intelligence.

Sensible Machines, headquartered in Pittsburgh, makes hardware and software that turn machines into autonomous workhorses with their own sense of positioning, safety and situational awareness. One of its utility vehicles can carry 1,200 pounds and be taught to pick up, carry and unload heavy cargo. The vehicle has a built-in safety system that uses lasers and three-dimensional models of its surroundings to navigate obstacle-strewn warehouses.


Look around and you’ll find A.I. applications popping up in a rash of industries, making once labor-intensive tasks–everything from matching hungry shoppers with targeted advertisements to discharging patients from hospitals–far faster and cheaper. As the cost of computing power continues to fall, A.I. will play an ever larger role in society’s collective decision making. Here are just a few examples …

Cheap Charters

FlyRuby, headquartered in Pittsburgh, is bent on connecting all private charter flights within one computer network to fill open seats more efficiently and slash the price of private flying by half. FlyRuby’s A.I. technology came from an Air Force research initiative that produced a set of computer algorithms that can schedule more than 5,000 missions in less than 10 seconds, depending on the vagaries of weather and war.

Private Investigation

Workstreamer, in Austin, Texas, harnessed the power of A.I. to monitor every last bit and byte of media on people and companies–from newspaper stories and blog posts to tweets, LinkedIn updates and Salesforce.com data. The software finds, sorts and scores data based on relevance. Higher scores mean the information is more important and it goes to the top of a reader’s virtual pile.

Taste-making

San Francisco’s Klout measures the influence of websites and individuals on the Internet. Its algorithms adjust to changing circumstances to analyze content and social patterns. Users can track their own Klout scores and determine what moves them up and down by what they do on the Web.

Personal Assisting

Siri is the brainchild of CALO, a $150 million artificial-intelligence project in Menlo Park, Calif., focused on building cognitive software systems that can follow orders, reason, learn, react, explain and reflect. Apple bought Siri (for an undisclosed sum) to let iPhone users make simple verbal commands–“Get a taxi to my house” or “Make a reservation at the best Italian restaurant in town”–and get results quickly. Siri interacts with existing programs on the Web and prompts users for additional input as needed, such as “Would you like to dine at Rosetti’s or The Olive Branch?”

Second Opinions

Medical imaging helps doctors diagnose everything from breast cancer to brain tumors. Now comes A.I. The University of Chicago’s Department of Radiology is testing a method, called Computer Aided Diagnosis (CAD), where a computer corroborates or challenges a radiologist’s initial diagnosis. The computer combs images for suspicious regions and lesions while estimating the probability of each spot’s malignancy. “Image interpretation by humans can be limited by incomplete visual search patterns, the potential for fatigue and distractions and the presence of structure noise in the image,” says Chicago’s Dr. Maryellen L. Giger.

Study Aid

Knewton, a Manhattan test-preparatory outfit, has built the first adaptive-learning engine that can customize a series of practice tests aimed at the user’s weaknesses. The program first tracks every question and answer, then prioritizes concepts for study each day–all without a human tutor.

Targeted Ad-Delivery Systems

A shopper walks into Macy’s. As she heads down an aisle, she is bombarded with ads on screens eerily relevant to her demographic. Chalk up this accuracy to Northeastern University Professor W. Russell Penn’s facial-recognition technology. By scanning a person’s face, the software determines gender and age; the system can also detect if your shoppers wear glasses, if they’re smiling and what kind of fashion sense and hairstyle they have–all useful information to advertisers.

Virtual Nursing

Timothy Bickmore, a computer science professor at Northeastern University, has created a virtual nurse named Louise who talks patients through the discharge process and can tell if they are correctly absorbing medical instructions. Louise is the work of Bickmore, MIT and the Boston Medical Center. Her skill in reading faces for clues is a huge breakthrough, says Dr. Brian Jack of the Boston Medical Center. Jack says that 20% of discharged patients show up at the hospital within a month because they misunderstood the directions.

Translating

At Carnegie Mellon University, Professor Alex Waibel launched a company called Jibbigo, maker of an iPhone app that provides real-time speech translation in Spanish, Japanese and Iraqi Arabic (important to the military), among others. (Translations can be done from these languages into English or the other way around.) Such highly intelligent speech-recognition technology promises to make the world a much smaller place.

(Really) Smart Cars

Pittsburgh’s Sensible Machines, described at the top of this article, turns machines into autonomous workhorses with their own sense of positioning, safety and situational awareness.

Ergonomic Measurement

Ford Motor Company is building artificial intelligence into its interior design program using technology developed for the Department of Defense and the University of Iowa’s Virtual Soldier Program. The software simulates motion and provides feedback on forces against the human body. Ford engineers use it to confirm that drivers can comfortably reach most controls while maintaining a proper driving position. An avatar named Santos acts as the passenger; if Santos is comfortable with the design, so are Ford’s engineers.