Posts Tagged ‘Supercomputing’

IBM (snippet):

Semiconductor Breakthrough to Benefit Cloud Computing; Paves Way for Next-Gen Servers and Consumer Electronics Devices

ARMONK, NY – 17 Sep 2008: In response to ever-increasing demands for smaller, more powerful and energy-efficient devices for cloud computing and high-performance servers, IBM (NYSE: IBM) today announced the semiconductor industry’s first computationally based process for production of next-generation 22nm semiconductors. Known as Computational Scaling (CS) — a process that enables the production of complex, powerful and energy-efficient semiconductors at 22 nm and beyond — this new initiative will feature support from several of IBM’s key partners, initially including Mentor Graphics and Toppan Printing.

Today, most integrated circuits are manufactured at 45nm or larger technology nodes. Producing circuits at 22nm is a challenging milestone since current lithography methods — the process of designing photomasks to image circuit patterns on silicon wafers in mass quantity — are not adequate for critical layers at 22nm due to fundamental physical limitations. Computational Scaling overcomes these limitations by using mathematical techniques to modify the shape of the masks and characteristics of the illuminating source at each layer of an integrated circuit.

This initiative directly links to IBM’s Cloud Computing strategy, which offers highly scalable, more energy efficient Web services. Through cloud computing, enterprises and individuals can access these services in a highly flexible and open environment. As demand for these services grows, more powerful and flexible servers based upon advanced technologies will be required.
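The mask-correction idea behind Computational Scaling can be sketched in miniature. In the toy model below, the printed image is a blurred copy of the mask, and the mask is iteratively biased until the blurred result matches the target circuit pattern. This is a sketch of the general principle only, with invented numbers, not IBM's actual process:

```python
# Toy 1-D illustration of computational mask correction: the "printed"
# pattern is modeled as a blur of the mask, and the mask is iteratively
# biased until the blurred result matches the target pattern.

def printed(mask, kernel=(0.25, 0.5, 0.25)):
    """Crude optical model: the wafer sees a blurred copy of the mask."""
    out = []
    for i in range(len(mask)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = i + k - 1
            if 0 <= j < len(mask):
                acc += w * mask[j]
        out.append(acc)
    return out

def correct_mask(target, steps=2000, step_size=0.5):
    """Nudge each mask value toward shrinking the print-vs-target error."""
    mask = list(target)  # start from the desired pattern itself
    for _ in range(steps):
        image = printed(mask)
        mask = [m + step_size * (t - p) for m, t, p in zip(mask, target, image)]
    return mask

target = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
naive_error = sum((t - p) ** 2 for t, p in zip(target, printed(target)))
corrected = correct_mask(target)
corr_error = sum((t - p) ** 2 for t, p in zip(target, printed(corrected)))
# The corrected mask over- and under-shoots the nominal pattern (akin to
# biasing and assist features) so the blurred image lands on target.
```

Real lithography adds constraints this sketch ignores (mask transmission limits, 2-D optics, and the illumination source itself as a free variable, as the press release notes), but the principle is the same: the mask that prints correctly is computed, not drawn.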

From The Official Google Blog:

9/18/2008 07:04:00 AM

The Internet has had an enormous impact on people’s lives around the world in the ten years since Google’s founding. It has changed politics, entertainment, culture, business, health care, the environment and just about every other topic you can think of. Which got us to thinking, what’s going to happen in the next ten years? How will this phenomenal technology evolve, how will we adapt, and (more importantly) how will it adapt to us? We asked ten of our top experts this very question, and during September (our 10th anniversary month) we are presenting their responses. As computer scientist Alan Kay has famously observed, the best way to predict the future is to invent it, so we will be doing our best to make good on our experts’ words every day. – Karen Wickre and Alan Eagle, series editors

In coming years, computer processing, storage, and networking capabilities will continue up the steeply exponential curve they have followed for the past few decades. By 2019, parallel-processing computer clusters will be 50 to 100 times more powerful in most respects. Computer programs, more of them web-based, will evolve to take advantage of this newfound power, and Internet usage will also grow: more people online, doing more things, using more advanced and responsive applications. By any metric, the “cloud” of computational resources and online data and content will grow very rapidly for a long time.

As we’re already seeing, people will interact with the cloud using a plethora of devices: PCs, mobile phones and PDAs, and games. But we’ll also see a rush of new devices customized to particular applications, and more environmental sensors and actuators, all sending and receiving data via the cloud. The increasing number and diversity of interactions will not only direct more information to the cloud, they will also provide valuable information on how people and systems think and react.

Thus, computer systems will have greater opportunity to learn from the collective behavior of billions of humans. They will get smarter, gleaning relationships between objects, nuances, intentions, meanings, and other deep conceptual information. Today’s Google search uses an early form of this approach, but in the future many more systems will be able to benefit from it.

What does this mean to Google? For starters, even better search. We could train our systems to discern not only the characters or place names in a YouTube video or a book, for example, but also to recognize the plot or the symbolism. The potential result would be a kind of conceptual search: “Find me a story with an exciting chase scene and a happy ending.” As systems are allowed to learn from interactions at an individual level, they can provide results customized to an individual’s situational needs: where they are located, what time of day it is, what they are doing. And translation and multi-modal systems will also be feasible, so people speaking one language can seamlessly interact with people and information in other languages.

The impact of such systems will go well beyond Google. Researchers across medical and scientific fields can access massive data sets and run analysis and pattern detection algorithms that aren’t possible today. The proposed Large Synoptic Survey Telescope (LSST), for example, may generate over 15 terabytes of new data per day! Virtually any research field will benefit from systems with the ability to gather, manipulate, and learn from datasets at that scale.

Traditionally, systems that solve complicated problems and queries have been called “intelligent”, but compared to earlier approaches in the field of ‘artificial intelligence’, the path that we foresee has important new elements. First of all, this system will operate on an enormous scale with an unprecedented computational power of millions of computers. It will be used by billions of people and learn from an aggregate of potentially trillions of meaningful interactions per day. It will be engineered iteratively, based on a feedback loop of quick changes, evaluation, and adjustments. And it will be built based on the needs of solving and improving concrete and useful tasks such as finding information, answering questions, performing spoken dialogue, translating text and speech, understanding images and videos, and other tasks as yet undefined. When combined with the creativity, knowledge, and drive inherent in people, this “intelligent cloud” will generate many surprising and significant benefits to mankind.
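A quick sanity check (mine, not Google's) on the 50-to-100-times figure above: over the eleven years from 2008 to 2019 it implies a doubling time of roughly two years, i.e. it assumes the historical trend simply continues.

```python
import math

# If capacity grows by `factor` over `years`, the implied doubling time
# is years * ln(2) / ln(factor), from factor = 2 ** (years / doubling).
years = 11  # 2008 -> 2019
doublings = {f: years * math.log(2) / math.log(f) for f in (50, 100)}
# 50x implies doubling every ~1.9 years; 100x, every ~1.7 years --
# roughly the classic Moore's-law cadence.
```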

Computing Research Policy:

Yesterday, Yahoo, Hewlett-Packard, and Intel announced they are partnering with three universities and their governments in the United States, Germany, and Singapore to build a new cloud-computing research initiative. Google and IBM launched a similar program last fall, centered on six American universities. (Microsoft and Intel also launched a different university research partnership earlier this year.)

The new program will provide researchers with six test-bed data centers (one at each of the university and industry partners), each furnished with between 1,000 and 4,000 processors. The University of Illinois at Urbana-Champaign will represent American academia in the partnership, and will be supported in part by the NSF.

As the New York Times’ Steve Lohr points out, “This is competition at its best.”

http://www.physorg.com/news133784131.html

Can an artificial intelligence program anticipate military surprises? The USC Information Sciences Institute is playing a $7.6 million part in a DARPA research effort called Deep Green aimed at creating a system that can do so, one that might help future combat commanders in the field anticipate enemy moves.
The same system would look around and recruit additional computing resources if the situation were too dire, the problem too difficult.
The Deep Green program, a next-generation battle command and decision support technology, is the vision of Col. John Surdu, who manages the program for the Information Processing Techniques Office of DARPA.

The system interleaves anticipatory planning with adaptive execution to help the commander think ahead, identify when a plan is going awry, and prepare options before they are needed.

Deep Green will use a human operator’s hand-drawn sketches and words to infer intent. It will generate options for all sides in an operation and predict the likelihood of multiple futures.

By presenting decisions early and allowing the commander to “see the future,” Deep Green supports the commander’s visualization and adaptive execution, enabling correct, timely decisions by the commander.

Deep Green has several components, including novel interfaces for getting guidance from and presenting options to commanders, powerful simulations of the battlespace, and methods for efficiently searching the space of future options. The prime contractor, responsible for all these elements, is SAIC.

ISI researcher Paul Cohen, heading one of the two ISI groups subcontracting on the program, notes that the name is meant to recall Deep Blue, the famous IBM chess playing program that defeated world champion Garry Kasparov in a 1997 match, a landmark in the history of artificial intelligence.

“But chess is a special, artificial situation,” Cohen notes. “The pieces occupy fixed positions for long intervals, then move instantaneously.”

A battlefield is a very different place, Cohen says. There, units on both sides are in continuous motion. Moreover, chess players can see the whole board, whereas commanders have limited visibility of the battlefield.

A program like Deep Blue visualizes where pieces might move in the future, based on the moves possible for knight, bishop, and so on. The problem for Deep Green is that time and location change continuously, so the very notion of a “state of the board” needs a new formulation.

Cohen, deputy director of ISI’s Intelligent Systems Division and director of the ISI Center for Research on Unexpected Events, is working with Yu-Han Chang on a $6 million segment of the effort. The pair are creating tools by analyzing a last-man-standing free-for-all struggle called “Arena War.”

Chang and Cohen’s program, called Adversarial Continuous Time and Space Search (ACTSS), represents collections of interacting combatants (units) by what are called “fluents,” a concept close to the time-and-space operators called vectors familiar to first-year physics students.

Fluents represent periods during which the activities of the modeled units do not conflict or interfere with one another, and no unit completes its mission or arrives at its goal. When one of those events occurs, a decision point is reached, where new vectors have to be assigned, creating new fluents.

“Rather than relying on copious amounts of sampling to estimate future outcomes,” reads a report presented by Chang, Cohen and Wesley Kerr in November 2007, “fluents take advantage of process models that can either be solved in closed form or can be efficiently updated recursively.”
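The "closed form" point can be illustrated with a toy sketch (mine, with invented names and numbers, not the actual ACTSS code): while two units hold fixed velocity vectors, the first time they come within some range of each other is the root of a quadratic, so the decision point can be computed exactly instead of by stepping a simulation forward and sampling.

```python
# Toy illustration of the fluent idea: while two units hold fixed velocity
# vectors, their positions are p(t) = p + t*v, so the first moment they
# come within a given range solves a quadratic -- no forward sampling.
import math

def intercept_time(p1, v1, p2, v2, radius):
    """Earliest t >= 0 at which two constant-velocity units are within
    `radius` of each other, or None. Solves |dp + t*dv|^2 = radius^2."""
    dpx, dpy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    a = dvx * dvx + dvy * dvy
    b = 2 * (dpx * dvx + dpy * dvy)
    c = dpx * dpx + dpy * dpy - radius * radius
    if c <= 0:
        return 0.0                      # already within range
    if a == 0:
        return None                     # no relative motion
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                     # closest approach stays out of range
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# Two units on converging courses: the decision point, where new vectors
# must be assigned, is when they first come within weapons range.
t = intercept_time((0, 0), (1, 0), (10, 2), (-1, 0), radius=3.0)
```

Because the event time falls out of a formula, a whole fluent (the interval up to the decision point) is evaluated in one step, which is what makes the approach cheap compared with step-by-step simulation.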

The ACTSS system aids a human commander in the Arena War by “generating, evaluating and monitoring possible futures. It identifies potential critical points in these futures, and … ranks the options for possible next actions.”

In the words of the report: “To play Arena War with the help of the ACTSS system, the commander [i.e., human operator] first inputs his plan of attack, as well as his expectations about the actions his opponents will take.” This can either be in the form of a list of specific actions (“first go to point A, then to point B”) or by programming simple instructions into the pieces, such as “move away from pieces you see trying to move toward you.”

“With the plans inputted, the commander can then start the game and the ACTSS system will immediately generate updated Futures Graphs at fixed intervals.” The graphs look ahead in time, detailing how successions of fluents could develop from the fluents in play at the beginning.

The ACTSS system uses these look-ahead graphs to see whether the commander’s forces are in danger of what in chess would be check, and does so soon enough for the commander to change activity to counter the threat. Moreover, the look-ahead is very efficient.
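A minimal sketch of that look-ahead idea (again invented for illustration, not the ACTSS implementation): enumerate the options at each decision point a few plies deep, flag futures in which friendly forces end up threatened, and rank the survivors.

```python
# Hypothetical, much-simplified futures graph. Each node is a fluent;
# `options` lists the actions available at its decision point, `advance`
# yields the next fluent, and `threat` flags positions that are "in check."

def expand_futures(fluent, options, advance, threat, depth):
    """Enumerate futures `depth` decision points ahead as
    (action sequence, final fluent, ever-threatened?) tuples."""
    if depth == 0:
        return [((), fluent, threat(fluent))]
    futures = []
    for action in options(fluent):
        nxt = advance(fluent, action)
        for actions, leaf, danger in expand_futures(nxt, options, advance, threat, depth - 1):
            futures.append(((action,) + actions, leaf, danger or threat(nxt)))
    return futures

# Toy domain: a single scalar "advantage" pushed up or down by actions.
options = lambda f: ["advance", "hold", "withdraw"]
advance = lambda f, a: f + {"advance": 2, "hold": 0, "withdraw": -1}[a]
threat = lambda f: f < -1          # advantage has collapsed

futures = expand_futures(0, options, advance, threat, depth=2)
safe = [f for f in futures if not f[2]]
best = max(safe, key=lambda f: f[1])   # rank: stay safe, maximize advantage
```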

Promising as the opening is, says Davis, “the goals of Deep Green can only be met by optimizing the use of remote computational assets. It is clear that the warfighter could use more compute capability than can be carried into the battlespace.”

Enter a parallel supporting effort by Robert F. Lucas, director of ISI’s computational science division, and Dan Davis, aimed at finding ways to put the huge computational resources necessary to solve complicated fluents problems into a system that could actually be used in chaotic wartime conditions — “in a tent,” says Davis.

Davis and Lucas are working on a $1.6 million contract to create a system that links to portable electronics; a very efficient, bandwidth-saving, distributed computing platform; and an effective method for assessing local computation and communications limitations.

If it can be done, if they can create a very large, globally distributed computer network that still requires very little bandwidth, the Deep Green system can be made scalable: “it will run effectively on one processor to twenty processors on scene, or hundreds within the battlespace, or thousands across the globe,” explains Lucas.

“This capability means that the commander will never be without some assistance, no matter the communications situation, but can have the power of remote computers, when conditions permit,” he continued.
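The scaling behavior Lucas describes can be sketched as a work-sharding pattern (a hypothetical illustration, not the actual Deep Green platform): independent future evaluations are farmed out to however many workers are reachable, and the planner falls back to purely local computation when none are.

```python
# Hedged sketch of "scales from one processor to thousands": shard
# independent future evaluations across whatever workers are reachable,
# degrading to purely local computation when no remote resources respond.
from concurrent.futures import ThreadPoolExecutor

def evaluate_future(future_id):
    # Stand-in for an expensive fluent/simulation evaluation.
    return future_id * future_id % 97

def evaluate_all(future_ids, workers_available):
    workers = max(1, workers_available)      # always at least the local CPU
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(evaluate_future, future_ids))

local_only = evaluate_all(range(100), workers_available=0)   # "in a tent"
in_theater = evaluate_all(range(100), workers_available=20)  # help arrived
# Same answers either way; only the wall-clock time changes.
```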

Davis and Lucas earlier worked on war-game models of unprecedented scale, involving millions of autonomous units moving in continent-scale environments, assembling computer resources from across the country. Key advances realized in that effort included complementary routers (Web and Tree) that could integrate different simulation modes. This work will be integrated into the Deep Green effort.

ISI researchers acknowledge that many problems remain to be solved. “But we already can play a mean game of Arena War,” says Cohen.

Source: University of Southern California

“The Big Switch,” by Nicholas Carr, gives fresh insights into the future of computing and the Internet, and in the last chapter compiles many juicy Google A.I. quotes.

Carr provides a rather dystopian view of the future, in which the computing industry is on the verge of widening the gap between the rich and the poor as computing becomes a “utility” like electricity. The idea is that it will soon be cheaper for businesses to pay a computing utility for terminal->mainframe-style access for their offices, cutting out the need for in-house IT departments and technicians. You simply plug the office-to-go terminal into the wall and it’s ready to go via high-speed Internet.

While I haven’t read every page so far, I’ve seen enough to realize the book’s deep insightfulness, as well as Carr’s compelling “utility” argument. In the last chapter, “iGod”, he presents many quotes from Google’s founders and CEO on the topic of AI.

First I’ll focus on the Google material and the book’s crucial shortcomings; after that come citations from Carr’s interviews and book reviews that better explain the book’s other key material.

My only gripe is that Carr didn’t mention anything about the DARPA+NASA+Google partnership; nothing about DOD .mil/.gov projects. In fact, DARPA isn’t even listed in the index (ARPA is). But given everything else that was said, the book probably wouldn’t have been published if he had gone anywhere beyond what he did, especially anywhere near the sort of .mil material I usually carry on about.

Carr did miss possibly the oldest yet most shocking Google AI quote:

What would a perfect search engine look like? we asked. “It would be the mind of God.”

And Brin talking about NASA + Google:


http://www.youtube.com/IgnoranceIsntBliss

But he did provide enough Google quotes to leave little doubt about whether they mean cognitive, self-aware AI or merely ‘software performing tricks that makes it seem intelligent,’ as most skeptics like to soothsay about our future.

Here are the Google AI quotes Carr did use, mostly in a more raw format:

http://www.kottke.org/plus/misc/google-playboy.html:

BRIN: The solution isn’t to limit the information you receive. Ultimately you want to have the entire world’s knowledge connected directly to your mind.

PLAYBOY: Is that what we have to look forward to?

BRIN: Well, maybe. I hope so. At least a version of that. We probably won’t be looking up everything on a computer.

PLAYBOY: How will we use Google in the future?

BRIN: Probably in many new ways. We’re already experimenting with some. You can call a phone number and say what you want to search for, and it will be pulled up. At this stage it’s obviously just a toy, but it helps us understand how to develop future products.

PLAYBOY: Is your goal to have the entire world’s knowledge connected directly to our minds?

BRIN: To get closer to that—as close as possible.

PLAYBOY: At some point doesn’t the volume become overwhelming?

BRIN: Your mind is tremendously efficient at weighing an enormous amount of information. We want to make smarter search engines that do a lot of the work for us. The smarter we can make the search engine, the better. Where will it lead? Who knows? But it’s credible to imagine a leap as great as that from hunting through library stacks to a Google session, when we leap from today’s search engines to having the entirety of the world’s information as just one of our thoughts.

http://www.pbs.org/newshour/bb/business/july-dec02/google_11-29.html:

LARRY PAGE: And, actually, the ultimate search engine, which would understand, you know, exactly what you wanted when you typed in a query, and it would give you the exact right thing back, in computer science we call that artificial intelligence. That means it would be smart, and we’re a long ways from having smart computers.

SPENCER MICHELS: Sergey Brin thinks the ultimate search engine would be something like the computer named Hal in the movie 2001: A Space Odyssey.

SERGEY BRIN: Hal could… had a lot of information, could piece it together, could rationalize it. Now, hopefully, it would never… it would never have a bug like Hal did where he killed the occupants of the space ship. But that’s what we’re striving for, and I think we’ve made it a part of the way there.

http://www.news.com/Will-search-keep-Google-on-the-throne/2100-1032_3-6070774.html:

In five years, Google will have built “the product I’ve always wanted to build–we call it ‘serendipity,'” he said, adding that it will “tell me what I should be typing.”

Also coming in the future: simultaneous translation in the major languages and the ability to take a picture on a mobile phone and use OCR (optical character recognition) to find out what it’s a picture of, he added.

“We have literally just begun on the potential of this unification,” he said.

http://www.notablebiographies.com/news/Ow-Sh/Page-Larry-and-Brin-Sergey.html:

Brin told Levy in Newsweek just before that period that he and Page were content to keep tinkering with their research-paper idea. “I think we’re pretty far along compared to 10 years ago,” he said. “At the same time, where can you go? Certainly if you had all the world’s information directly attached to your brain, or an artificial brain that was smarter than your brain, you’d be better off. Between that and today, there’s plenty of space to cover.”

http://www.edge.org/3rd_culture/dyson05/dyson05_index.html:

“We are not scanning all those books to be read by people,” explained one of my hosts after my talk. “We are scanning them to be read by an AI.”




Slide Images Source

http://jurvetson.blogspot.com/2005/01/thanks-for-memory.html:

Every time I talk about Google’s future with Larry Page, he argues that it will become an artificial intelligence.

http://www.channel4.com/news/articles/world/google%20vision/165090:



The answer – artificial intelligence – with search engines so powerful they would understand “everything in the world”.

Page 212:

During a question-and-answer session after a presentation at his alma mater, Stanford University, in May 2002, Page said that Google would fulfill its mission only when its search engine was “AI-complete”. “You guys know what that means? That’s artificial intelligence”.

Page 214:

…in February 2007, (Larry Page) told a group of scientists that Google has a team of employees who are “really trying to build an artificial intelligence and do it on a large scale.” The fulfillment of their goal, he said, is “not as far off as people think.”

SOME OF CARR’S COMMENTARY:

“Every time we write a link, or even click on one, we are feeding our intelligence into Google’s system. We are making the machine a little smarter – and Brin, Page, and all of Google’s shareholders a little richer.” p.219

“Brin and Page have programmed their machine to gather the crumbs of intelligence that we leave behind on the web as we go about our everyday business.” p.220

“The transfer of our intelligence into the machine will happen, in other words, whether or not we allow chips or sockets to be embedded in our skulls.” p.220

AUDIO INTERVIEW:

http://www.cbc.ca/spark/blog/2008/02/the_big_switch_interview_with_1.html

NEWS CITATIONS:

Forbes interview with total focus on Google’s stake in Carr’s arguments. Here are the AGI portions…
http://www.forbes.com/2008/01/11/google-carr-computing-tech-enter-cx_ag_0111computing.html:

Looking further ahead at Google’s intentions, you write in The Big Switch that Google’s ultimate plan is to create artificial intelligence. How does this follow from what the company’s doing today?

It’s pretty clear from what [Google co-founders] Larry Page and Sergey Brin have said in interviews that Google sees search as essentially a basic form of artificial intelligence. A year ago, Google executives said the company had achieved just 5% of its complete vision of search. That means, in order to provide the best possible results, Google’s search engine will eventually have to know what people are thinking, how to interpret language, even the way users’ brains operate.

Google has lots of experts in artificial intelligence working on these problems, largely from an academic perspective. But from a business perspective, artificial intelligence’s effects on search results or advertising would mean huge amounts of money.

You’ve also suggested that Google wants to physically integrate search with the human brain.

This may sound like science fiction, but if you take Google’s founders at their word, this is one of their ultimate goals. The idea is that you no longer have to sit down at a keyboard to locate information. It becomes automatic, a sort of machine-mind meld. Larry Page has discussed a scenario where you merely think of a question, and Google whispers the answer into your ear through your cellphone.

What would an ultra-intelligent Google of the future look like?

I think it’s pretty clear that Google believes that there will eventually be an intelligence greater than what we think of today as human intelligence. Whether that comes out of all the world’s computers networked together, or whether it comes from computers integrated with our brains, I don’t know, and I’m not sure that Google knows. But the top executives at Google say that the company’s goal is to pioneer that new form of intelligence. And the more closely they can replicate or even expand how people’s minds work, the more money they make.

You don’t seem very optimistic about a future where Google is smarter than humans.

I think if Google’s users were aware of that intention, they might be less enthusiastic about the prospect than the mathematicians and computer scientists at Google seem to be. A lot of people are worried about what a superior intelligence would mean for human beings.

I’m not talking about Google robots walking around and pushing humans into lines. But Google seems intent on creating a machine that’s able to do a lot of our thinking for us. When we begin to rely on a machine for memory and decision making, you have to wonder what happens to our free will.

http://www.wired.com/techbiz/people/magazine/16-01/st_qa:

Carr: It’s no coincidence that Google CEO Eric Schmidt cut his teeth there. Google is fulfilling the destiny that Sun sketched out.

Wired: But a single global system?

Carr: I used to think we’d end up with something dynamic and heterogeneous — many companies loosely joined. But we’re already seeing a great deal of consolidation by companies like Google and Microsoft. We’ll probably see some kind of oligopoly, with standards that allow the movement of data among the utilities similar to the way current moves through the electric grid.

Wired: What happened to the Web undermining institutions and empowering individuals?

Carr: Computers are technologies of liberation, but they’re also technologies of control. It’s great that everyone is empowered to write blogs, upload videos to YouTube, and promote themselves on Facebook. But as systems become more centralized — as personal data becomes more exposed and data-mining software grows in sophistication — the interests of control will gain the upper hand. If you’re looking to monitor and manipulate people, you couldn’t design a better machine.

Wired: So it’s Google über alles?

Carr: Yeah. Welcome to Google Earth. A bunch of bright computer scientists and AI experts in Silicon Valley are not only rewiring our computers — they’re dictating the future terms of our culture. It’s terrifying.

Wired: Back to the future — HAL lives!

Carr: The scariest thing about Stanley Kubrick’s vision wasn’t that computers started to act like people but that people had started to act like computers. We’re beginning to process information as if we’re nodes; it’s all about the speed of locating and reading data. We’re transferring our intelligence into the machine, and the machine is transferring its way of thinking into us.

http://www.businessweek.com/magazine/content/08_03/b4067000350895.htm:

He cites the inventor of the World Wide Web, Tim Berners-Lee, who predicted that the Web would bring “the workings of society closer to the workings of our minds.” In a similar vein, author Kevin Kelly wrote that the “gargantuan” computer provided “a new mind for an old species.” In the end, “We will live inside this thing.”

In a sense, Carr agrees. But he’s hardly sanguine about the results. He sees mankind increasingly laboring for this machine, continually feeding it with data about our every keystroke, e-mail, purchase, or movement. And in time this global computer will get smarter, learning more and more about the patterns of humanity and the world. Carr calls this process “the transfer of our intelligence into the machine,” something he finds troubling.

Indeed, Carr worries that individuals could eventually become just neurons in this global brain or, reverting to his 19th century analogy, “cogs in an intellectual machine whose workings and ends are beyond us.” Scary? No doubt. But as we prepare for the World Wide Computer, it’s not a bad idea to consider its dark side.

http://www.sfgate.com/cgi-bin/article.cgi?f=/c/a/2008/01/20/RV1OU8V4C.DTL:

Carr quotes former Wired editor and perennial hive-mind enthusiast Kevin Kelly, who proclaims: “The more we teach this megacomputer, the more it will assume responsibility for our knowing. It will become our memory. Then it will become our identity. In 2015 many people, when divorced from the Machine, won’t feel like themselves – as if they’d had a lobotomy.”

Or, as a zealot of another stripe put it: “[As] machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won’t be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.”

That quote, Carr points out, comes from Ted Kaczynski’s Unabomber Manifesto. “What was for Kaczynski a paranoia-making nightmare is for Kelly a vision of utopia,” he writes – and it’s a fact that should give us all pause as we rush headlong into the connected future.

http://online.wsj.com/article/SB119983517210276159.html?mod=2_1167_1:

We take so much of this network effect for granted that we don’t really think about it anymore. When we use a toaster, we don’t speak of “going onto the electrical grid.” Soon, Nicholas Carr argues in “The Big Switch,” we may no longer think of ourselves as “going onto the Internet.” The Web’s services will be as ubiquitous, networked and shared as electricity now is. He predicts that we’ll get into the habit of entering a “cloud” of computing, accessing services provided by Google, Facebook, Salesforce.com and innovators yet to come, no longer tethered to whatever software may be loaded onto our own computer.

Just as Edison’s business model failed, Mr. Carr argues, so will Bill Gates’s — i.e., Microsoft’s emphasis on licensing Windows and highly profitable applications like Office for each of the personal computers in a company. The “big switch,” Mr. Carr says, will put more and more such products and services on the Web. In “The End of IT” (2004), he argued controversially that, for most firms, investing substantially in information technology did not make sense, since IT could no longer deliver competitive advantage. Actually, just about every industry has a well-known leader that still extracts competitive advantage from its own information technology, usually from clever software rather than hardware. But it is true that companies increasingly depend on off-the-shelf applications rather than large-scale custom efforts.

Mr. Carr is anything but a triumphalist, however. He worries that the Web as a networked “utility” will have troubling effects. “Whereas industrialization in general and electrification in particular created many new office jobs even as they made factories more efficient,” Mr. Carr writes, “computerization is not creating a broad new class of jobs to take the place of those it destroys.”

This event is now irrelevant! Here’s the Big Story:
http://agimanhattanproject.com/

c|net:

An American military supercomputer, assembled from components originally designed for video game machines, has reached a long-sought-after computing milestone by processing more than 1.026 quadrillion calculations per second.

The new machine is more than twice as fast as the previous fastest supercomputer, the IBM BlueGene/L, which is based at Lawrence Livermore National Laboratory in California.

The new $133 million supercomputer, called Roadrunner in a reference to the state bird of New Mexico, was devised and built by engineers and scientists at IBM and Los Alamos National Laboratory, based in Los Alamos, N.M. It will be used principally to solve classified military problems to ensure that the nation’s stockpile of nuclear weapons will continue to work correctly as they age. The Roadrunner will simulate the behavior of the weapons in the first fraction of a second during an explosion.

Before it is placed in a classified environment, it will also be used to explore scientific problems like climate change. The greater speed of the Roadrunner will make it possible for scientists to test global climate models with higher accuracy.


To put the performance of the machine in perspective, Thomas P. D’Agostino, the administrator of the National Nuclear Security Administration, said that if all 6 billion people on earth used hand calculators and performed calculations 24 hours a day, seven days a week, it would take them 46 years to do what the Roadrunner can in one day.
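The comparison checks out arithmetically; working it backwards (my calculation, not the article's) shows it assumes each person performs roughly ten calculator operations per second:

```python
# Back-of-the-envelope check of the hand-calculator comparison: what
# per-person calculation rate does it imply?
roadrunner = 1.026e15                 # calculations per second (petaflop scale)
per_day = roadrunner * 86400          # one day of Roadrunner output

people = 6e9
seconds = 46 * 365.25 * 24 * 3600     # 46 years of round-the-clock work
rate = per_day / (people * seconds)   # implied calculations/second per person
# rate comes out near 10 -- about ten calculator operations per person
# per second, sustained for 46 years.
```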

The machine is an unusual blend of chips used in consumer products and advanced parallel computing technologies. The lessons that computer scientists learn by making it calculate even faster are seen as essential to the future of both personal and mobile consumer computing.

The high-performance computing goal, known as a petaflop (one thousand trillion calculations per second), has long been viewed as a crucial milestone by military, technical and scientific organizations in the United States, as well as by a growing group including Japan, China and the European Union. All view supercomputing technology as a symbol of national economic competitiveness.

By running programs that find a solution in hours or even less time, compared with as long as three months on older generations of computers, petaflop machines like Roadrunner have the potential to fundamentally alter science and engineering, supercomputer experts say. Researchers can ask questions and receive answers virtually interactively and can perform experiments that would previously have been impractical.
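The jump from months to hours can be put in rough numbers. The runtimes and the speedup factor below are illustrative assumptions, not figures from the article:

```python
# Rough illustration of the claimed turnaround improvement: a job
# that took three months on an older machine, run on a system
# 1000x faster, finishes in about two hours.
hours_per_month = 30 * 24
old_runtime_hours = 3 * hours_per_month        # ~2160 hours
speedup = 1000                                 # illustrative factor

new_runtime_hours = old_runtime_hours / speedup
print(f"{old_runtime_hours} hours -> {new_runtime_hours:.2f} hours")
```

A turnaround of a couple of hours rather than a season is what makes the "virtually interactive" style of research plausible.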

“This is equivalent to the four-minute mile of supercomputing,” said Jack Dongarra, a computer scientist at the University of Tennessee who for several decades has tracked the performance of the fastest computers.

Each new supercomputing generation has brought scientists a step closer to faithfully simulating physical reality. It has also produced software and hardware technologies that have rapidly spilled out into the rest of the computer industry for consumer and business products.

Technology is flowing in the opposite direction as well. Consumer-oriented computing began dominating research and development spending on technology shortly after the cold war ended in the late 1980s, and that trend is evident in the design of the world’s fastest computers.

The Roadrunner is based on a radical design that includes 12,960 chips that are an improved version of an IBM Cell microprocessor, a parallel processing chip originally created for Sony’s PlayStation 3 video-game machine. The Sony chips are used as accelerators, or turbochargers, for portions of calculations.

The Roadrunner also includes a smaller number of more conventional Opteron processors, made by Advanced Micro Devices, which are already widely used in corporate servers.

“Roadrunner tells us about what will happen in the next decade,” said Horst Simon, associate laboratory director for computer science at the Lawrence Berkeley National Laboratory. “Technology is coming from the consumer electronics market and the innovation is happening first in terms of cell phones and embedded electronics.”

The innovations flowing from this generation of high-speed computers will most likely result from the way computer scientists manage the complexity of the system’s hardware.

Roadrunner, which consumes roughly three megawatts of power, or about the power required by a large suburban shopping center, requires three separate programming tools because it has three types of processors. Programmers have to figure out how to keep all of the 116,640 processor cores in the machine occupied simultaneously in order for it to run effectively.

“We’ve proved some skeptics wrong,” said Michael R. Anastasio, a physicist who is director of the Los Alamos National Laboratory. “This gives us a window into a whole new way of computing. We can look at phenomena we have never seen before.”

Solving that programming problem is important because in just a few years personal computers will have microprocessor chips with dozens or even hundreds of processor cores. The industry is now hunting for new techniques for making use of the new computing power. Some experts, however, are skeptical that the most powerful supercomputers will provide useful examples.

“If Chevy wins the Daytona 500, they try to convince you the Chevy Malibu you’re driving will benefit from this,” said Steve Wallach, a supercomputer designer who is chief scientist of Convey Computer, a start-up firm based in Richardson, Tex.

Those who work with weapons might not have much to offer the video gamers of the world, he suggested.

Many executives and scientists see Roadrunner as an example of the resurgence of the United States in supercomputing.

Although American companies had dominated the field since its inception in the 1960s, in 2002 the Japanese Earth Simulator briefly claimed the title of the world’s fastest by executing more than 35 trillion mathematical calculations per second. Two years later, a supercomputer created by IBM reclaimed the speed record for the United States. The Japanese challenge, however, led Congress and the Bush administration to reinvest in high-performance computing.

“It’s a sign that we are maintaining our position,” said Peter J. Ungaro, chief executive of Cray, a maker of supercomputers. He noted, however, that “the real competitiveness is based on the discoveries that are based on the machines.”

Having surpassed the petaflop barrier, IBM is already looking toward the next generation of supercomputing. “You do these record-setting things because you know that in the end we will push on to the next generation and the one who is there first will be the leader,” said Nicholas M. Donofrio, an IBM executive vice president.

By breaking the petaflop barrier sooner than had been generally expected, the United States’ supercomputer industry has been able to sustain a pace of continuous performance increases, improving a thousandfold in processing power in 11 years. The next thousandfold goal is the exaflop, which is a quintillion calculations per second, followed by the zettaflop, the yottaflop and the xeraflop.
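The thousandfold-in-11-years figure implies a steady annual growth rate, which is easy to check directly. The milestone ladder below follows the article's sequence through the yottaflop; "xeraflop" has no standard decimal value, so it is left out:

```python
import math

# A thousandfold improvement over 11 years implies an annual growth
# factor of 1000^(1/11) -- roughly 1.87x per year, i.e. performance
# doubling about every 13 months.
annual_factor = 1000 ** (1 / 11)
doubling_years = math.log(2) / math.log(annual_factor)
print(f"annual growth factor: {annual_factor:.2f}")
print(f"doubling time: {doubling_years:.2f} years")

# The milestones named in the article, in calculations per second:
milestones = {"petaflop": 1e15, "exaflop": 1e18,
              "zettaflop": 1e21, "yottaflop": 1e24}
```

At that pace each named milestone is another factor of a thousand, or roughly another decade of sustained improvement.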

More:

IBM Roadrunner – Wikipedia, the free encyclopedia