Archive for the ‘2006’ Category

Google Blogoscoped:

A company paper* that Google published internally earlier this year, and which I got hold of, outlined some of Google’s big goals and directions for 2006. The list included several items, for example:

  • Google wants to have an improved infrastructure to make their engineers more productive. This includes allowing employees to have a universal search tool “containing all public Google information searched on all Google searches.” Google also wants to build 10MW of green power to be on track to be carbon neutral. (They also want to reduce “Borg disk waste” by 50%… hmmm, Borg?)
  • Google wants to be the best in search – no surprise here. To reach that goal, Google wants to have the world’s top AI research laboratory. They are also focusing on getting rid of spam in the top 20 user languages, and increasing the accuracy of information they collect (through measures such as annotation). Another part of improving search is to always launch crucial user interface updates “that people love.”
  • Google wants to push their ad system. For example, in 2006 one of their aims was to sell $1B of new inventory. Google feels that if they make the world’s inventory available, “marketers will come.”
  • Google also wants to push their communities and content. According to the papers Google published, Google Video has 50% of the world’s “online video attention” (a number that’s hard to believe, and especially interesting because Google still ended up buying YouTube). Google also emphasized that a fifth of all communication bandwidth – on Google-owned properties, I suppose – is read through Gmail.
  • Google tries to make sure their tools are running everywhere. Around mid-2006, according to their internal numbers, 60 million Google Packs had been installed, but they still want to increase deployment… especially for “novice users.”
  • Google is always focusing on innovation. One of their top goals in 2006 thus was to “increase the scale of innovation” even as the internal headcount grows (growth that dozens of engineering scouts located around the world help ensure).

One more specific objective Google outlined as a company goal earlier this year, in another paper** available to me, was to internally test a Google News prototype during the fourth quarter. This “radically improved” prototype should allow “other news sources, and organizations and individuals mentioned in news stories to debate specific points.” I wonder what that means… anyone? I’m as puzzled about this as I am about the “Onebox @ 100% via SETI” mentioned elsewhere in the document (though unless Google is looking for extra-terrestrials, SETI is probably the code name for some internal infrastructure)… or the abbreviations “FIGSCJKR spam” and “EFIGSCJKR” (the latter being something where Google wants to beat Yahoo).

In the meantime, Marissa Mayer was responsible for ensuring that any site with over 10 million page views (per day? month?) renders in a second or less 95% of the time. Other teams saw their goals outlined in terminology such as “70% user happiness” (Gmail 2.0), “host XXM photos, up from XM” (Picasa Web), “an additional XXk machines for production indexing” (index freshness), “reduce bad landing page impressions by 20%” (ads), or “Playbacks: XXM/day” (Google Video). If Google’s “release frenzy” often appears chaotic from the outside, their internal goals do look very precise and organized… and almost every goal has a number attached to it, even in a seemingly fuzzy area like user happiness.
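
(As an aside, a goal like “renders in a second or less 95% of the time” is just a 95th-percentile latency target, which can be checked mechanically. A minimal Python sketch, with invented sample latencies rather than anything from the Google papers:)

    # Minimal sketch of checking a "renders in <= 1s, 95% of the time" goal.
    # The sample latencies are invented, not Google's numbers.

    def percentile(samples, pct):
        """Value at the given percentile (nearest-rank method)."""
        ordered = sorted(samples)
        rank = max(1, round(pct / 100 * len(ordered)))
        return ordered[rank - 1]

    render_times_s = [0.31, 0.48, 0.52, 0.61, 0.70, 0.74, 0.81, 0.88, 0.95, 1.40]

    p95 = percentile(render_times_s, 95)
    print(f"95th percentile render time: {p95:.2f}s")
    print("goal met" if p95 <= 1.0 else "goal missed")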

Another interesting feature foreshadowed in the Google papers was grabbing relevant locations & dates from web pages, allowing users to “view results on a timeline or map.” Keep in mind the papers are a few months old by now, so this might be what was already released as Google News Archive search last month, or the Google Trends site.

While the documents do not mention the goal of trying to decrease Google self-censorship in China, there was mention of a Chinese “Knowledge Search Beta.”

All in all, Google is trying to improve existing products and launch new ones – but not too many, so as not to become disorganized, as they have publicly stated, too (the document contains the simple directive “Count total number of Google products and reduce by 20%”). They also keep focusing on leading in search through a variety of features like index freshness & quality, as well as onebox results like Google Base or Google Co-op… and possibly, some day, paid results as well, according to one of their objectives (my emphasis – and again note the last bit may refer to Google’s already released News Archive search):

Launch Google Archive Search with XXXM docs and Google.com integration as well as [a] Paid Content results section on Google.com

[Thanks John, Tony & M. for advice, and thanks A.!]

*The document is titled “Big Goals and Directions – 2006”.

**The second document is titled “Objectives and Key Results – Q3 2006 Company OKRs”.

“I AM continually shocked and appalled at the details people voluntarily post online about themselves.” So says Jon Callas, chief security officer at PGP, a Silicon Valley-based maker of encryption software. He is far from alone in noticing that fast-growing social networking websites such as MySpace and Friendster are a snoop’s dream.

New Scientist has discovered that the Pentagon’s National Security Agency, which specialises in eavesdropping and code-breaking, is funding research into the mass harvesting of the information that people post about themselves on social networks. And it could harness advances in internet technology – specifically the forthcoming “semantic web” championed by the web standards organisation W3C – to combine data from social networking websites with details such as banking, retail and property records, allowing the NSA to build extensive, all-embracing personal profiles of individuals.

Americans are still reeling from last month’s revelations that the NSA has been logging phone calls since the terrorist attacks of 11 September 2001. The Congressional Research Service, which advises the US legislature, says phone companies that surrendered call records may have acted illegally. However, the White House insists that the terrorist threat makes existing wire-tapping legislation out of date and is urging Congress not to investigate the NSA’s action.

Meanwhile, the NSA is pursuing its plans to tap the web, since phone logs have limited scope. They can only be used to build a very basic picture of someone’s contact network, a process sometimes called “connecting the dots”. Clusters of people in highly connected groups become apparent, as do people with few connections who appear to be the intermediaries between such groups. The idea is to see how many links or “degrees” separate people from, say, a member of a blacklisted organisation.
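
(“Connecting the dots” in this sense is ordinary graph analysis: a breadth-first search over the contact network yields the number of “degrees” between two people. A minimal Python sketch over an invented network – not anything resembling the NSA’s actual tooling:)

    # Minimal sketch of "degrees of separation" over a contact network.
    # The network below is invented for illustration.
    from collections import deque

    contacts = {
        "alice": {"bob", "carol"},
        "bob": {"alice", "dave"},
        "carol": {"alice"},
        "dave": {"bob", "eve"},
        "eve": {"dave"},
    }

    def degrees(graph, start, target):
        """Breadth-first search: number of links separating two people."""
        seen, queue = {start}, deque([(start, 0)])
        while queue:
            person, dist = queue.popleft()
            if person == target:
                return dist
            for friend in graph.get(person, ()):
                if friend not in seen:
                    seen.add(friend)
                    queue.append((friend, dist + 1))
        return None  # no connection found

    print(degrees(contacts, "alice", "eve"))  # -> 3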

By adding online social networking data to its phone analyses, the NSA could connect people at deeper levels, through shared activities, such as taking flying lessons. Typically, online social networking sites ask members to enter details of their immediate and extended circles of friends, whose blogs they might follow. People often list other facets of their personality including political, sexual, entertainment, media and sporting preferences too. Some go much further, and a few have lost their jobs by publicly describing drinking and drug-taking exploits. Young people have even been barred from the orthodox religious colleges that they are enrolled in for revealing online that they are gay.

“You should always assume anything you write online is stapled to your resumé. People don’t realise you get Googled just to get a job interview these days,” says Callas.

Other data the NSA could combine with social networking details includes information on purchases, where we go (available from cellphone records, which cite the base station a call came from) and what major financial transactions we make, such as buying a house.

Right now this is difficult to do because today’s web is stuffed with data in incompatible formats. Enter the semantic web, which aims to iron out these incompatibilities over the next few years via a common data structure called the Resource Description Framework (RDF). W3C hopes that one day every website will use RDF to give each type of data a unique, predefined, unambiguous tag.

“RDF turns the web into a kind of universal spreadsheet that is readable by computers as well as people,” says David de Roure at the University of Southampton in the UK, who is an adviser to W3C. “It means that you will be able to ask a website questions you couldn’t ask before, or perform calculations on the data it contains.” In a health record, for instance, a heart attack will have the same semantic tag as its more technical description, a myocardial infarction. Previously, they would have looked like separate medical conditions. Each piece of numerical data, such as the rate of inflation or the number of people killed on the roads, will also get a tag.
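(To make the “universal spreadsheet” idea concrete, here is a minimal Python sketch using the open-source rdflib library; the ex: vocabulary and the records are invented for illustration, not any real health-record schema:)

    # Minimal sketch of RDF as machine-readable tagged data, via rdflib.
    # The ex: vocabulary and the records are invented for illustration.
    from rdflib import Graph, Namespace

    EX = Namespace("http://example.org/health#")

    turtle = """
    @prefix ex: <http://example.org/health#> .
    ex:patient42 ex:diagnosis ex:MyocardialInfarction .
    ex:MyocardialInfarction ex:commonName "heart attack" .
    """

    g = Graph()
    g.parse(data=turtle, format="turtle")

    # One unambiguous tag means a query for "heart attack" also finds the
    # record filed under the technical term, myocardial infarction.
    query = """
    SELECT ?patient WHERE {
        ?patient ex:diagnosis ?d .
        ?d ex:commonName "heart attack" .
    }
    """
    for row in g.query(query, initNs={"ex": EX}):
        print(row.patient)  # -> http://example.org/health#patient42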

The advantages for scientists, for instance, could be huge: they will have unprecedented access to each other’s experimental datasets and will be able to perform their own analyses on them. Searching for products such as holidays will become easier as price and availability dates will have smart tags, allowing powerful searches across hundreds of sites.

On the downside, this ease of use will also make prying into people’s lives a breeze. No plan to mine social networks via the semantic web has been announced by the NSA, but its interest in the technology is evident in a funding footnote to a research paper delivered at the W3C’s WWW2006 conference in Edinburgh, UK, in late May.

That paper, entitled Semantic Analytics on Social Networks, by a research team led by Amit Sheth of the University of Georgia in Athens and Anupam Joshi of the University of Maryland in Baltimore, reveals how data from online social networks and other databases can be combined to uncover facts about people. The footnote said the work was part-funded by an organisation called ARDA.

What is ARDA? It stands for Advanced Research Development Activity. According to a report entitled Data Mining and Homeland Security, published by the Congressional Research Service in January, ARDA’s role is to spend NSA money on research that can “solve some of the most critical problems facing the US intelligence community”. Chief among ARDA’s aims is to make sense of the massive amounts of data the NSA collects – some of its sources grow by around 4 million gigabytes a month.

The ever-growing online social networks are part of the flood of internet information that could be mined: some of the top sites like MySpace now have more than 80 million members.

The research ARDA funded was designed to see if the semantic web could be easily used to connect people. The research team chose to address a subject close to their academic hearts: detecting conflicts of interest in scientific peer review. Friends cannot peer review each other’s research papers, nor can people who have previously co-authored work together.

So the team developed software that combined data from the RDF tags of online social network Friend of a Friend (www.foaf-project.org), where people simply outline who is in their circle of friends, and a semantically tagged commercial bibliographic database called DBLP, which lists the authors of computer science papers.

Joshi says their system found conflicts between potential reviewers and authors pitching papers for an internet conference. “It certainly made relationship finding between people much easier,” Joshi says. “It picked up softer [non-obvious] conflicts we would not have seen before.”
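
(The underlying check is simple once both datasets share a common format: intersect the submitting author’s FOAF circle and DBLP co-author list with the reviewer pool. A minimal pure-Python sketch with invented names – far cruder than the semantic-web system the team actually built:)

    # Minimal sketch of the conflict-of-interest check described above:
    # a reviewer conflicts with an author if they are FOAF "friends" or
    # DBLP co-authors. All names and records here are invented.

    foaf_knows = {
        "a.smith": {"b.jones", "c.wu"},
        "d.khan": {"e.lopez"},
    }
    dblp_coauthors = {
        "a.smith": {"f.garcia"},
        "d.khan": {"b.jones"},
    }

    def conflicts(author, reviewers):
        """Reviewers who are friends with, or co-authors of, the author."""
        related = foaf_knows.get(author, set()) | dblp_coauthors.get(author, set())
        return related & set(reviewers)

    reviewers = ["b.jones", "e.lopez", "g.ito"]
    print(conflicts("a.smith", reviewers))  # -> {'b.jones'}
    print(conflicts("d.khan", reviewers))   # -> {'b.jones', 'e.lopez'}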

The technology will work in exactly the same way for intelligence and national security agencies and for financial dealings, such as detecting insider trading, the authors say. Linking “who knows who” with purchasing or bank records could highlight groups of terrorists, money launderers or blacklisted groups, says Sheth.

The NSA recently changed ARDA’s name to the Disruptive Technology Office. The DTO’s interest in online social network analysis echoes the Pentagon’s controversial post-9/11 Total Information Awareness (TIA) initiative. That programme, designed to collect, track and analyse online data trails, was suspended after a public furore over privacy in 2002. But elements of the TIA were incorporated into the Pentagon’s classified programme under the September 2003 Defense Appropriations Act.

Privacy groups worry that “automated intelligence profiling” could sully people’s reputations or even lead to miscarriages of justice – especially since the data from social networking sites may often be inaccurate, untrue or incomplete, De Roure warns.

But Tim Finin, a colleague of Joshi’s, thinks the spread of such technology is unstoppable. “Information is getting easier to merge, fuse and draw inferences from. There is money to be made and control to be gained in doing so. And I don’t see much that will stop it,” he says.

Callas thinks people have to wise up to how much information about themselves they should divulge on public websites. It may sound obvious, he says, but being discreet is a big part of maintaining privacy. Time, perhaps, to hit the delete button.

From issue 2555 of New Scientist magazine, 09 June 2006, page 30

By IgnoranceIsntBliss
Written in Dec. 2006

To what length will Gore go to save the earth? Can the man who ‘created the Internet’ take the next step and ‘create’ the artificial-intelligence-driven Technological Singularity in time ‘to save the earth’?

“This is by far the most serious crisis civilization has ever faced”. –Al Gore [Link]

Al Gore made a good presentation, and a great documentary – the best keynote “PowerPoint”-type presentation you will find anywhere. I was impressed with some of his mentions of motivational concepts like “Boiling Frog Syndrome“, which are good concepts or counter-directives for people to apply to just about everything. I must add that BFS is actually a myth, but it’s still a great analogy for people to understand the incrementalism that goes on against what’s best for the people and the world.

It must be pointed out that because it is “trying to change your mind,” it is “propaganda“, especially because it contains certain emotional triggers. It’s not really that ‘bad’ as propaganda goes (assuming his data is accurate), as he uses personal life experiences as metaphors instead of exclusively using raw fear-driven media and dialog. A key part of his solution was actually caring about making a difference as individuals, which is one that people need exposure to, and he was rather inspirational about it.

However, there’s a darker side to this whole issue of Al Gore and his obsession with saving the planet. How far will Gore go to ensure that Earth survives under the feet of Mankind? Driven by the belief that the point of no return could be within 10 years from today, Gore has, since he “lost” the election in 2000, delivered over 1000 keynote speeches on global warming, and his film appears to be the ultimate product of his speech-giving skills.

Also, in his spare time since Election 2000, he’s been serving on the Senior Board of Directors at Google & Apple Computer. That’s a good hobby place for Gore, as he’s always had a knack for science, and he also just so happens to be driven by fears of a “Malthusian” (man-made) global warming ‘prophecy’. It’s fun to use the term prophecy here, just as it appeared fun to Al when he cited the Book of Revelation in his blockbuster film debut.

There’s no doubt that Gore is well known for his global warming endeavors, but probably even better known is how he “created” the Internet. What’s less known is his hand in helping to coin the phrase “Information Superhighway”, which was repeatedly used in TV Land’s heart-throb movie “The Cable Guy”.

This stems from the exact same time as his well-known 1986 “National Science Foundation Authorization Act”, which was, in part, for the establishment of a global warming policy. What’s typically overlooked is the second part of that same Act, which drove the NSF/DARPA “supercomputing” program – a keystone event in what became the Internet as we know it today.

Then, only 2 years later, in 1988, Gore argued to:

ESTABLISH A HIGH-CAPACITY NATIONAL RESEARCH COMPUTER NETWORK, DEVELOP AND DISTRIBUTE SOFTWARE, DEVELOP ARTIFICIAL INTELLIGENCE PROGRAMS,

Al also campaigned for U.S. President that year, and according to him, his main goal of that campaign was more about increasing awareness of global warming than it was about actually winning.

Gore’s weather hobby projects always seem to go hand in hand with his supercomputing and artificial intelligence hobby kits.

Speaking of hobby kits, Google was originally funded by the government NASA/DARPA/NSF agencies during Al’s tenure as U.S. Vice President. Not only that, but according to an ex-CIA whistleblower, the CIA was also instrumental in funding Google at this time and throughout their early money-less years.

Gore surely tapped into certain offices at the CIA for added science-project fun before “losing” the election, sort of like he did with NASA all throughout his vice presidency. It’s too bad the “Republicans” blocked the launch of his (finished) satellite, which was designed to ‘float’ at a gravitationally balanced spot between the earth and the sun and provide a nonstop 24-hour video feed, from the best possible camera angle, for free to everyone on the Internet. Perhaps the “Big Business” Republicans didn’t want there to be license-free video of earth, but that argument is of least concern here.

In more recent times, during Gore’s tenure on the Senior Board of Directors at Google, NASA has publicly announced a partnership with Google:

to work together on a variety of areas, including large-scale data management, massively distributed computing, bio-info-nano convergence, and encouragement of the entrepreneurial space industry. The new building [a 1 million square foot complex] would also include labs, offices, and housing for Google engineers.

Before NASA and Google announced this ‘merger’, Gore’s darling NASA had already been well underway with its “Intelligent Archives” (IA) project. IA is literally intended to be an artificial intelligence supercomputing platform ‘to take NASA to the moon and mars’. Its predicted capabilities include handy skills such as:

“large scale data mining”, “self-awareness”, “acting on information discovered”, “extracting new information from its data holdings” (i.e. predicting, planning, solving capabilities), “coordination between intelligent archives and intelligent sensors”, “adapting to events and anticipating user needs”, “Continuously mining archived data searching for hidden relationships and patterns”, “Identifying new data sources and information collaborators, and using available resources judiciously”, “weather prediction”, “aware of its own data content and usage”, “can extract new information from data Holdings”.

To achieve this, NASA is working hand in hand with DARPA, the agency responsible for the creation of the Internet (ARPANET) and for things like stealth technology and now AI-driven cars. DARPA’s numerous AI programs started becoming public between 2002 and 2003. DARPA functions as a sort of shadow agency: it sets sci-fi-like goals, titled “thrusts”, and then coordinates between any and all institutions within the United States to meet those goals. This menu of institutions includes virtually any and all useful university or national labs, government agencies or departments, and even U.S.-based multinational corporations.

With DARPA claiming that they’ll have their AI orgasm in 2010, and NASA claiming theirs in 2012, it’s no surprise that even Google has gone on the record stating that their goal is AI.

“The ultimate search engine would understand everything in the world. It would understand everything that you asked it and give you back the exact right thing instantly,” Mr. (Larry) Page (co-founder of Google) told an audience of the digerati representing firms from Warner Music and AOL to BSkyB and the BBC. “You could ask ‘what should I ask Larry?’ and it would tell you.”

This sings the same tune that NASA’s IA just so happens to be singing:

“Enables archive to adapt to events and anticipate user needs” [NASA]

With Gore claiming that within 10 years we’ll have passed the “point of no return” for preventing cataclysmic global warming – “mankind’s greatest crisis” – and his hobby-project AI programs coming in just 3 years, the timing couldn’t be better for Gore to ‘save the world’ with his AI ‘savior’. And what hobby kit would be complete without the world’s undisputed number-one search engine, whose “mission is to organize (all) the world’s information”?

The following quote is from Google co-founder Sergey Brin, who in the same interview discussed earth saving new power technologies:

“One of our big goals in search is to make search that really understands exactly what you want, understands everything in the world. As computer scientists, we call that artificial intelligence.” (October 26, 2005)

Gathering all of the world’s information is the mission statement, and it’s rapidly becoming reality. Google’s asset & data holdings, not counting any special access they have to government systems, are astronomical. Google has complete copies of virtually everything ever posted on the Internet (the Internet is said to contain over 6 billion pages), as well as search records & any other data they’ve compiled and archived through their other services like Gmail or Desktop. That’s roughly the same data that the other major search engines store.

Google ups the ante by scanning complete university libraries (7 million manuscripts at the University of Michigan alone) into machine-readable form. Google “Search” already understands the meaning of words and their correlations. Recently, they took it to the visual dimension by acquiring Neven Vision, the biometric tech firm that possessed the world’s finest “Machine Vision” biometric face-scanning technology, which can literally understand the people/places/objects content of images and video. Surprisingly, this world-leading biometric technology was also designed for mobile device applications, and Google has been making power-play moves all year to secure huge contracts with phone manufacturers and service providers to include Google in their phones.

Google’s ‘imagery’ background doesn’t actually begin there; it took its first major step when Google acquired Keyhole, a CIA startup funded through In-Q-Tel, the agency’s privatized technology front organization. This integrated satellite imagery into their already impressive Google Maps feature, which was so adaptable that it allowed smooth hybrid map overlays on satellite imagery.

And all of these non-search projects are said to be the result of what the “civilian” outfit known as Google holds in public display: simple side hobby tasks from all of the engineers, done in the 20 percent ‘hobby’ time away from the 80 percent of their efforts that go to “Search”.

Google and NASA are literally pooling their personnel together, and in some cases bringing people out of the woodwork to participate in this trek to go where no man has gone before. Vint Cerf, who was a top head in the ARPANET project for DARPA, joined Google as a Vice President with the title of “Chief Internet Evangelist” in 2005. Cerf is one of the primary co-creators of the ARPANET, TCP/IP, the “Internet”, Internet2 and the (still current) Interplanetary Internet project, and has affiliations with the NSF’s TeraGrid science network. Cerf and Gore go way back, and although Cerf was no longer heading the ARPANET for DARPA in the late 80’s when Gore was on the ball, Cerf still commended Gore as truly being a respectable force in the ‘creation’ of the Internet after Gore had taken heavy flak for his poor choice of words that became so famous.

They have some rather impressive ‘data holdings’ for a system with capabilities like those specified for IA, and combined with personnel such as Al Gore or Vint Cerf and virtually infinite resources, it’s hard to imagine a better hobby kit for any science enthusiast. It’s no wonder that Al presents such a positive outlook for stopping what he claims will happen within 10 years, despite the extremely high probability of him being fully aware of how our society has been programmed to be self-deceivingly careless, politically biased and ignorant for some 80 years now.

It’s rather curious that transhumanists also want A.I., and the best reason they can offer to justify it, besides the obvious techno-orgasm that would result, is to ‘save the earth’. And optimistic they are, as can be seen here with Dr. Ben Goertzel, a leading AI expert who happens to work with NIST on certain things: “Right now at least, no government in the world is trying to ban or place legal limitations on AI research” [(time) 10:20]. Goertzel shows further optimism by stating: “…and where we’d like to be, which is having a thinking machine that’s smarter than us, nice to us, and helps us solve all the world’s problems” [(time) 20:30].

It can’t be said for certain what Gore’s stance on “transhumanism” is, but according to the Immortality Institute’s “Exploring Life Extension” film, Bill Clinton said: “We’ve treated the Human Genome Project as a priority since day one because we all want to live forever”, and, “We want to live forever, and we’re getting there.” Many transhumanists’ primary goals are the rapid evolution of mankind into immortal “post-human entities”, and also Artificial Intelligence in general. AI, which will rapidly become the “Technological Singularity“, is, according to Goertzel, a critical first phase to get the rest of the techno-results in the shortest time possible.

The very nature of AI is that it will eventually be able to understand its own code and modify itself endlessly, becoming something that not even its creators will be able to comprehend, potentially in a short period of time. Optimists like Goertzel warn about the dangers of a “hard takeoff”, but oddly fail to mention anything about the Google/NASA/DARPA programs, and here’s why:

Google alone has 22,500,000 results for a simple search like “programming tutorials“. Anyone who’s dabbled in computer software programming knows that there are virtually endless pages about programming, and scores of free “GNU” software suites that anyone can download to start writing programs themselves. Since Google has virtually the entire Internet archived, they surely have all of this sort of data. When you give an intelligence “all of the world’s information”, including instructions on how to program in every software language known to man, it’s pretty safe to say that it’ll figure things out on its own.

Paul Joseph Watson/Prison Planet.com | October 27 2006

A former clandestine services officer for the CIA who also maintains close relationships with top Google representatives says that the company is “in bed with” the intelligence agency and the U.S. government. He has also gone public on his deep suspicions about the official explanation behind 9/11.

Robert David Steele appeared on the nationally syndicated Alex Jones radio show and began by voicing his deep doubts about the official 9/11 story.

While Steele stopped short of saying 9/11 was a complete inside job, he agreed that the evidence points to the overwhelming complicity of the Bush administration.

“The U.S. government did not properly investigate this and there are more rocks to be turned over,” said Steele adding, “I’m absolutely certain that WTC 7 was brought down by controlled demolition and that as far as I’m concerned means that this case has not been properly investigated.”

“There’s no way that building could have come down without controlled demolition.”

Steele pointed the finger of suspicion directly at the Vice President saying, “There’s no question in my own mind that Dick Cheney is the tar baby in this whole thing.”

Steele outlined the bizarre circumstances preceding the attack that would have greased the skids for bombs to be planted in the buildings.

“You do have the whole issue of the security cameras being disengaged, the bomb sniffing dogs being removed, the family ties with Bush – I mean if you smell a rotten fish there’s probably a rotten fish somewhere around.”

Steele’s biography is impressive. He was the second-ranking civilian (GS-14) in U.S. Marine Corps Intelligence from 1988-1992. Steele is a former clandestine services case officer for the Central Intelligence Agency.

He is the founder and president of Open Source Solutions, Inc., and is an acknowledged expert on computer and information vulnerabilities. Steele holds graduate degrees in International Relations and Public Administration from Lehigh University and the University of Oklahoma. He has also earned certificates in Intelligence Policy from Harvard University and in Defense Studies from the Naval War College.

Before the 2004 election Steele advocated the re-election of George W Bush and he has been cited by numerous Republican luminaries as a credible source. His testimony is added to the chorus of other credible 9/11 whistleblowers both in and out of government and academia.

Steele raised eyebrows when he confirmed from his contacts within the CIA and Google that Google was working in tandem with “the agency,” a claim made especially volatile by the fact that Google was recently caught censoring Alex Jones’ Terror Storm and has targeted other websites for blackout in the past.

“I think that Google has made a very important strategic mistake in dealing with the secret elements of the U.S. government – that is a huge mistake and I’m hoping they’ll work their way out of it and basically cut that relationship off,” said the ex-CIA man.

“Google was a little hypocritical when they were refusing to honor a Department of Justice request for information because they were heavily in bed with the Central Intelligence Agency, the office of research and development,” said Steele.

Steele called for more scrutiny to be placed on Google if it continues to engage in nefarious practices, saying, “If Google is indeed starting to do harm then I think it’s important that be documented and publicized.”

Ex-Agent: CIA Seed Money Helped Launch Google
Steele goes further than before in detailing ties, names Google’s CIA liaison

Paul Joseph Watson
Prison Planet
Wednesday, December 6, 2006

An ex-CIA agent has gone further than ever before in detailing Google’s relationship with the Central Intelligence Agency, claiming sources told him that CIA seed money helped get the company off the ground and naming for the first time Google’s CIA point man.

Robert David Steele, a 20-year Marine Corps infantry and intelligence officer and a former clandestine services case officer with the Central Intelligence Agency, is the CEO of OSS.net.

Speaking to the Alex Jones Show, Steele elaborated on his previous revelations by making it known that the CIA helped bankroll Google at its very inception.

“I think Google took money from the CIA when it was poor and it was starting up and unfortunately our system right now floods money into spying and other illegal and largely unethical activities, and it doesn’t fund what I call the open source world,” said Steele, citing “trusted individuals” as his sources for the claim.

“They’ve been together for quite a while,” added Steele.

Asked to impart to what level Google is “in bed” with the CIA, Steele described the bond as a “small but significant relationship,” adding, “it is by no means dominating Google – in fact Google has been embarrassed because everything the CIA asked it to do, they couldn’t do.”

“I also think it’s very very wrong of Google to have this relationship,” cautioned Steele.

The former agent went further than before in identifying by name Google’s liaison at the CIA.

“Let me say very explicitly – their contact at the CIA is named Dr. Rick Steinheiser, he’s in the Office of Research and Development,” said Steele.

Steele highlighted Google’s blatant censorship policies whereby press releases put out by credible organizations that are critical of Dick Cheney and other administration members don’t make it to Google News even though they are carried by PR Newswire.

We have repeatedly highlighted past examples of censorship on behalf of Google, including their blacklisting of a mainstream news website that was mildly critical of China, and also the deliberate stifling and manipulation of Alex Jones’ Terror Storm film ranking on Google Video. Google was also caught red-handed attempting to bury the Charlie Sheen 9/11 story at the height of its notoriety.

Saying Google had become “too big for itself,” Steele opined that Google was “long overdue for a public audit.”

“One of the problems with privatized power is that it’s not subject to public audit,” said Steele, arguing that groups should rally to “put Google out of business unless they’re willing to go the open source software route.”

We regularly highlight Google’s damaging role in aiding the march towards a big brother society, but the admission that Google was planning on teaming up with the U.S. government to use microphones in the computers of an estimated 150 million-plus Internet-active Americans – to spy on their lifestyle choices and build psychological profiles to be used for surveillance, Minority Report-style invasive advertising and data mining – astounded even us.

Steele said that our previous story about Google’s ties to the CIA, which was picked up by dozens of top technology websites, concerned Google enough to lie to the public about it and deny its validity.

It remains to be seen how Google will react to these latest revelations.

Listen to the interview with Robert David Steele, in which he also questions the official version of 9/11.

Google actively aiding intelligence agencies?
Nate Anderson / Ars Technica | October 31 2006

Former intelligence officer Robert David Steele recently appeared on the Alex Jones show to make the provocative claim that Google is currently cooperating with secret elements in the US government, including the CIA.

Steele, who now runs OSS.net and is a proponent of open source intelligence, said that “Google has made a very important strategic mistake in dealing with the secret elements of the U.S. government—that is a huge mistake and I’m hoping they’ll work their way out of it and basically cut that relationship off.” In his view, Google’s attempt earlier this year to avoid turning over information to the Department of Justice was little more than a hypocritical charade.

Steele has made these claims for some time; back in January, he said the same things at a conference organized by his company at which several sources came forward and spoke about the alleged cooperation. According to security site HSToday.us, which had a reporter in attendance at the conference, one unnamed security contractor “said three employees of an intelligence agency he declined to identify are in Mountain View, Calif. where Google is based, working with the company to leverage the search engine company’s user data monitoring capability in the interests of national security.”

No hard evidence for these claims was presented, and those in a position to have direct knowledge of such an arrangement have been unwilling to speak about it on the record. Google traditionally prefers not to comment on such allegations, and did not respond to our requests for comment by press time.

It’s clear that the company is not opposed to working with the intelligence and defense communities in principle. Products such as Google Earth are explicitly marketed to such industries, with Google claiming that its products allow “analysts and operatives to get the job done effectively and in record time.”

Whether the famously idealistic company is actively assisting the CIA and NSA is a different question, though, and one that remains unanswered. If the allegations have any merit, then it’s no great stretch to imagine that other leading search engines have attracted the government’s interest. Will major Internet companies like Microsoft, Yahoo, and Google turn out to be as involved in surveillance as the telecommunications companies? Or did all these shadowy sources get the story wrong?

By Leslie Miller, Associated Press Writer

WASHINGTON – President Bush, again defying Congress, says he has the power to edit the Homeland Security Department’s reports about whether it obeys privacy rules while handling background checks, ID cards and watchlists.

In the law Bush signed Wednesday, Congress stated no one but the privacy officer could alter, delay or prohibit the mandatory annual report on Homeland Security department activities that affect privacy, including complaints.

But Bush, in a signing statement attached to the agency’s 2007 spending bill, said he will interpret that section “in a manner consistent with the President’s constitutional authority to supervise the unitary executive branch.”

White House spokeswoman Dana Perino said it’s appropriate for the administration to know what reports go to Congress and to review them beforehand.

“There can be a discussion on whether to accept a change or a nuance,” she said. “It could be any number of things.”

The American Bar Association and members of Congress have said Bush uses signing statements excessively as a way to expand his power.

The Senate held hearings on the issue in June. At that time, Bush had issued some 110 statements challenging about 750 statutes passed by Congress, according to numbers combined from the White House and the Senate committee. They include documents revising or disregarding parts of legislation to ban torture of detainees and to renew the Patriot Act.

Privacy advocate Marc Rotenberg said Bush is trying to subvert lawmakers’ ability to accurately monitor activities of the executive branch of government.

“The Homeland Security Department has been setting up watch lists to determine who gets on planes, who gets government jobs, who gets employed,” said Rotenberg, executive director of the Electronic Privacy Information Center.

He said the Homeland Security Department has the most significant impact on citizens’ privacy of any agency in the federal government.

Homeland Security agencies check airline passengers’ names against terrorist watch lists and detain them if there’s a match. They make sure transportation workers’ backgrounds are investigated. They are working on several kinds of biometric ID cards that millions of people would have to carry.

The department’s privacy office has put the brakes on some initiatives, such as using insecure radio-frequency identification technology, or RFID, in travel documents. It also developed privacy policies after an uproar over the disclosure that airlines turned over their passengers’ personal information to the government.

The last privacy report was submitted in February 2005.

Bush’s signing statement Wednesday challenges several other provisions in the Homeland Security spending bill.

Bush, for example, said he’d disregard a requirement that the director of the Federal Emergency Management Agency must have at least five years’ experience and “demonstrated ability in and knowledge of emergency management and homeland security.”

His rationale was that it “rules out a large portion of those persons best qualified by experience and knowledge to fill the office.”

Software Being Developed to Monitor Opinions of U.S.

ERIC LIPTON / NY Times | October 5 2006

A consortium of major universities, using Homeland Security Department money, is developing software that would let the government monitor negative opinions of the United States or its leaders in newspapers and other publications overseas.

Such a “sentiment analysis” is intended to identify potential threats to the nation, security officials said.

Researchers at institutions including Cornell, the University of Pittsburgh and the University of Utah intend to test the system on hundreds of articles published in 2001 and 2002 on topics like President Bush’s use of the term “axis of evil,” the handling of detainees at Guantánamo Bay, the debate over global warming and the coup attempt against President Hugo Chávez of Venezuela.

A $2.4 million grant will finance the research over three years.

American officials have long relied on newspapers and other news sources to track events and opinions here and abroad, a goal that has included the routine translation of articles from many foreign publications and news services.

The new software would allow much more rapid and comprehensive monitoring of the global news media, as the Homeland Security Department and, perhaps, intelligence agencies look “to identify common patterns from numerous sources of information which might be indicative of potential threats to the nation,” a statement by the department said.

It could take several years for such a monitoring system to be in place, said Joe Kielman, coordinator of the research effort. The monitoring would not extend to United States news, Mr. Kielman said.

“We want to understand the rhetoric that is being published and how intense it is, such as the difference between dislike and excoriate,” he said.

Even the basic research has raised concern among journalism advocates and privacy groups, as well as representatives of the foreign news media.

“It is just creepy and Orwellian,” said Lucy Dalglish, a lawyer and former editor who is executive director of the Reporters Committee for Freedom of the Press.

Andrei Sitov, Washington bureau chief of the Itar-Tass news agency of Russia, said he hoped that the objective did not go beyond simply identifying threats and become an effort to stifle criticism of an American president or administration.

“This is what makes your country great, the open society where people can criticize their own government,” Mr. Sitov said.

The researchers, using a grant provided by a research group once affiliated with the Central Intelligence Agency, have compiled a database of hundreds of articles that is being used to train a computer to recognize, rank and interpret statements.

The software would need to be able to distinguish between statements like “this spaghetti is good” and “this spaghetti is not very good — it’s excellent,” said Claire T. Cardie, a professor of computer science at Cornell.

Professor Cardie ranked the second statement as a more intense positive opinion than the first.
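
(Intensity ranking of this kind is commonly approximated with a sentiment lexicon plus rules for negators and boosters; one standard trick is to model “not” as a shift in intensity rather than a flat sign flip. A minimal Python sketch with invented word scores, far cruder than the researchers’ system:)

    # Minimal sketch of lexicon-based opinion-intensity scoring.
    # Word scores, the booster list, and the negation shift are toy values.

    LEXICON = {"good": 1.0, "excellent": 2.0, "bad": -1.0}
    BOOSTERS = {"very": 0.5}   # adds to the magnitude of the next opinion word
    NEGATION_SHIFT = 1.3       # "not" shifts polarity instead of flipping it

    def intensity(sentence):
        score, negated, boost = 0.0, False, 0.0
        for word in sentence.lower().split():
            word = word.strip(".,;:!?'\"-")
            if word == "not":
                negated = True
            elif word in BOOSTERS:
                boost = BOOSTERS[word]
            elif word in LEXICON:
                value = LEXICON[word] + (boost if LEXICON[word] > 0 else -boost)
                if negated:
                    value -= NEGATION_SHIFT if value > 0 else -NEGATION_SHIFT
                score += value
                negated, boost = False, 0.0
        return score

    print(intensity("this spaghetti is good"))                             # 1.0
    print(intensity("this spaghetti is not very good -- it's excellent"))  # 2.2

(As the article notes below, a real system would also attach a confidence rating to each opinion and point back to the source text.)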

The articles in the database include work from many American newspapers and news wire services, including The Miami Herald and The New York Times, as well as foreign sources like Agence France-Presse and The Dawn, a newspaper in Pakistan.

One article discusses how a rabid fox bit a grazing cow in Romania, hardly a threat to the United States. Another item, an editorial in response to Mr. Bush’s use in 2002 of “axis of evil” to describe Iraq, Iran and North Korea, said: “The U.S. is the first nation to have developed nuclear weapons. Moreover, the U.S. is the first and only nation ever to deploy such weapons.”

The approach, called natural language processing, has been under development for decades. It is widely used to summarize basic facts in a text or to create abridged versions of articles.

But interpreting and rating expressions of opinion, without making too many errors, has been much more challenging, said Professor Cardie and Janyce M. Wiebe, an associate professor of computer science at the University of Pittsburgh. Their system would include a confidence rating for each “opinion” that it evaluates and would allow an official to refer quickly to the actual text that the computer indicates contains an intense anti-American statement.

Ultimately, the government could in a semiautomated way track a statement by specific individuals abroad or track reports by particular foreign news outlets or journalists, rating comments about American policies or officials.

Marc Rotenberg, executive director of the Electronic Privacy Information Center in Washington, said the effort recalled the aborted 2002 push by a Defense Department agency to develop a tracking system called Total Information Awareness that was intended to detect terrorists by analyzing troves of information.

“That is really chilling,” Mr. Rotenberg said. “And it seems far afield from the mission of homeland security.”

Federal law prohibits the Homeland Security Department or other intelligence agencies from building such a database on American citizens, and no effort would be made to do that, a spokesman for the department, Christopher Kelly, said. But there would be no such restrictions on using foreign news media, Mr. Kelly said.

Mr. Kielman, the project coordinator, said questions on using the software were premature because the department was just now financing the basic research necessary to set up an operating system.

Professors Cardie and Wiebe said they understood that there were legitimate questions about the ultimate use of their software.

“There has to be guidelines and restrictions on the use of this kind of technology by the government,” Professor Wiebe said. “But it doesn’t mean it is not useful. It can just as easily help the government understand what is going on in places around the world.”

Bill O’Reilly wants a Draft
From Crooksandliars.com:

O’Reilly:

“The United States needs a new strategy to deal with this ominous threat. Slugging it out in Iraq may be necessary, but there might be another way. President Bush needs to level with the American people and begin putting this country on a war footing. That means a limited draft and a major commitment to defense. The President needs to shake things up and get people’s attention.”


As Buzzflash reminds us, Bill never decided to go off to Vietnam when he did have the chance, but what the heck – he’s had a change of heart. I’d like to know what his criteria are for a “limited draft”.

View CrooksandLiars Bill O’Reilly Archive

View MediaMatters.org Bill O’Reilly Archive (VAST media archive)

That’s funny – so do the “Democrats”, but we won’t hear anything about that again until after November:

GovTrack: HR 4752: Universal National Service Act of 2006

To provide for the common defense by requiring all persons in the United States, including women, between the ages of 18 and 42 to perform a period of military service or a period of civilian service in furtherance of the national defense and homeland security, and for other purposes.