Luis Miguel Goitizolo

RE: ARE WE NOW IN THE END TIMES?
4/6/2018 4:27:32 PM

Should AI researchers kill people?

Danny Crichton

AI research is increasingly being used by militaries around the world for offensive and defensive applications. This past week, groups of AI researchers began to fight back against two separate programs located halfway around the world from each other, generating tough questions about just how much engineers can affect the future uses of these technologies.

From Silicon Valley, The New York Times published an internal protest memo written by several thousand Google employees, which vociferously opposed Google’s work on a Defense Department-led initiative called Project Maven, which aims to use computer vision algorithms to analyze vast troves of image and video data.

As the department’s news service quoted Marine Corps Col. Drew Cukor last year about the initiative:

“You don’t buy AI like you buy ammunition,” he added. “There’s a deliberate workflow process and what the department has given us with its rapid acquisition authorities is an opportunity for about 36 months to explore what is governmental and [how] best to engage industry [to] advantage the taxpayer and the warfighter, who wants the best algorithms that exist to augment and complement the work he does.”

Google’s employees are demanding that the company step back from exactly that sort of partnership, writing in their memo:

Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics. The argument that other firms, like Microsoft and Amazon, are also participating doesn’t make this any less risky for Google. Google’s unique history, its motto Don’t Be Evil, and its direct reach into the lives of billions of users set it apart.

Meanwhile, in South Korea, there is growing outrage over a joint program to develop offensive robots created by the country’s top engineering university KAIST — the Korea Advanced Institute of Science and Technology — and the Korean conglomerate Hanwha, which among other product lines is one of the country’s largest producers of munitions. Dozens of AI academics around the world have initiated a protest of the collaboration, writing that:

At a time when the United Nations is discussing how to contain the threat posed to international security by autonomous weapons, it is regrettable that a prestigious institution like KAIST looks to accelerate the arms race to develop such weapons. We therefore publicly declare that we will boycott all collaborations with any part of KAIST until such time as the President of KAIST provides assurances, which we have sought but not received, that the Center will not develop autonomous weapons lacking meaningful human control.

Here’s the thing: These so-called “killer robots” are seriously the least of our concerns. Such offensive technology is patently obvious, and researchers are free to decide whether or not they want to participate in such endeavors.

The wider challenge for the field is that all artificial intelligence research is as applicable to offensive technologies as it is to improving the human condition. The entire research program around AI is to create new capabilities for computers to perceive, predict, decide and act without human intervention. For researchers, the best algorithms are idealized and generalizable, meaning that they should apply to any new subject with some tweaks and maybe more training data.

Practically, there is no way to prevent these newfound capabilities from entering offensive weapons. Even if the best researchers in the world refused to work on technologies that abetted offensive weapons, others could easily take these proven models “off the shelf” and apply them relatively straightforwardly to new applications. That’s not to say that battlefield applications don’t have their own challenges that need to be figured out, but developing core AI capabilities is the critical block in launching these sorts of applications.
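
To make the “off the shelf” point concrete, below is a minimal sketch of what that reuse looks like in practice, assuming Python with the open-source PyTorch and torchvision libraries (the article names neither, and the image filename is a placeholder): a pretrained, publicly distributed object-detection model is downloaded and run on an arbitrary photo with no new training and no domain expertise.

import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Download a publicly distributed, pretrained object detector.
model = fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# Any photo will do; "street_scene.jpg" is a placeholder filename.
image = Image.open("street_scene.jpg").convert("RGB")
tensor = transforms.ToTensor()(image)

# One forward pass returns boxes, class labels and confidence scores,
# with no training or tuning by the person running the script.
with torch.no_grad():
    detections = model([tensor])[0]

for box, label, score in zip(detections["boxes"],
                             detections["labels"],
                             detections["scores"]):
    if score > 0.8:  # keep only confident detections
        print(label.item(), round(score.item(), 2), box.tolist())

The same few lines, pointed at a different image stream, are the sense in which a proven, generalizable model carries over to new applications, benign or otherwise.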

AI is a particularly vexing problem of dual-use — the ability of a technology to be used for both positive applications and negative ones. A good example is nuclear theory, which can be used to massively improve human healthcare through magnetic resonance imaging and power our societies with nuclear power reactors, or it can be used in a bomb to kill hundreds of thousands.

AI is challenging because, unlike, say, nuclear weapons, which require unique hardware that signals their development to other powers, it has no such requirements. For all the talk of Tensor Processing Units, the key innovations in AI are mathematical and software-based; hardware matters mainly for performance optimization. We could build an autonomous killing drone today with a consumer-grade drone, a robotic gun trigger and computer vision algorithms downloaded from GitHub. It may not be perfect, but it would “work.” In this way, AI is similar to bioweapons, which can likewise be built with standard lab equipment.

Short of stopping the development of artificial intelligence capabilities entirely, this technology is going to get built, which means it will be entirely possible to build these weapons and launch them against adversaries.

In other words, AI researchers are going to kill people, whether they like it or not.

Given that context, the right mode for organizing isn’t to stop Google from working with the Pentagon; it is to encourage Google, which is among the most effective lobbying forces in Washington, to push for more international negotiations to ban these sorts of offensive weapons in the first place. Former Alphabet chairman Eric Schmidt chairs the Defense Innovation Board, and has a perfect perch from which to make these concerns known to the right policymakers. Such negotiations have been effective in limiting bioweapons, chemical warfare and weapons in outer space, even during the height of the Cold War. There is no reason to believe that success is out of reach.

That said, one challenge with this vision is competition from China. China has made autonomous warfare a priority, investing billions into the industry in pursuit of new tools to fight American military hegemony. Even if the U.S. and the world wanted to avoid these weapons, we may not have much of a choice. I, for one, would prefer to see the world’s largest dictatorship not acquire these weapons without any sort of countermeasure from the democratic world.

It’s important to note, though, that such fears about war and technology are hardly new. Computing power was at the heart of the “precision” bombing campaigns in Vietnam throughout the 1960s, and significant campus protests were focused on stopping newly founded computation centers from conducting their work. In many cases, classified research was banned from campus, and ROTC programs were similarly removed, only to be reinstated in recent years. The Pugwash conferences were conceived in the 1950s as a forum for scientists concerned about the global security implications of emerging technologies, namely nuclear energy.

These debates will continue, but we need to be aware that all AI developments will likely lead to better offensive weapons capabilities. Better to accept that reality today and work to protect the ethical norms of war than try to avoid it, only to discover that other adversaries have taken the AI lead — and international power with it.

Image Credits: CARL COURT/AFP / Getty Images


(techcrunch.com)


"Choose a job you love and you will not have to work a day in your life" (Confucius)

Luis Miguel Goitizolo

RE: ARE WE NOW IN THE END TIMES?
4/6/2018 5:12:10 PM

3,100 Google employees to CEO Sundar Pichai: ‘Google should not be in the business of war’

CNBC


Beck Diefenbach | Reuters
Google CEO Sundar Pichai takes the stage during the presentation of new Google hardware in San Francisco on Oct. 4, 2016.

Google employees wrote a letter to their boss, CEO Sundar Pichai, urging the tech giant not to be involved in creating technology that will potentially be used for warfare.

"We believe that Google should not be in the business of war," reads the letter, which was
obtained by The New York Times and published Wednesday.

The letter, which is currently being circulated on an internal communication server among Google employees and has been for "several weeks," has collected 3,100 signatures so far, according to the Times.

Employees are upset about a partnership between Google and the United States Department of Defense called Project Maven. The conflict was first reported by Gizmodo in March. Project Maven involves Google developing artificial intelligence surveillance to help the military analyze video footage captured by U.S. government drones "to detect vehicles and other objects, track their motions, and provide results to the Department of Defense," the letter explains.

Google employees want Pichai to formally end the partnership.

"We ask that Project Maven be cancelled, and that Google draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology,"
the letter states.

Google employees are concerned that the tech giant's involvement in the development of the technology will hurt the company's reputation.

"This plan will irreparably damage Google's brand and its ability to compete for talent. Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust," the letter says. "Google's unique history, its motto Don't Be Evil, and its direct reach into the lives of billions of users set it apart."


Google says it encourages its employees to speak up, and it is addressing the issue.

"An important part of our culture is having employees who are actively engaged in the work that we do," a Google spokesperson tells
CNBC Make It.

"Any military use of machine learning naturally raises valid concerns. We're actively engaged across the company in a comprehensive discussion of this important topic and also with outside experts, as we continue to develop our policies around the development and use of our machine learning technologies," Google says.

The letter does mention an internal meeting about Project Maven during which Google Cloud CEO Diane Greene allayed some of the employees' specific fears, but they are still worried about unintended consequences.

"Recently, Googlers voiced concerns about Maven internally. Diane Greene responded, assuring them that the technology will not 'operate or fly drones' and 'will not be used to launch weapons.' While this eliminates a narrow set of direct applications, the technology is being built for the military, and once it's delivered it could easily be used to assist in these tasks," the letter states.

Despite the reference to the meeting in the letter, a Google spokesperson told The New York Times that "most" of the protest signatures were collected before the company had a chance to explain its involvement with Project Maven.

Further, Google says the work it is doing on Project Maven is going to help people, not hurt them.

"Maven is a well publicized DoD project and Google is working on one part of it — specifically scoped to be for non-offensive purposes and using open-source object recognition software available to any Google Cloud customer. The models are based on unclassified data only. The technology is used to flag images for human review and is intended to save lives and save people from having to do highly tedious work," Google says in its statement to CNBC Make it.


Still, for thousands of Google employees, any involvement with the Department of Defense is too dangerous.

"We cannot outsource the moral responsibility of our technologies to third parties. Google's stated values make this clear:
Every one of our users is trusting us. Never jeopardize that. Ever," the letter says. "This contract puts Google's reputation at risk and stands in direct opposition to our core values. Building this technology to assist the US Government in military surveillance — and potentially lethal outcomes — is not acceptable."

(cnbc.com)

"Choose a job you love and you will not have to work a day in your life" (Confucius)

Luis Miguel Goitizolo

RE: ARE WE NOW IN THE END TIMES?
4/6/2018 6:11:56 PM

WND EXCLUSIVE

SCARY, HOT NEW DRUG CALLED ‘SPIRIT MOLECULE’

Johns Hopkins studies effects of DMT, including encounters with ‘autonomous beings’

WASHINGTON – It’s one of the most popular new hallucinogenic drugs being smoked by young people around the world, with many users reporting “spiritual encounters” with entities that sound a lot like demons.

And now, Johns Hopkins University is studying the alarming effects of DMT, or Dimethyltryptamine, asking volunteers who have taken the drug about encounters they may have had with “seemingly autonomous beings or entities.”

One researcher characterized DMT as a “spirit molecule.”

So bizarre are the effects that users have described crossing into another world and seeing “machine elves,” aliens or cartoon cats. One woman reported being swarmed by apparently alive Slinky toys.

The research team is headed by Roland Griffiths, a behavioral biologist with a history of studying psychedelic substances that produce “mystical-type and near-death experiences.”

DMT is a “tryptamine” molecule that occurs naturally in many plants and animals but has been synthesized since the 1930s. Users often smoke the substance and report almost immediate hallucinogenic effects.

One regular user, Terence McKenna, had this to say about the experience: “What arrests my attention is the fact that this space is inhabited. You break into this space and are immediately swarmed by squeaking, self-transforming elf-machines … made of light and grammar and sound that come chirping and squealing and tumbling toward you.”


Read more at http://www.wnd.com/2018/04/scary-hot-new-drug-called-spirit-molecule/#GZLhVrVGtriojYZ9.99




"Choose a job you love and you will not have to work a day in your life" (Confucius)

Luis Miguel Goitizolo

RE: ARE WE NOW IN THE END TIMES?
4/6/2018 6:47:54 PM
Earthquake that rattled L.A. was most powerful in years



The magnitude 5.3 earthquake that rattled Southern California on Thursday was the strongest in the region in several years.

Though there were no immediate reports of damage, the quake was felt across a wide area and was a blunt reminder that California is earthquake country. The U.S. Geological Survey put the epicenter about 23 miles off the Channel Islands, about 85 miles west of Los Angeles.


It was centered near the Eastern Santa Cruz Basin Fault Zone, Caltech seismologist Egill Hauksson said. "Earthquakes happen out there now and again. There's a major offshore fault system," he said.

Seismologist Lucy Jones said on Twitter that the fault system "moves Southern California around a bend of the San Andreas fault."

There is a slightly greater likelihood that the temblor could trigger a larger earthquake, but that chance decreases with time, Hauksson said.

The last big earthquake in the Channel Islands region before Thursday's temblor was a magnitude 6.0 in 1981, Hauksson said. A magnitude 4.8 quake struck near the islands in 2013.

The last quake to be felt this widely in the L.A. area was a magnitude 4.4 in Encino in 2014. That quake also shook a wide area and was the largest in the Los Angeles area in four years. It was the strongest to hit directly under the Santa Monica Mountains in 80 years.


Damage from the magnitude 6.8 earthquake that struck Santa Barbara in 1925. (Los Angeles Times)

The Santa Barbara area is home to a number of earthquake faults, the largest of which is the Santa Ynez fault, which is 80 miles long and runs just north of the city. That fault is believed to be capable of triggering an earthquake as powerful as 7.5.

The great Santa Barbara quake of 1925, recorded as a magnitude 6.8, destroyed a significant portion of the city's downtown area, damaged rail lines and caused extensive landslides on bluffs. It killed 13 people and was felt as far away as Orange County.

Since the Easter Sunday earthquake of 2010 that hit along the California-Mexico border, there have been 14 earthquakes of magnitude 5 or greater in Southern California. Hauksson estimated that perhaps about half of them were felt in Los Angeles.


UPDATES:

1:50 p.m.: This article was updated throughout with additional details and background.

1:25 p.m.: This article was updated with information on the magnitude 6.0 quake that struck the Channel Islands in 1981.

This article was originally posted at 12:55 p.m.


"Choose a job you love and you will not have to work a day in your life" (Confucius)

Luis Miguel Goitizolo

RE: ARE WE NOW IN THE END TIMES?
4/7/2018 12:14:36 AM

Trump threatens China with $100bn more in tariffs as response to Beijing’s ‘unfair retaliation’

Edited time: 6 Apr, 2018 05:29


US President Donald Trump speaks to reporters aboard Air Force One, April 5, 2018. © Kevin Lamarque / Reuters

Donald Trump has instructed the US Trade Representative to consider slapping China with an additional $100 billion in tariffs, accusing China of engaging in “unfair retaliation” instead of backing down to Washington’s pressure.

“Rather than remedy its misconduct, China has chosen to harm our farmers and manufacturers. In light of China's unfair retaliation, I have instructed the USTR to consider whether $100 billion of additional tariffs would be appropriate under section 301 and, if so, to identify the products upon which to impose such tariffs,” Trump’s statement, released by the White House, said.

Earlier this week Beijing announced that it was considering a mirror response, after the United States Trade Representative (USTR) released a preliminary list of Chinese products, totalling some $50 billion, which it plans to slap with increased tariffs, under Trump’s order.

Despite Beijing’s repeated warnings that it would proportionally respond to any US moves, and its calls for negotiations to avoid an escalation of a trade war, Donald Trump on Thursday decided to adopt a harsher policy. While instructing the USTR to consider new measures, he once again cited Section 301 of the Trade Act of 1974, which had previously “determined that China has repeatedly engaged in practices to unfairly obtain America's intellectual property.”

Somewhat downplaying this new round in the trade spat, which is likely to further impact global stock markets, Trump claimed that he was still ready to have discussions with China to achieve a “free, fair, and reciprocal trade and to protect the technology and intellectual property of American companies and American people.”

“Trade barriers must be taken down to enhance economic growth in America and around the world. I am committed to enabling American companies and workers to compete on a level playing field around the world, and I will never allow unfair trade practices to undermine American interests,” the White House statement reads.

The trade dispute between Washington and Beijing sharply escalated this week, after the Trump administration on Tuesday announced 25 percent tariffs on some 1,300 industrial, technology, transport, and medical products. In response, Beijing said that it will target 106 American products, including soybeans, automobiles and chemicals. Both sets of measures have yet to come into effect.

Trump’s call for new measures against China comes a day after White House economic adviser Larry Kudlow said that he expected that the United States and China would work out their trade differences. “I believe that the Chinese will back down and will play ball,” Kudlow commented.


(RT)

"Choose a job you love and you will not have to work a day in your life" (Confucius)
