Artificial Intelligence: Real, Surreal and Scary

AI.1

You awake one morning to find your brain has another lobe functioning. Invisible, this auxiliary lobe answers your questions with information beyond the realm of your own memory, suggests plausible courses of action, and asks questions that help bring out relevant facts. You quickly come to rely on the new lobe so much that you stop wondering how it works. You just use it. This is the dream of artificial intelligence. — BYTE, April 1985

Artificial intelligence (AI) is based on the premise that human intelligence “can be so precisely described that a machine can be made to simulate it[i].” Artificial intelligence is not so artificial. AI is being actively deployed by the private and public sectors in our modern-day world. This raises questions about the ethics of AI applications and the lack of national and international regulations protecting against abuses.

Historical Context

By the middle of the 1960s, AI research in the United States (U.S.) was already heavily funded by the Department of Defense (DoD). By 1985, the AI market exceeded a billion dollars. In the 1990s and early 21st century, public sector AI successes took place behind the scenes while private sector development vacillated. That has changed[ii]. Today, according to Eric Horvitz, “over a quarter of all attention and resources” at Microsoft Research alone are focused on artificial intelligence[iii].

With nanotechnology advances, AI is all but self-replicating:

Just a few years ago, artificial intelligence was a field starved for funding, rife with skepticism, and distinguished not by its achievements but by its perennial disappointments. Now machines have the capability to learn, build things, answer questions, and yes, even harm people [iv].

Sound Byte

Today, AI is used in logistics, data mining, medical diagnosis, military operations, and in law enforcement surveillance operations.

Perhaps the best-known application of AI came when IBM’s supercomputer Watson won on Jeopardy! [v].

The next best example is the text-to-speech program used by physicist and Professor Stephen Hawking, who suffers from Amyotrophic Lateral Sclerosis. The updated program upon which he relies was a collaborative effort between Intel and SwiftKey. Their combined technology is preprogrammed to learn “how the professor thinks and suggests the words he might want to use next[vi].” This, in turn, subliminally suggests actions, with the program learning from each prior user action.
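For illustration only, and without suggesting that this is how the proprietary Intel/SwiftKey software actually works, word prediction of this kind can be reduced to a simple frequency model that learns from a user’s prior text. A minimal sketch in Python (the class name and sample text are invented for the example):

```python
from collections import Counter, defaultdict

class NextWordSuggester:
    """Toy bigram model: learns which words tend to follow which from prior text."""

    def __init__(self):
        # For each word, count the words observed to follow it.
        self.following = defaultdict(Counter)

    def learn(self, text):
        words = text.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.following[current][nxt] += 1

    def suggest(self, word, k=3):
        # Return the k most frequent followers of `word` seen so far.
        return [w for w, _ in self.following[word.lower()].most_common(k)]

suggester = NextWordSuggester()
suggester.learn("the black hole emits radiation and the black hole evaporates")
print(suggester.suggest("black"))  # ['hole']
print(suggester.suggest("the"))    # ['black']
```

Each call to learn() refines the counts, which is the sense in which such a program “learns from each prior user action”; production systems replace the raw counts with far more sophisticated neural language models.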

This is all based upon artificial neural networks that emulate the brain’s own. Microsoft Chief Research Scientist Christopher Bishop stated, “Recreating the cognitive capabilities of the brain in an artificial system is a tantalizing challenge, and a successful solution will represent one of the most profound inventions of all time[vii].”

These features distinguish AI from the generic predictive-text programs commonly loaded onto smartphones[viii]. That being said, user-specific AI programs are being used in smartphones. They can be covertly installed and even monitored by drones[ix].

This writer has firsthand knowledge of this AI application, the research for this paper having been completed on an AI program installed on her iPhone 6.

Research and Development

Most AI research is focused on developing technologies to benefit society. Areas of focus include making battlefields safer, preventing accidents and reducing medical errors[x].

AI is a vast multi-disciplinary area encompassing all of the computer sciences, mathematics, traditional and artificial psychology[xi], linguistics, data analytics, the neurosciences, anthropology, history, and even philosophy.

The ontology, or ‘fund of knowledge’, required to implement AI includes an extensive and malleable knowledge of the world. AI programs need to apply this knowledge bank to objects, properties, categories and relationships between objects; events, states and time; cause and effect; specific knowledge about the end user; and many other domains.
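What such a ‘fund of knowledge’ might look like in machine form can be sketched, very roughly, as a store of subject-predicate-object facts. The fragment below is a toy illustration only; the facts and names in it are invented for the example:

```python
# Toy knowledge bank: facts stored as (subject, predicate, object) triples,
# covering objects, categories, relationships, cause and effect, and the user.
facts = {
    ("Watson", "is_a", "computer_system"),
    ("computer_system", "is_a", "object"),
    ("fire", "causes", "smoke"),
    ("user", "prefers", "formal_language"),  # specific knowledge about the end user
}

def query(predicate, subject=None):
    """Return every fact with the given predicate (optionally for one subject)."""
    return [f for f in facts
            if f[1] == predicate and (subject is None or f[0] == subject)]

print(query("is_a", "Watson"))  # [('Watson', 'is_a', 'computer_system')]
print(query("causes"))          # [('fire', 'causes', 'smoke')]
```

Real ontologies, such as OWL-based knowledge bases or Cyc, add typed categories, time and inference rules, but the principle of applying stored relationships to new inputs is the same.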

This wealth of data is employed in developing specific AI algorithms:

[These algorithms are] designed to make high-stakes decisions in real time.

The real innovation is that these algorithms emulate the human brain, amplifying its capabilities through the instantaneous collaboration of a network of intelligent systems that could be able to learn from their experience [xii].

The greatest challenge is reaching the point where science can, or for that matter will, confirm that AI reasoning mimics general intelligence. This raises ethical issues as to who, or what, is in control.

Once deployed, AI is supposed to be closely monitored and controlled by a set of statistical checks and balances, the most notable of which are the following (a rough code sketch follows the list):

  • Validity: ensuring that the AI system maintains normal behavior that does not contradict the requirements defined in the design phase;

  • Control: enabling human control over the AI system after it begins to operate, for example to change requirements; and

  • Reliability: ensuring the reliability of the predictions made by the AI system[xiii].
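The sketch below shows, in skeletal form, how those three checks might be wired around a deployed model. The class, thresholds and rules are assumptions invented for illustration, not any agency’s actual implementation:

```python
class MonitoredModel:
    """Hypothetical wrapper enforcing validity, control and reliability checks
    around a deployed model (any object with predict(x) -> (label, confidence))."""

    def __init__(self, model, valid_outputs, min_confidence=0.8):
        self.model = model
        self.valid_outputs = valid_outputs  # validity: outputs permitted by the design requirements
        self.min_confidence = min_confidence
        self.enabled = True                 # control: a human-operated switch

    def disable(self):
        # Control: a human operator can halt the system after it begins to operate.
        self.enabled = False

    def predict(self, x):
        if not self.enabled:
            raise RuntimeError("system disabled by human operator")
        label, confidence = self.model.predict(x)
        # Validity: reject behavior that contradicts design-phase requirements.
        if label not in self.valid_outputs:
            raise ValueError(f"output {label!r} violates design requirements")
        # Reliability: route low-confidence predictions to human review.
        if confidence < self.min_confidence:
            return label, "flagged for human review"
        return label, "accepted"
```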

There can be both passive and active human monitoring.  This applies to the smartphone example above.

Microsoft Research’s Chief Eric Horvitz stated he believed that “…intelligent machines could achieve consciousness[xiv].”

“The question then becomes whether two intelligences can co-exist. If our past and present history is any indication…the future doesn’t bode well[xv],” rued physics professor Marcelo Gleiser.

Critics and Supporters

A team of Oxford University researchers recently issued a scathing report:

Artificial Intelligence (AI) seems to be possessing huge potential to deliberately work towards extinction of the human race. Though, synthetic biology and nanotechnology along with AI could be possibly be an answer to many existing problems however if used in wrong way it could probably be the worst tool against humanity [xvi].

Last year, following upgrades to his own AI system, Professor Hawking gave an interview to the BBC discussing the pros and cons of AI. He explained that success in AI development would be the “biggest event in human history” and, further, that human beings could not afford to underestimate the risks[xvii].

“The development of full artificial intelligence could spell the end of the human race,” predicted Professor Hawking, because it “would take off on its own, and re-design itself at an ever increasing rate.”

This same concern has been voiced time and again by other industry experts. Consider, for instance, the foreboding words of Elon Musk, the founder of Tesla Motors, SpaceX and SolarCity, spoken at the 2014 MIT Aeronautics and Astronautics Department’s Centennial Symposium:

I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out [xviii].

Physicist Louis Del Monte agreed, stating that, “The concern I’m raising is that the machines will view us as an unpredictable and dangerous species[xix].”

British inventor Clive Sinclair has opined that “artificial intelligence will doom mankind,” as “Once you start to make machines that are rivaling and surpassing humans with intelligence, it’s going to be very difficult for us to survive. It’s just an inevitability[xx].”

Microsoft Co-Founder Bill Gates expressed his concern:

I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned [xxi].

There is cause for concern. The DoD’s Defense Advanced Research Projects Agency (DARPA) is now funding several AI projects which “could potentially equip governments with the most powerful weapon possible: mind control” (emphasis added)[xxii].

Mr. Horvitz disagreed, stating, “There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don’t think that’s going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we’ll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life[xxiii].”

The Pentagon has refused to publicly comment.

Fielding AI

Anticipated problems stem from the thrust of current research, as well as from the scant information government sources have released to the general public. The problems are twofold: civilian use that is harmful or violates human rights, and use of AI as an autonomous military weapon.

The primary problem with civilian use was aptly described by security expert Barnaby Jack of IOActive:

[A] vulnerability of biotechnological systems, which raises concerns that BCI technologies may also potentially be vulnerable and expose an individual’s brain to hacking, manipulation and control by third parties. If the brain can control computer systems and computer systems are able to detect and distinguish brain patterns, then this ultimately means that the human brain can potentially be controlled by computer software (emphasis added)[xxiv].

An example was given in a recent media analysis:

Imagine surveillance technologies with the capacity of a human brain. Imagine surveillance technologies capable of remembering your activity, analyzing it, correlating it to other facts and/or activities, and of predicting outcomes; and now imagine such technology used to spy on us[xxv].

That example was not unfounded, as is reflected in Oxford University’s recent study:

Intelligent systems are able to “perceive” the surrounding environment and act to maximize their chances of success. For this reason the “extreme intelligences … are difficult to control and would probably act to boost their own intelligence and acquire maximal resources for almost all initial Artificial Intelligence motivations [xxvi].”

An example of unbridled civilian usage is a DoD DARPA project involving the coordinated use of CCTV traffic cameras for surveillance purposes. The AI is designed to automatically, if not instantaneously, process the video feed, extracting not only license plate identification but also the driver’s identity via facial recognition software.

This DARPA AI program can further distinguish between ‘normal’ and ‘abnormal’ behavior. The problem is that the line between abnormal and normal behavior is a value judgment contingent upon a host of uncontrollable variables. Essentially, we are entrusting a computer to hunt people down autonomously.
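The underlying technique is anomaly detection, and even a toy version makes the value-judgment problem visible: ‘normal’ is simply whatever the training sample happened to contain. A minimal sketch, in which the feature, numbers and threshold are all illustrative assumptions:

```python
import statistics

# Hypothetical feature: seconds a pedestrian lingers in view of one camera,
# sampled from people the system's designers deemed "normal".
normal_sample = [12, 15, 9, 14, 11, 13, 10, 16]

mean = statistics.mean(normal_sample)    # 12.5
stdev = statistics.stdev(normal_sample)  # about 2.45

def is_abnormal(seconds, z_threshold=3.0):
    """Flag behavior more than z_threshold standard deviations from the mean.
    Note: 'abnormal' is purely a property of the chosen sample and threshold."""
    return abs(seconds - mean) / stdev > z_threshold

print(is_abnormal(14))  # False: within this sample's notion of normal
print(is_abnormal(95))  # True: flagged, rightly or wrongly
```

Change the sample or the threshold and different people get flagged, which is precisely the value judgment described above.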

If the DoD has taken this program abroad, then the cameras involved number in the millions[xxvii].

The potential for error and, therefore, for civil rights abuses is vast.

As Prof. Selman of Cornell University stated, “That’s a bit scary[xxviii].”

The United States, followed by China, leads in developing AI for military purposes[xxix]. The most pressing concern is the development of autonomous weapons[xxx]. The DoD has long been interested in AI, believing it the path to reducing operating and human costs[xxxi].

One DoD application is early mental health intervention, especially for combat-induced post-traumatic stress disorder[xxxii]. Research has established that individuals are more likely to be candid about their emotional well-being when ‘being analyzed’ by a computer[xxxiii]. As to long-term outcomes, the Director of National Intelligence, under which all of the agencies discussed herein fall, has not released any statistics.

The Pentagon’s website indicates that artificial-intelligence projects are being pursued to provide the U.S. military with “increasingly intelligent assistance[xxxiv].”

The Need for Regulation

There are two areas mandating national and international governance: use in civilian surveillance and use as a military armament. Professor Hawking and Mr. Del Monte are amongst those calling for legislative controls, if not international treaties, governing AI usage.

Mr. Del Monte opined:

Today there’s no legislation regarding how much intelligence a machine can have, how interconnected it can be. If that continues, look at the exponential trend. We will reach the singularity in the timeframe most experts predict. From that point on you’re going to see that the top species will no longer be humans, but machines.

“For the sake of humanity, a letter was published Monday [July 28, 2015] by Stephen Hawking, Elon Musk, more than 7,000 tech watchers and luminaries, and 1,000 artificial intelligence researchers; it urged the world’s militaries to stop pursuing ever-more-autonomous robotic weapons[xxxv].”

If any major military power pushes ahead with (artificial intelligence) weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: Autonomous weapons will become the Kalashnikovs of tomorrow [the Russian assault rifle in use around the world].  Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce[xxxvi].

The letter was signed by well-respected experts in the field including:

  • Professor Stephen Hawking;
  • Oren Etzioni, chief executive officer of the Allen Institute for Artificial Intelligence in Seattle;
  • Toby Walsh, professor of artificial intelligence at the University of New South Wales in Sydney, Australia, and at Australia’s Centre of Excellence for Information Communication Technologies; and
  • Bart Selman, computer science professor at Cornell University.

One articulated concern is that autonomous weapons can be deployed to search out and destroy targets. The military maintains that there is some human control over autonomous drones, but the human element is de minimis, and what control does exist is rapidly diminishing[xxxvii].

A Redditor wrote: “This technology you are developing sounds at its essence like the centralization of knowledge intake. Ergo, whomever controls this will control what information people make their own.”

The day prior, Mr. Musk had sent the following tweet asking humanity to come together and sign a petition opposing military AI armament[xxxviii]:

Elon Musk (@elonmusk)

7/27/15, 9:41 PM

If you’re against a military AI arms race, please sign this open letter:

tinyurl.com/awletter

“The time for society to discuss this issue is right now. It’s not tomorrow[xxxix],” implored Mr. Etzioni.

_______________________________________________________

About the Author

Cynthia M. Lardner holds a journalism degree, is a licensed attorney, and is trained as a clinical therapist. Her philosophy is to collectively influence conscious global thinking, understanding that everything and everyone is subject to change given the right circumstances (Standard Theory, or the Theory of Everything).

Ms. Lardner has been using AI on her iPhones and Samsung Galaxy for over a year. While she learned about writing, geopolitics and leadership through active monitoring and behavioral conditioning, the positive aspects were far outweighed by the personal, financial and emotional damage done, the latter documented in the recent WikiLeaks files on the Milan- and Annapolis-based Hacking Team. As such, she is a strong proponent of national and international governance not only of AI but of government-sponsored hacking.

Ms. Lardner has accounts on Twitter, Facebook, Google Plus and LinkedIn, as well as accounts under the pseudonym of Deveroux Cleary, and is globally ranked in the top 1% of all account holders.

She is available for presentations and for professional consultation.

Her dream is to testify before a Senate Subcommittee and to facilitate international cooperation through the United Nations on the issue of regulating AI.

__________________________________________________

ENDNOTES

[i] “Artificial Intelligence”, Wikipedia, as found on the www at https://en.wikipedia.org/wiki/Artificial_intelligence, citing McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1 (“This is a central idea of Pamela McCorduck’s Machines Who Think. She writes: ‘I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition.’ (McCorduck 2004, p. 34) ‘Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized.’ (McCorduck 2004, p. xviii) ‘Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn’t, we have engaged for a long time in this odd form of self-reproduction.’ (McCorduck 2004, p. 3) She traces the desire back to its Hellenistic roots and calls it the urge to ‘forge the Gods.’ (McCorduck 2004, pp. 340–400).”).

[ii] Nilsson, Nils (2010). The Quest for Artificial Intelligence: A History of Ideas and Achievements. New York: Cambridge University Press. ISBN 978-0-521-12293-1.

[iii] Holley, Peter, “Bill Gates on dangers of artificial intelligence: ‘I don’t understand why some people are not concerned’”, January 29, 2015, The Washington Post, as found on the www at https://www.washingtonpost.com/blogs/the-switch/wp/2015/01/28/bill-gates-on-dangers-of-artificial-intelligence-dont-understand-why-some-people-are-not-concerned/.

[iv] Clark, Jack, and Bass, Dina, “Here’s What Inspired Top Minds in Artificial Intelligence to Get Into the Field”, July 29, 2015, Bloomberg News, as found on the www at http://www.bloomberg.com/news/articles/2015-07-29/here-s-what-inspired-top-minds-in-artificial-intelligence-to-get-into-the-field.

[v] Noë, Alva, “Artificial Intelligence, Really, Is Pseudo-Intelligence”, NPR News, November 21, 2014, as found on the www at http://www.npr.org/sections/13.7/2014/11/21/365753466/artificial-intelligence-really-is-pseudo-intelligence?sc=17&f=&utm_source=iosnewsapp&utm_medium=Email&utm_campaign=app (Alva Noë is a philosopher at the University of California at Berkeley where he writes and teaches about perception, consciousness and art.) (“Artificial intelligence isn’t synthetic intelligence: It’s pseudo-intelligence.

This really ought to be obvious. Clocks may keep time, but they don’t know what time it is. And strictly speaking, it is we who use them to tell time. But the same is true of Watson, the IBM supercomputer that supposedly played Jeopardy! and dominated the human competition. Watson answered no questions. It participated in no competition. It didn’t do anything. All the doing was on our side. We played Jeopardy! with Watson. We used ‘it’ the way we use clocks.”).

[vi] “Stephen Hawking warns artificial intelligence could end mankind”, December 2, 2014, BBC News, as found on the www at http://www.bbc.com/news/technology-30290540.

[vii] Clark, Jack, and Bass, Dina, Supra Endnote No. iv.

[viii] “Stephen Hawking warns artificial intelligence could end mankind”, December 2, 2014, BBC News, as found on the www at http://www.bbc.com/news/technology-30290540.

[ix] Williams, Lauren C., “New Drone Can Hack Into Your Smartphone To Steal Usernames And Passwords”, March 20, 2015, Think Progress, as found on the www at http://thinkprogress.org/home/2014/03/20/3416961/drones-hack/ (“A new hacker-developed drone can lift your smartphone’s private data from your GPS location to mobile applications’ usernames and passwords — without you ever knowing. The drone’s power lies with a new software, Snoopy, which can turn a benign video-capturing drone into a nefarious data thief.

Snoopy intercepts Wi-Fi signals when mobile devices try to find a network connection.

As a part of its controversial surveillance programs, the U.S. National Security Agency already uses similar technology to tap into Wi-Fi connections and control mobile devices.

With the right tools, Wi-Fi hacks are relatively simple to pull off, and are becoming more common. Personal data can even be sapped from your home’s Wi-Fi router.” (Emphasis Added)).

See also Fox-Brewster, Thomas, “Hacking Team’s $175,000 Apple Store And Google Play Surveillance Apps Flirt With Illegality”, July 7, 2015, Forbes, as found on the www at http://www.forbes.com/sites/thomasbrewster/2015/07/08/hacking-team-iphone-android-malware

(“They both contained a “Custom App Project” amidst a suite of offensive technologies, including the much-publicised Galileo tool. The first list promised a “dedicated, valid Android app published on the Play Store… that can be used to infect a controlled number of target devices” for €160,000 ($175,000). The offer for the New York attorney, drafted in April this year, was far cheaper and provided more, costing $60,000 and offering a malicious app for Apple’s App Store as well as Google’s market.”).

[x] Lardner, Richard, “5 Things to Know About Artificial Intelligence and Its Use”, July 28, 2015, Associated Press, as found on the www at http://abcnews.go.com/Technology/wireStory/things-artificial-intelligence-32743981.

[xi]  While this is not a paper on artificial psychology, having some understanding of its foundational underpinnings is essential.

See Crowder, James, and Friess, Shelli, “Artificial Psychology: The Psychology of AI”, International Multi-Conference on Informatics and Cybernetics, July 2014, Research Gate, as found on the www at http://www.researchgate.net/publication/235219143_Artificial_Psychology_The_Psychology_of_AI (“With this fully autonomous, learning, reasoning, artificially intelligent system (an artificial brain), comes the need to possess constructs in its hardware and software that mimic processes and subsystems that exist within the human brain, including intuitive and emotional memory concepts. Presented here is a discussion of the psychological constructs of artificial intelligence and how they might play out in an artificial mind.

Here we classify QCC into three components:

  • Metacognitive Knowledge (also called metacognitive awareness): what the system knows about itself as a cognitive processor (Crowder and Friess 2011).

  • Metacognitive Regulation: the regulation of cognition and learning experiences through a set of activities that help the system control its learning (Crowder and Friess 2012). This may be based on its understanding of its own ‘knowledge gaps.’”); and Friedenberg, Jay, Artificial Psychology: The Quest for What It Means to Be Human, Psychology Press, Oct 18, 2010.

See generally “Artificial psychology”, Wikipedia, as found on the www at https://en.wikipedia.org/wiki/Artificial_psychology?wprov=sfti

(“Artificial Psychology is a theoretical discipline proposed by Dan Curtis (b. 1963). The theory considers the situation when the artificial intelligence approaches the level of complexity where the intelligence meets two conditions:

Condition I

A. Makes all of its decisions autonomously.

B. Is capable of making decisions based on information that is new, abstract, incomplete.

C. The artificial intelligence is capable of reprogramming itself based on the new data.

D. And is capable of resolving its own programming conflicts, even in the presence of incomplete data. This means that the intelligence autonomously makes value-based decisions, referring to values that the intelligence has created for itself.

Condition II

All four criteria are met in situations that are not part of the original operating program

When both conditions are met, then, according to this theory, the possibility exists that the intelligence will reach irrational conclusions based on real or created information. At this point, the criteria is met for intervention which will not necessarily be resolved by simple re-coding of processes due to extraordinarily complex nature of the codebase itself; but rather a discussion with the intelligence in a format which more closely resembles classical (human) psychology.

If the intelligence cannot be reprogrammed by directly inputting new code, but requires the intelligence to reprogram itself through a process of analysis and decision based on information provided by a human, in order for it to overcome behavior which is inconsistent with the machines purpose or ability to function normally, then artificial psychology is by definition, what is required.

The level of complexity that is required before these thresholds are met is currently a subject of extensive debate. The theory of artificial psychology does not address the specifics of what those levels may be, but only that the level is sufficiently complex that the intelligence cannot simply be recoded by a software developer, and therefore dysfunctionality must be addressed through the same processes that humans must go through to address their own dysfunctionalities. Along the same lines, artificial psychology does not address the question of whether or not the intelligence is conscious.

As of 2015, the level of artificial intelligence does not approach any threshold where any of the theories or principles of artificial psychology can even be tested, and therefore, artificial psychology remains a largely theoretical discipline.”).

[xii] “Cybersecurity and Artificial Intelligence: A Dangerous Mix”, February 24, 2015, Infosec Institute, as found on the www at http://resources.infosecinstitute.com/cybersecurity-artificial-intelligence-dangerous-mix/.

[xiii] Id.

[xiv] Id.

[xv] Gleiser, Marcelo, “Are We To Become Gods, The Destroyers Of Our World?”, May 6, 2015, NPR News, as found on the www at http://www.npr.org/sections/13.7/2015/05/06/404640670/are-we-to-become-gods-the-destroyers-of-our-world?sc=17&f=&utm_source=iosnewsapp&utm_medium=Email&utm_campaign=app (Marcelo Gleiser is a theoretical physicist and cosmologist, and professor of natural philosophy, physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning.).

[xvi] Infosec, Supra Endnote No. xii.

[xvii] “Stephen Hawking warns artificial intelligence could end mankind”, December 2, 2014, BBC News, as found on the www at http://www.bbc.com/news/technology-30290540.

[xviii] Infosec, Supra Endnote No. xii; and Holley, Peter, Supra Endnote No. iii.

[xix] Id.

[xx] Holley, Peter, Supra Endnote No. iii.

[xxi] Id. See also “Now Bill Gates Is ‘Concerned’ About Artificial Intelligence”, Newsy Tech, YouTube, as found on the www at https://www.youtube.com/watch?v=OLHxQBAvWIQ&feature=youtu.be.

[xxii] “The state of artificial intelligence”, June 25, 2013, FedScoop, as found on the www at http://fedscoop.com/the-state-of-artificial-intelligence.

[xxiii] Id. See also Infosec, Supra Endnote No. xii.

[xxiv] Xynou, Maria, “Hacking without borders: The future of artificial intelligence and surveillance”, as found on the www at http://cis-india.org/internet-governance/blog/hacking-without-borders-the-future-of-artificial-intelligence-and-surveillance.

[xxv] Id.

[xxvi] Infosec, Supra Endnote No. xii.

[xxvii] Id. (“Although the CTS project was initially intended to be used for solely military purposes, its use for civil purposes, such as combating crime, remains a possibility. In 2003 DARPA stated that 40 million surveillance cameras were already in use around the world by law enforcement agencies to combat crime and terrorism, with 300 million expected by 2005. Police in the U.S. have stated that buying new technology which may potentially aid their work is an integral part of the 9/11 mentality. Considering the fact that literally millions of CCTV cameras are installed by law enforcement agencies around the world and that DARPA has developed the software that has the capability of automatically analyzing data gathered by CCTV cameras, it is very possible that law enforcement agencies are participating in the CTS network.

However if such a project was used for non-military level purposes, it could raise concerns in regards to data protection, privacy and human rights. As a massive network of surveillance cameras, the CTS ultimately could enable the sharing of footage between private parties and law enforcement agencies without individuals’ knowledge or consent. Databases around the world could be potentially linked to each other and it remains unclear what laws would regulate the access, use and retention of such databases by law enforcement agencies of multiple countries.”).

[xxviii] Lardner, Richard, Supra Endnote No. x.

[xxix] “Pentagon Wants a ‘Real Roadmap’ to Artificial Intelligence”, January 6, 2015, Next Gov Newsletter, as found on the www at http://www.nextgov.com/defense/2015/01/pentagon-wants-real-roadmap-artificial-intelligence/102297/


(“In November, Undersecretary of Defense Frank Kendall quietly issued a memo to the Defense Science Board that could go on to play a role in history.

The memo calls for a new study that would “identify the science, engineering, and policy problems that must be solved to permit greater operational use of autonomy across all war-fighting domains…Emphasis will be given to exploration of the bounds-both technological and social-that limit the use of autonomy across a wide range of military operations. The study will ask questions such as: What activities cannot today be performed autonomously? When is human intervention required? What limits the use of autonomy? How might we overcome those limits and expand the use of autonomy in the near term as well as over the next 2 decades?”).

[xxx] Lardner, Richard, Supra Endnote No. x.

[xxxi] “The state of artificial intelligence”, June 25, 2013, FedScoop, as found on the www at http://fedscoop.com/the-state-of-artificial-intelligence.

[xxxii] Id. (“In the mental health arena, DARPA has embarked upon the Detection and Computational Analysis of Psychological Signals program. The goal of the DCAPS program is to develop new analytical tools capable of evaluating the psychological status of war fighters in an attempt to improve psychological health awareness and encourage post-traumatic stress disorder sufferers to seek help earlier.”).

[xxxiii] “The computer will see you now: A virtual shrink may sometimes be better than the real thing”, August 16, 2014, The Economist, as found on the www at http://www.economist.com/news/science-and-technology/21612114-virtual-shrink-may-sometimes-be-better-real-thing-computer-will-see (“Ellie [a computer] could change things for the better by confidentially informing soldiers with PTSD that she feels they could be a risk to themselves and others, and advising them about how to seek treatment.”).

[xxxiv] Lardner, Richard, Supra Endnote No. x.

[xxxv] Tucker, Patrick, “US Drone Pilots Are As Skeptical of Autonomy As Are Stephen Hawking and Elon Musk”, July 28, 2015, Defense One, as found on the www at http://www.defenseone.com/technology/2015/07/us-drone-pilots-are-skeptical-autonomy-stephen-hawking-and-elon-musk/118680/.

[xxxvi] Lardner, Richard, Supra Endnote No. x.

[xxxvii] Tucker, Patrick, Supra Endnote No. xxxv (“The United States military maintains a rigid public stance on robot weapons. It’s enshrined in a 2012 DOD policy directive that says that autonomous weapons “shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”

But the military keeps working steadfastly at increasing the level of autonomy in drones, boats, and a variety of other weapons and vehicles. The Air Force Human Effectiveness Directorate is working on a software and hardware package called the Vigilant Spirit Control Station, which is designed to allow a single drone crew, composed primarily of a drone operator and a sensor operator, to control up to seven UAVs by allowing the UAVs to mostly steer themselves.”).

[xxxviii] Elon Musk (@elonmusk), 7/27/15, 9:41 PM, “If you’re against a military AI arms race, please sign this open letter: tinyurl.com/awletter”, as found on the www at https://twitter.com/elonmusk?lang=en. See also “Elon Musk On AI: ‘We’re Summoning The Demon’”, October 26, 2014, Newsy Tech, YouTube, as found on the www at https://www.youtube.com/watch?v=4BSsgJsmDNs&feature=youtu.be.

[xxxix] Lardner, Richard, Supra Endnote No. x.
