Applications for Artificial Intelligence

by Malachi Walker

On Wednesday, September 4, 2019, WhiteHawk President and CEO Terry Roberts moderated a panel on Applications for Artificial Intelligence (AI). The panel included:

  • Director of the Software Engineering Institute (SEI) Emerging Technology Center at Carnegie Mellon University, Dr. Matt Gaston;
  • Founding Director of the Center for Security and Emerging Technology at Georgetown University, Jason Matheny;
  • AI Portfolio Lead for the Office of Naval Research, Brett Vaughan; and
  • Chief of End User Services for the Defense Intelligence Agency, E.P. Matthew.

These experienced artificial intelligence professionals each offered a unique perspective drawn from their respective fields of study and experience. The discussion touched on how their work is currently being influenced by artificial intelligence, and where they see the potential for AI and its applications.

Military

AI Portfolio Lead Brett Vaughan shared that the United States Navy has been deeply engaged with artificial intelligence for a long time, using its advantages to assist the U.S. and its allies. A common frustration within the military is the lack of a universal AI definition, landscape, and prioritization across all branches. Vaughan hopes the Navy can lead the charge, stating that it is pursuing applications of artificial intelligence in information reuse, information acceptance, machine learning, knowledge, and planning. Vaughan is also looking for greater commitment to embracing “AI Adoption,” which he defines as using artificial intelligence as “a function of priority, technology, opportunity, and utilization.” While AI is versatile in its applications, Vaughan’s perspective suggests early adoption and prioritization of artificial intelligence can be a key asset across many military missions.

Education

Applying artificial intelligence in our everyday lives necessitates a perspective on how we are educating ourselves and learning about its capabilities. Dr. Matt Gaston explained Carnegie Mellon’s heavy involvement with artificial intelligence from an early stage. The university’s Software Engineering Institute (SEI) Emerging Technology Center (ETC) was created in 2011 to help navigate and understand intelligence technology needs, with the goal of using, integrating, and evolving technologies through AI. Through these efforts, Dr. Gaston continues to discover and enable Intelligence Community (IC) applications of artificial intelligence. AI is key for life-critical decisions across all domains of knowledge. Dr. Gaston added that continued research is necessary to eliminate bias in current AI so that future AI analytics can be programmed without bias.

Technology

From the perspective of Dr. Jason Matheny, founding director of the Center for Security and Emerging Technology, we cannot make the mistake of treating AI innovation and application like other technologies. Dr. Matheny advocates heavily for federal involvement in innovation, stating that a research and development ecosystem in AI depends greatly on federal funding and a willingness to address issues that may not otherwise receive attention.

Dr. Matheny also stressed the need to address security risks in AI during development. Where most IT systems do not take malicious adversaries into consideration, AI should begin with adversaries in mind so as not to compromise protection for innovation. Matheny has already identified adversarial techniques that need to be taken into account. These include adversarial examples, a technique used to create illusions for machine learning systems; Trojans, which alter the learning process so that the system learns the wrong information; and model corruption, where engineered training models intentionally corrupt the AI. To combat these threats and foster innovation, Matheny says further investment in and testing of systems against adversarial attacks are needed.
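
To make the first of those threats concrete, below is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft such an “illusion” for an image classifier. The panel did not reference any specific technique or code; the PyTorch framework, the fgsm_perturb function, and the epsilon value are illustrative assumptions only.

    # Minimal FGSM sketch (illustrative assumption, not from the panel): nudge each
    # pixel in the direction that most increases the model's loss, producing an image
    # that looks unchanged to a human but can flip the classifier's prediction.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.03):
        """Return an adversarially perturbed copy of `image` for `model`."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)  # how wrong the model is right now
        loss.backward()                              # gradient of the loss w.r.t. the pixels
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach() # keep pixel values in a valid range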

Intelligence

E.P. Matthew, Chief of End User Services for the Defense Intelligence Agency (DIA), prefaced his remarks by commenting that, from an IT perspective, AI is still in the infancy stage of adoption. The industry is still trying to make sense of data that is available in many forms, attempting to discern internal, singular, and external data across multiple systems. The advantage of AI, he suggests, is its ability to make correlations and assessments that humans cannot. Matthew added that AI leverages capabilities at machine speed, not human speed, and with such tremendous growth in the volume and velocity of machine-generated data, machines should be the ones to navigate the data generated by other machines. AI makes this possible.

Applications for AI Panel

While the panelists’ viewpoints and experiences differed, what united them was their belief in the usefulness of AI for large solution opportunities across their own industries and our future world. One panelist stated that applying AI to basic research can increase efficiency in an information-saturated environment by breaking down data, infrastructure, and environment and using them to build a healthy ecosystem where AI can succeed. AI was cited as a White House budget priority through the National Defense Authorization Act (NDAA): research has been increasing across agencies, and the NDAA aims to double funding in this arena. All panelists agreed that the United States continues to attract and retain the best and brightest in AI development. The advantages, according to them, are strong design and development capabilities relative to innovative AI technology. However, more investment is needed to benchmark foreign systems.

Innovation is happening everywhere, and the United States needs to press forward and invest to stay ahead. To do this, one panelist advocated starting from a specific problem when pursuing AI computing solutions, as AI does not have a one-size-fits-all solution. The importance of looking at mission capabilities and getting assistance from experts without revealing compromising information was also stressed. This, according to Dr. Gaston, allows the public to build a platform with machine learning capabilities. The discussion also highlighted the importance of adopting the NARS (Non-Axiomatic Reasoning System) AI initiative. This open-source project needs analysts who can assist with classifying and categorizing data, focus on the core infrastructural components that enable machine learning, and shape the workforce to learn and work with AI.

The audience was informed that there are several ongoing opportunities for all citizens to learn about naval projects in AI. One panelist spoke of applications within the Navy, which is actively adopting and adapting technology both in operations and in the lab. He applied this to small and mid-sized businesses, stating that creating IT solutions starts with identifying the problem. Other panelists echoed this sentiment, saying that after identifying the problem, you should identify your budget and access resources online to find affordable, low-cost solutions.

Fears of AI Application

When asked about any fears regarding AI’s application, all panelists expressed confidence in AI’s benefits and assured the audience there was little to fear. Fear of AI comes from a lack of understanding, but there remains hope that machines will be constantly supervised and corrected in order to uphold ethics and prevent bias. The other panelists also showed optimism about AI, qualified with words of caution. This included making low-budget headway into AI in a way that protects open source and ethics, because the computer science community will gravitate toward the country with the most ethical opportunities. One panelist expressed optimism about the future, but stated that to be effective in AI, shifts must be made that embrace, rather than ignore, the “Four Horsemen” of AI conflict: scale, speed, strategic coherence, and knowledge.

Another advocated sending individuals to academic institutions to learn how AI functions so that the standards of machine learning can rise. The conversation included a call to shape the world to increase artificial intelligence involvement from people of all backgrounds in industry. The main argument was that if an algorithm and its methodology cannot be explained, it cannot be trusted, as that impacts the industry’s brand. The panel concluded with the suggestion that the audience seek a senior champion who believes that failure is okay. Patience is necessary, and failure must be learned from and embraced in order to make progress in a risk-averse culture. The goal for the audience, from one perspective, should be to understand data and cultural change in order to identify the problem they want to solve. To solve that problem, we have to stop speaking in system terms and start speaking in solution terms.

WhiteHawk’s Terry Roberts commended the panel’s remarkable expertise and drive regarding the scoping and execution of artificial intelligence enablement and its exciting future in our everyday lives. In her own words, Terry described the panel as “a deep microcosm of AI understanding, expertise and practice. Innovation leaders who understand the potential and are leading responsible problem-solving research, development and adoption within their respective academic and government organizations - these are AI practitioners who should be partnered with.”

You can view the entire panel courtesy of CybersecurityTV by clicking the following link.

 
