The modern warfighter relies on increasingly advanced technologies and systems to gain advantages over capable adversaries and competitors. The US Department of Defense (DoD) understands this all too well and must therefore integrate Artificial Intelligence and Machine Learning more effectively across its operations to maintain those advantages.
To remain competitive, the US Army has created the Army Talent Management Task Force to address the current and future needs of the warfighter. In particular, the Data and Artificial Intelligence (AI) Team shapes the creation and implementation of a holistic Officer/NCO/Civilian Talent Management System. This system has transformed the Army's efforts to acquire, develop, employ, and retain human capital through a hyper-enabled, data-rich environment and enables the Army to dominate across the spectrum of conflict as part of the Joint Force. Kristin Saling, Chief Analytics Officer & Acting Director, Army People Analytics, is an integral part of getting the Army AI ready and shared her insights with us for this article. She will also be presenting at an upcoming AI in Government event, where she will discuss where the US Army currently stands on its data collection and AI efforts, some of the challenges it faces, and a roadmap for where the DoD and Army are headed.
What are some innovative ways you're leveraging data and AI to benefit the Army Talent Management Task Force?
LTC Kristin Saling: We are leveraging AI in a number of different ways. But one of the things we're doing that most people don't think about is leveraging AI in order to leverage AI – and by that I mean we're using optical character recognition and natural language processing to read tons and tons of paper documents and process their contents into data we can use to fuel our algorithms. We're also reading in and batching tons of occupational survey information to develop robust job competency models we can use to make recommendations in our marketplace.
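Turning OCR output into algorithm-ready data typically means a structured-extraction pass over the raw text. The sketch below is purely illustrative, not the Army's pipeline: the field labels and document layout are hypothetical, and a real system would sit downstream of an OCR engine such as Tesseract.

```python
import re

def extract_fields(ocr_text: str) -> dict:
    """Pull labeled fields like 'Name: ...' out of OCR'd document text
    into a dict suitable for feeding downstream models."""
    fields = {}
    for line in ocr_text.splitlines():
        match = re.match(r"\s*([A-Za-z ]+):\s*(.+)", line)
        if match:
            # Normalize the label into a snake_case key.
            key = match.group(1).strip().lower().replace(" ", "_")
            fields[key] = match.group(2).strip()
    return fields

# Hypothetical OCR output from a scanned personnel form.
sample = """Name: Jane Doe
Military Occupational Specialty: 17A
Years of Service: 8"""

record = extract_fields(sample)
# record == {'name': 'Jane Doe',
#            'military_occupational_specialty': '17A',
#            'years_of_service': '8'}
```

In practice the extraction step is where most of the cleanup effort goes, since OCR noise and inconsistent form layouts break naive label matching.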
On the other end, we're leveraging machine learning models to predict attrition and performance for targeted retention incentives. We have partnered with the Institute for Defense Analyses to field the Retention Prediction Model – Army (RPM-A), which generates an individual retention prediction vector for every single Active Army member. We're developing the Performance Prediction Model – Army (PPM-A) as a companion model that uses a number of different factors, from performance to skills crosswalked with market demand, to identify the individuals the Army most wants to keep. These models, used in tandem and informed by a number of randomized controlled trials of retention incentives, will give Army leaders a powerful toolkit for offering the incentive menus most likely to succeed to the personnel at risk of attrition whom the Army most wants to keep.
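An individual prediction vector of the kind RPM-A produces can be pictured as one probability per member from a scoring model. The sketch below is loosely inspired by that idea only: the features, weights, and logistic form are invented for illustration, and the actual RPM-A model is not public.

```python
import math

# Hypothetical feature weights for a toy retention model.
WEIGHTS = {"years_of_service": 0.15, "recent_promotion": 0.8, "deployments": -0.1}
BIAS = -0.5

def retention_probability(member: dict) -> float:
    """Logistic score: estimated probability the member stays in service."""
    z = BIAS + sum(WEIGHTS[k] * member.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

roster = [
    {"id": 1, "years_of_service": 4, "recent_promotion": 1, "deployments": 2},
    {"id": 2, "years_of_service": 2, "recent_promotion": 0, "deployments": 3},
]

# One prediction per member -- the "prediction vector" across the force.
predictions = {m["id"]: retention_probability(m) for m in roster}
```

A production model would be trained on historical outcomes rather than hand-set weights, but the output shape, one calibrated probability per person, is the same.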
How are you leveraging automation at all to help on your journey to AI?
LTC Kristin Saling: We are looking at ways to employ Robotic Process Automation throughout the people enterprise. RPA is an unsung hero when it comes to personnel processes and talent management, especially in a distributed environment. We can automate a huge portion of task tracking, onboarding, leave scheduling, and so forth, but I'm particularly looking at it in terms of data management. We're migrating a huge portion of our personnel data from 187 different disparate systems into a smaller number of data warehouses and enterprise systems, and this is the perfect opportunity to use RPA to ensure that we have data compatibility and model ready datasets.
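A consolidation like that usually includes automated compatibility checks as records flow from legacy systems into the warehouse. This is a minimal sketch of such a check, assuming a hypothetical target schema and invented legacy field names, not any actual Army system.

```python
# Hypothetical warehouse schema the migrated records must satisfy.
TARGET_SCHEMA = {"soldier_id": str, "last_name": str, "rank": str}

def normalize(record: dict, field_map: dict) -> dict:
    """Rename source-system fields to the warehouse schema and verify
    the result is complete (i.e., model-ready)."""
    out = {}
    for src_key, dst_key in field_map.items():
        if src_key in record:
            out[dst_key] = str(record[src_key]).strip()
    missing = set(TARGET_SCHEMA) - set(out)
    if missing:
        raise ValueError(f"record not model-ready, missing: {sorted(missing)}")
    return out

# Two legacy systems may name the same field differently; the map reconciles them.
legacy_a = {"SSN_ID": "A123", "LNAME": "Doe ", "GRADE": "SSG"}
row = normalize(legacy_a, {"SSN_ID": "soldier_id", "LNAME": "last_name", "GRADE": "rank"})
# row == {'soldier_id': 'A123', 'last_name': 'Doe', 'rank': 'SSG'}
```

An RPA bot running checks like this at migration time is what turns "moved data" into "model-ready data."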
How do you identify which problem area(s) to start with for your automation and cognitive technology projects?
LTC Kristin Saling: We do a lot of process mapping and data mapping before we start digging into a project. We need to understand all the different parts of the system that our changes are going to affect. And we revisit this frequently as we develop an automation solution. Sometimes the way we're developing the solution renders different parts of the system obsolete and we need to make sure we're bypassing them appropriately with the right data architecture. Sometimes there are additional things we need to build because of where the information generated by the new automation needs to be fed. It's just important for us to remember that nothing we build truly stands alone; it's all part of a larger system.
What are some of the unique opportunities the public sector has when it comes to data and AI?
LTC Kristin Saling: The biggest opportunities I think we have (in the Army at least) are that we have extremely unique and interesting problem sets and applications, and we also have an extremely large and innovative workforce. While we have a number of challenges, we also have a lot of really talented people joining our workforce who were drawn here by the variety of applications we have to solve and some of the unique data sets we have to work with.
What are some use cases you can share where you successfully have applied AI?
LTC Kristin Saling: Successfully applying AI is a tricky question. We've created successful AI models, but applying them becomes extremely difficult when you consider the ethics of taking actions on the information we're generating. One example I can cite is the STARRS program – Studies to Assess Readiness and Resilience in Servicemembers. It's an AI model in development that identifies personnel at the highest risk for harmful behaviors, particularly suicide. Taking that information and applying it in an ethical way that enables commanders and experts to enact successful interventions is extremely difficult. We have a team of scientific experts working on this problem.
Can you share some of the challenges when it comes to AI and ML in the public sector?
LTC Kristin Saling: The availability of good data is a challenge. We have a lot of data, but not all of it is good data. We also have a lot of restrictions on our ability to use data, from the Privacy Act of 1974, the Paperwork Reduction Act, and all of the policies and directives derived from those. Without an appropriate System of Records Notice (SORN) that states how the data was collected and how it is to be used, we can't collect data at all, and even with one, the SORN significantly limits how that data can be used. The best AI models can't make better decisions on bad data than we can – they can just make bad decisions faster. We really have to get at our data problem.
How do analytics, automation, and AI work together at your agency?
LTC Kristin Saling: We see all of these things as solutions in our data and analytics toolkit to improve processes. Everything starts with good data first and foremost, and automation, when inserted in the right places in the process, helps us get to good data. We treat AI as the top end of the progression of analytics: descriptive analytics help us see ourselves; diagnostic analytics help us see what has happened over time and potentially why; predictive analytics help us see what is likely to happen; prescriptive analytics recommend a course of action using the prediction; and if you add one more step in decision autonomy, enabling the machine to make the decision instead of just recommending a course of action, you have narrow artificial intelligence. We've been most successful when we've looked at our data, our analytics, our people, our decision processes, and the environment these operate in as a total system, rather than when we've tried to develop solutions piecemeal.
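The progression described above, predictive to prescriptive to narrow AI via decision autonomy, can be sketched in a few lines. Everything here is illustrative: the threshold, the incentive action, and the autonomy flag are invented to show the distinction between recommending and deciding.

```python
def predict_attrition(risk_score: float) -> bool:
    """Predictive analytics: is this member likely to leave?
    (0.6 is an arbitrary illustrative threshold.)"""
    return risk_score > 0.6

def recommend_action(risk_score: float) -> str:
    """Prescriptive analytics: recommend a course of action from the prediction."""
    return "offer retention incentive" if predict_attrition(risk_score) else "no action"

def decide(risk_score: float, autonomy: bool) -> str:
    """With decision autonomy enabled, the machine acts on its own
    recommendation instead of only surfacing it to a human --
    narrow AI in the taxonomy above."""
    action = recommend_action(risk_score)
    return f"EXECUTED: {action}" if autonomy else f"RECOMMENDED: {action}"

print(decide(0.8, autonomy=False))  # RECOMMENDED: offer retention incentive
print(decide(0.8, autonomy=True))   # EXECUTED: offer retention incentive
```

The only difference between the last two calls is who carries out the action, which is exactly the step that separates prescriptive analytics from narrow AI in this framing.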
How are you navigating privacy, trust, and security concerns around the use of AI?
LTC Kristin Saling: Our privacy office, human research protection program, and cyber protection programs do a lot to mitigate some concerns about the use of AI. However, there are still a lot of concerns about the ethical use of AI. To a large portion of the population, it's a black box entity or black magic at best, Skynet in the making at worst. The best way for us to combat this is education. We're sending many of our leaders to executive courses on analytics and artificial intelligence, and developing a holistic training program for the Army on data and analytics literacy. I firmly believe when our leaders better understand how artificial intelligence works and walk through appropriate use cases, they will be able to make better decisions about how to ethically employ AI, better trust how we employ it, and ensure that we are preserving privacy and data/cyber security.
What are you doing to develop an AI ready workforce?
LTC Kristin Saling: Our Army AI Integration Center (AI2C - formerly the Army AI Task Force) has established an education program called AI Scholars, where about 40 students a year, both military and civilian, will take part in graduate degree programs at Carnegie Mellon and eventually at other institutions in advanced data science and data engineering, followed by a tour at the AI2C applying their skills to developing AI solutions. Our HQDA G-8 has sponsored over 50 Army leaders through executive courses in AI at Carnegie Mellon, and ASA(ALT) has sponsored still more through executive courses at the University of Virginia. Our FA49 Operations Research and Systems Analysis career specialty and FA26 Network Science and Data Engineering career specialty have sponsored officers through graduate level AI programs. Through all of this education and its application to a host of innovative problem sets, the Army has created a significant AI workforce and is continually working to improve how we employ this workforce.
What AI technologies are you most looking forward to in the coming years?
LTC Kristin Saling: I'm a complexity scientist by background, and I'm fascinated by the applications of this field to autonomous systems, particularly swarms, and the host of things we'll be able to do with these applications. That's my futurist side speaking. My practical side is just looking forward to simple automation being widely adopted. If we can just modernize our Army leave system from its current antiquated process, I will count that as a success.
Source: Forbes.com