
AI and the Future of the U.S. Defense Department

July 29, 2019



Wow, thank you. Oh, I appreciate it, thank you so much. So I'm Patrick Tucker. I'm technology editor for Defense One; it's a national security news site from Atlantic Media. Before that I was the editor of a magazine called The Futurist for a while — the deputy editor, I should say; I know that guy will get mad if I say that. I wrote a book called The Naked Future: What Happens in a World That Anticipates Your Every Move, about predictive analytics, and that came out in 2014. That helped me develop an appreciation for how big data is going to change the user experience for anyone that accesses the digital world, which is everybody, and how it's going to especially change the future. And that led me to Defense One, where I now cover the effects of emerging technology on national security in a really broad way that includes everything from next-generation — fifth-, sixth-generation — fighter jets to Russian election interference to quantum, et cetera. I don't know how well I cover that breadth, but I get to talk to really smart people all the time, and so most of what I do is telling you what the smart people told me. So, sorry about that.

Also, just a heads up: this is not a super technical talk. There'll be a couple of very elementary terms related to artificial intelligence, but I can't help you with your Markov model — that's someone else — or, you know, your regression analysis. If you have that information, please share it with us all during the question and answer period. I'm going to give a little talk, then we're going to open it up for questions and answers about my experience covering the development of artificial intelligence in the Defense Department. I feel really lucky to be here because, of the groups I get to talk to, this seems like by far one of the most on-it, technically aware, on-the-trend ones. So I hope you all will talk to each other a little bit too; we'll make this an interactive type discussion about some of these issues, because the information that you have is every bit as important as the information that I have — to me, if not infinitely more so, because I already have my information; I know what it's worth, and now you're about to know too.

So, as I said, I've been covering artificial intelligence in a broad way — with The Futurist, writing my book, and at Defense One — for several years, and I've watched its evolution a little bit, and watched the Defense Department enthusiastically pick it up. Here's where they're going to take it in the years ahead. Before we get into that, though, a little humanities question — let's see if this works; there we go. Who knows the etymological origins of the word robot? Raise your hand if you know what language robot is based in. Okay, right here, we've got a volunteer. It is Czech — oh my god, I always like it when somebody knows. It is Czech. It is derived from robota, which is servitude, derived from the Slavic rab, which is slave, and it was put into modern usage by a Czechoslovakian playwright named Karel Čapek. He wrote a play called Rossum's Universal Robots, around 1920, so it's just about a hundred years old. I bring this up when I talk about artificial intelligence because it's keenly important to understand where our discussion of artificial intelligence comes from, and especially where all of our projections for it come from, because they were actually born into the idea of it at its genesis.

So, Rossum's Universal Robots — see if this whole narrative strikes you as familiar. Act one: an inventor — you might typecast him as an Elon Musk type or a Jeff Bezos type, a Silicon Valley megafauna person — invents a new mechanical race of people. They're kind of more like the skin jobs in Blade Runner than something that's super-duper mechanical, but they are definitely an artificial intelligence, and they're meant to
do the service of man. Act one: he becomes fantastically wealthy, Silicon Valley wins, everyone is really excited about this future. Curtain. Act two: the robots take everyone's jobs — massive unemployment, civil disruption, civil war, bankruptcy. Act three: guess who's trying to kill everybody. Go ahead and guess. So this is important, because our idea that we're going to first make a lot of money and save a lot of labor off of robots, and then they'll take our jobs, and then they'll kill us — that's a projection that is born into this very idea, so we carry it with us in all of our conversations about it. Before we get to how the United States Defense Department wants to use artificial intelligence, I think it's important to understand that our discussion about this, on a sort of emotional level, in terms of our cultural history, is informed by that projection, and you have to recognize it, put it in a box, and understand it for what it is before you can have a real discussion about the very real challenges and the very real opportunities that artificial intelligence has in the future. Because employed incorrectly, or designed poorly, it can have very real and very bad consequences, and you do have to deal with those in a way that's separate from our cultural history of the term.

So: exponential advancement. Artificial intelligence is very much a trend of life right now. This is one of my favorite quotes about the evolution of this technology. George Plimpton — wonderful writer, the editor of The Paris Review for a long time — was watching Deep Blue beat Garry Kasparov in chess in 1997, at a hotel in New York, and he walked out of that hotel and had this observation about what he had just seen. He said: the machine isn't going to walk out of the hotel there and start doing extraordinary things. It can't manage a baseball team, can't tell you what to do with a bad marriage. Right? That's 1997, so roughly twenty years ago.

Now consider how artificial intelligence has advanced since then, just in terms of what's constantly screaming across the headlines in the tech news that we probably all consume. Consider AlphaGo, the neural net from DeepMind that beat all the world's best Go champions — like, every one of them — and consider how much more difficult Go is than chess. This comes from Demis Hassabis, the guy behind DeepMind, one of its core founders: the number of branching moves in chess is about 30 to 40; in Go it's like 200 to 250. So it is an exponentially more difficult task.

What on this list here would you not trust an artificial intelligence, or a robot running an artificial intelligence program, to do today? Can they walk? We've all seen this thing of Boston Dynamics and their Atlas robot doing backflips — they LIDARed the heck out of that room first, I'll bet, before it does that — but we see that it can walk in the wild. Manage a bad marriage? Does it have access to my entire textual history, my entire email history, what I do at the end of the day, my likelihood of going to the gym versus having a drink? Can it take in all of that data and output a probability of me making a mistake in my marriage? That's realistic now, if you consider the applications of big data within artificial intelligence, which is not something George Plimpton really had the benefit to consider when he made this pronouncement in 1997. So digital information coupled with big data — that's what's truly transformative about this technology right now, and also scalable compute. That's what changes this 1997 forecast into something where we see exponential advancement.

And it's everywhere. Machine learning is actually everywhere, as I think a lot of folks know — certainly you guys know. It's helping my phone find the closest cell tower signal, and it's determining what objects show up in my Facebook feed, and it's selling all of that to marketers that then
make another decision about what ads to show me, et cetera, et cetera. So right now, as we experience machine learning, when it screws up — when somebody applies it, or designs some machine learning solution to a problem, whether it's ad optimization or something else, and it's less than a perfect solution — the outcome is probably a slight annoyance on my end, or a smidgen of lost revenue on the end of the advertiser or whoever is using it. That's going to change as low-level machine learning moves into more and more aspects of life.

Who's familiar with this term, narrow AI? Good, a handful of you. So it's important to break apart the two fields of artificial intelligence; one is far more in the future and much bigger. Narrow AI is the AI — the dumb bots — that we're dealing with all the time now, and that we're going to be putting in charge of more and more important functions of life. There are real risks here in terms of getting these applications wrong, getting the design of these wrong, putting them in charge of the wrong stuff. The question we need to ask is: did we define this problem correctly, and do we understand the scaling functions? Because when it goes wrong it doesn't look like an apocalypse. It really looks like this. This is from The Sorcerer's Apprentice, a wonderful part of Fantasia, the 1940 Disney film. Mickey is a sorcerer — or a programmer, if you will — and he's got a very mundane task. It's very boring and very repetitive, exactly the sort of thing you could apply machine learning to: he has to move a bunch of water from one place to another. He develops a program, which is these brooms that pick up the water and move it from one place to another, but he miscalculates both the volume that the water-holding container can take and the rate at which they're going to do this. So he overproduces, and they trample him. That's an example of narrow AI applied with a poor scaling function.

Another example is high-frequency trading, when you have flash crashes — which are less common now, but this was a big concern — where you have a whole bunch of very simple programs that are trained to dump stock when it reaches a certain level of volatility, or when the price depreciates past a certain point. Because multiple firms were using them, you had this incident where all of them were reacting to what the other high-frequency-trading dumping algorithm was doing, and you wind up with billions of dollars of value temporarily lost. It came back, but this is a real, consequential thing. So the question to ask — and this is something that, as we approach real applications of artificial intelligence for the military, they're increasingly asking, and need to ask, and you can help them with — in terms of how you're applying your machine learning solution, in terms of how you're applying your narrow AI: are you understanding the scale correctly? Are you designing or asking the right question? That's something the commanders in particular are very concerned with when they talk about how AI might help their life.

The other brand of artificial intelligence is general AI. This is the one that's still a little ways off. It's kind of based on neural networking, but really it doesn't exist yet in any form we can point to. This is the application of artificial intelligence to a general set of problems, so it's closer to the way a human being thinks, because we are general-purpose intelligences. We apply our brain to a set of problems that is general: winning chess, walking out of the hotel room, getting in a cab, fixing a bad marriage — there's a general set of problems. If you have a single artificial intelligence that can handle a wider set of problems than a narrow one, then you have a general artificial intelligence. In many ways people say that DeepMind might
be on the cusp of being accurately described as a type of this, so long as you're giving it rather structured data. If you can gamify your problem set, supposedly DeepMind can handle it really well. But again, when we talk about a general set of problems versus a narrow set of problems, we're talking about the difference between pretty structured and largely unstructured data, because unstructured data is what we deal with all the time. That's what we do: we walk around in unstructured data all day. We evolved our brain over 450 million years to do that. So there are real questions about how well these systems — particularly the neural nets that we're creating, fed with all sorts of open-source intelligence information, diagnostic information, whatever you're feeding to them — are actually going to do this, absent that evolutionary process. There's a big philosophical question there. And these are the central questions, particularly for anyone in intelligence. The Defense Department is looking to harness the next level of artificial intelligence because they are curious about it, and if it actually stands up, they are anxious to work it into what they do in a way that's safe. But the question is: are we mistaking the judgment of this thing for human judgment? And most importantly, the data that's feeding artificial general intelligence — would we be able to trust it? Again, we're talking about "would we"; it's all very much in the future. They're interested in this, but general AI doesn't really exist yet, certainly not in a form that's accessible to the Defense Department. But there are signs that it's emerging, and these are the defining questions for the people I talk to at the top of intelligence and at the top of the Defense Department — the questions they grapple with in thinking about how to use it or what to do with it; their reservations, if you will.

And it's important, because when general artificial intelligence messes up, you have a bad day. A great example: the movie WarGames. What happens? You have a general artificial intelligence whose job it is to advise the Defense Department. It's a decision aid — kind of Alexa-like, if you will. The decision aid is supposed to advise the Defense Department on the likelihood of winning a thermonuclear war and on the proximity of that war, and, well, basically because somebody hooked it up to Matthew Broderick, the data set's all wonky — because Matthew Broderick is the Soviet Union, and he's just playing away in the suburbs — and it goes very bad. So, data integrity. We talk a lot about data integrity here, and we aren't nearly done figuring out the problem of data integrity. It's hugely important, especially as you get into the question of what you want real artificial intelligence — intelligence you can apply to general problems — to do. It's something everyone I talk to, both in intelligence and defense, is keenly obsessed with. There is no solution for it right now; there's some fun research activity, but it's very much an unsolved issue.

So the thing is that the military has a mandate for developing artificial intelligence that the rest of the world does not. When we talk about Uber, or a big private company, and how they might use autonomous systems in an urban setting — basically replacing their human drivers with self-driving vehicles, et cetera — then we have a conversation about labor and all of these other things. None of those concerns really apply to the military in the same way. The military has to have technological superiority over its adversaries, particularly its peer adversaries. This is core to the new national defense strategy that they pushed out. It's very Cold War-sounding: Russia and China play a really key role, right up front, in terms of what we're developing technology against. We know that there's a lot of research effort there and a lot
of funding. And also, the tasks that the Defense Department would put on artificial intelligence are the dumb and dangerous tasks, so you don't have to worry there about displacing labor in the same way — judgment displacement is a thing, but labor displacement isn't an issue in the same way. So the defense space is going to have a big future in terms of the impact of artificial intelligence, because it can bypass labor concerns and other concerns. It's an exciting time to be talking about this.

This is from November of 2014, and this sort of opens the new chapter in the Defense Department's approach to artificial intelligence. The Pentagon announces that they are going to be pursuing what they call an offset strategy — named after the first two offset strategies, this is the third offset strategy — and the origin of that is that they're going to offset technological gains being made by peer nations by developing new, super-breakthrough, super-relevant technologies for, like, the year 2030. They're basically going to do kind of little moonshot stuff; that's what the offset strategy was. A big focus of it was artificial intelligence, and later on me and some other folks figured out that it's not just artificial intelligence — it's a very particular approach to artificial intelligence that's really key for everybody to understand, and that is human-machine teaming. Keeping a human being at the top of the important judgment cycle is core to everything the Defense Department is trying to do right now with artificial intelligence, as opposed to just removing the human entirely from the important decision cycle.

So this comes not from Chuck Hagel — nice guy, absolutely — but from then-Deputy Defense Secretary Bob Work. Bob Work, who is now at CNAS, was really core to the Defense Department developing the approach to artificial intelligence that they're undertaking today. He read a book by a guy named Tyler Cowen called Average Is Over — it's on his shelf — and in that book Tyler Cowen talks about a guy named Kenneth Regan. Kenneth Regan found a new way to play chess that's better than robotic chess: he found that when you take a human being and combine them with an exceptional chess-playing robot, the resulting team beats both the best human chess player and the best artificial-intelligence chess player. Bob Work read this and said: this is the way to go; this is core to how we're going to approach the development of artificial intelligence. And for the US Defense Department it makes a lot of sense. Number one, it keeps human accountability within the chain of command. They like this: when somebody screws up, they like to be able to say he's fired, he's court-martialed, he's in trouble, et cetera. It's important to have human accountability, especially for commanders that have to integrate international partners into what they're doing; that's core to this. It also makes use of what the Defense Department sees as its most important asset, which is these super-trained individuals. The amount of money and time that's poured into training a particular member of military personnel today is enormous. This idea of human-machine teaming — optimizing human decision-making through artificial intelligence that serves in a support role underneath the human — makes use of both the US need for human accountability and also a US strength, which is US training and US servicemen capable of independent thought. That's something the Defense Department sees as both a core obligation and a core strength.

So Bob Work developed a strategy for building, developing, and integrating artificial intelligence into military operations that keeps that idea at its core: human-machine teaming. It's based partially on what he learned about what was then called freestyle chess but later became known as centaur chess, where it's human plus machine. That's key to everything they're doing. Also key to this: Bob Work would go out and give talks every so often where he would highlight the particular approach the Defense Department was taking to artificial intelligence and contrast it with what he saw coming out of, particularly, China and Russia, and also emphasize that the United States is not alone in attempting to develop, to build, to buy, and to integrate these emerging technologies into everything that they do at a military level. So you can see things kind of moving along here in terms of our headlines. This is the Armata; it's a Russian tank that also has autonomous self-driving capability in it. It's a good time to highlight that in terms of how China or Russia would employ artificial intelligence in a battlefield setting, we really don't have any clue yet. But we do know that Russian defense contractors like Kalashnikov have advertised autonomous lethal activity — lethal capability — as a feature in stuff they want to sell to the Russian government. And we also know that the Russian government very recently said — this is a paper that they submitted to the UN — we can't even define what a lethal autonomous system is, so we're completely against any attempt to regulate or ban that. Right. And China said — well, whatever they said, and this is true — we're kind of open, kind of agnostic; we'll decide. That sounds about right. So
that is in contrast to the way the U.S. works. We have a directive that came out in 2012, written by then-Deputy Secretary Ash Carter, who later became the Defense Secretary, that says you need to have a human in the loop at all times when you're talking about kinetic activity — when you're talking about actually taking a life. A human being has to be the one to make the decision to take a life. That's in a doctrine that came out in 2012; it was renewable for a little while and has since been made permanent. And again, it showcases the core of the way the Defense Department operates: it keeps human accountability in there, important for the chain of command and important for international partners to have their trust, and it makes use of the amount of time we spend training individuals to think independently and skilling them up to do that. So they like to highlight this contrast. A doctrine is not a law; it is a thing that can change. But at present this is Defense Department doctrine — and believe me, I've been waiting to write the headline for a gazillion years: "Defense Department now says they're making the killer robots." I've been pushing for it. But they're keeping this; Mattis in particular likes this doctrine. Though it is not a law, I don't think it's going anywhere. It remains really core to their approach to this space.

So what have they begun to do with artificial intelligence? In 2015 I wrote this story about special operations forces at the front line and attempts to fuse open-source information with a lot of map data and a lot of other comms to create a probabilistic scenario for areas they were going into — very tip-of-the-spear type stuff — and much of the way the rest of the Defense Department is going to approach artificial intelligence is going to be based around that experience, to a certain extent. And here's what that is. The most recent joint operational strategy document is classified — it's unusual that it is — but after it was classified, the chief of the Air Force, the chief of the Navy, and the chief of the Army all went out and started giving talks about what they wanted to do for the future, and all the talks made sort of the same point: we need to connect absolutely everything on the battlefield to absolutely everything else. We need ubiquitous connection across the entire thing. It's the only way we're going to make it past anti-access/area-denial defenses — and that means big radars connected to big missiles, and big drones that take off really fast, that are going to keep us out of an area. The only way we defeat that is if we connect absolutely everything to everything else — every drone, every sensor on every soldier, every sensor in every car that soldier drives, every satellite, every jet, every ship, every sub, et cetera — in this huge web that is at once transparent to every operator at the front line who needs whatever information can be provided from that web, and certainly transparent from abroad: from combatant commands, from theater commands, from Washington DC, from Brussels, from wherever. This ubiquitous connection is core to the way they're moving forward, and most importantly, so is the way they accept data into a much larger data-collection sponge than I think has ever really been achieved by a single institution on this planet. Because it's not going to be diagnostic data off of one platform; it's really off of all of them, plus signals intelligence potentially, plus commands from human operators, or other pieces of intelligence — all of these things that make up this nervous system. It really is that: it's supposed to be a big sensor. It's supposed to suck all of this information that's potentially digitally collectable into a central funnel and then push it out as needed to where it needs to be. And in order to do that task, in keeping with the vision they have for it — that's not
something that happens without artificial intelligence at an incredibly high scale. So — let me replay the video — this is a little example of what some of that experimentation looks like, from about a year or two ago, 2015 I think. [From the video:] "For some time, certainly since the OIF/OEF era, our adversaries have been quickly catching up, so they're investing in these areas; we see them moving in areas that we haven't looked at since the Cold War. Everything we talk about in the Marine Corps, in the maneuver-warfare mindset, is maneuvering faster than the enemy — the operational tempo; moving fast is what we try to focus on. It's certainly gotten a lot worse. Where we were focused on fighting a technologically primitive enemy, what we see now is that the anti-access problem has gotten to the point where we need to start coming up with new ways to do things, and we think you take technology as part of the solution to that going forward. The first one into the room should never be an air breather; it should be a robot with lethal capability. Same thing coming ashore." Thanks. So yeah: the first one into the room, the first one onto the shore, should be a robot with lethal capability. And that doesn't mean outsourcing the kill decision to that robot — they're very clear about maintaining that — but it does mean that the first one onto that beach or through that door is a robot with lethal capability. That happened at Red Beach, which in many ways is sort of the absolute best-case scenario for an invasion. In the future, real combat is much more likely to take place in an urban setting that's hard, that's full of complexity — I mean, you want to talk about an unstructured data set, it's an unstructured data set that any human operator is going to have a tremendously difficult time making sense of. So making sense of that is key to this: taking all of the data that potentially exists and moving it into a place where it's understandable to every commander and every operator, in terms of what it is and what they do.

There's going to be forty-four times as much digital information in the year 2020 as there was in 2009. That's a quote from IDC that came out a couple of years ago, and it's a good forecast; I think it might even be a bit of an underestimation. Much of that is streaming data, but it's still data, and those are basically the building blocks that the United States Defense Department is looking to use, on top of a lot of other things, to create next-generation artificial intelligence capabilities.

There's a couple of projects you've probably heard about. There's one called Project Maven, a partnership with Google. Google has since said that they want to exit the partnership because their employees — much of the developer corps — revolted against it, and several resigned. I can tell you that everyone I've talked to at the Defense Department has said that project has been an incredible success. It's basically tagging and identifying objects and images in video footage so that you can cue an analyst to the portion of video footage they need to pay attention to. It's such a success that the Defense Department is actually standing up an entire artificial intelligence institute; that's happening later this year. They don't talk too much about it publicly, but in terms of the sources I've talked to about it on background, who have been a part of that discussion, what they want to do is replicate Project Maven — which was an Air Force and Special Operations project — across the entire services: come up with little problems that every service has, perhaps every combatant commander has, and figure out if they can sell them on an AI solution to that problem in order to get buy-in from the services. Then, if you have that, you're actually solving problems in real time as opposed to solving big theoretical things. It's very core to what Mattis wants to do, and we're seeing that bear fruit.

There's also a big program you'll be hearing about later this year, if you follow this space at all, called Data to Decision. That's an Air Force program as well, and it's incredibly ambitious: taking essentially every piece of data that might exist in the air domain or be relevant to the air domain — which is kind of every piece of data: open-source intelligence, social media posts, certainly plane diagnostics, certainly biophysical feedback from pilots, certainly feedback from drones, signals intelligence collection, whatever else — and turning that into something that can serve as a decision aid to a commander. That implies virtually anything you want to make of it. It implies potentially running counter-scenarios against a concept of operations that's currently being employed, to see if the thinking underlying that concept of operations is still relevant. It's basically everything; it's incredibly ambitious. They're at the very beginning experimental stages of it, and they're going to be moving out with more questions about funding, and funding to be awarded as part of that.

And you also see this huge move to the cloud. I can tell you that the Defense Department's move to the cloud — both through the enormous JEDI contract and also all the other follow-on cloud contracts they'll be sending out — is very much related to that nervous system: taking every piece of battlefield data that's relevant, including open-source intelligence that is collectible, phone, whatever, and doing something with it that is relevant to every commander, relevant to Washington, relevant to the person on the beach or the person in that urban warfare environment. And — Mattis is keen on this too — it has to be relevant to what's called the tactical edge, and that means the guy kicking down the door, walking into the unknown. This cloud has to be relevant to all of
them and it's a huge undertaking so with that we've got about 17 minutes for questions I was surprised that I went on so long I'm sorry but I would love to hear from you and love to hear your thoughts so questions about artificial intelligence the Defense Department or Defense Department tech in general or or anything shoot that's not thank you for purchasing a presentation I have a general question what role or what extent does Congress have in mandating AI and the Defense Department or even regulating it just there's a broad concept thank you there might be some aspect of funding the new AI Institute or funding different endeavors that Congress will have some some weigh in on potentially the House Armed Services Committee and the the Senate Armed Services Committee they are on board with all of this this to the extent that this increases lethality effectiveness has the potential to decrease cost I haven't spoken to a member of Congress that is interested in retarding the Defense Department's emerges into this area but I can only think of in terms of funding unless they pass some sort of hard law that makes law out of the Defense Department doctrine that's the only thing that I can think of no one is talking about doing that so there's really no meaningful congressional oversight of Defense Department acquisition or development of artificial intelligence that I can think of right now other questions yes from Rome they wait we heard that we had microphones what what are the skills that that are needed for this you you mentioned people will need to have some sort of skills and what are the resources that are either available right now or maybe will be available in the future it's a very good question very good question well you're all an AWS conference and well let me tell you them they really want this cloud thing really bad and so that's AWS understanding that and also understanding other cloud environments in a general way will be very important to that just in 
terms of understanding artificial intelligence at its core: statistics is hugely important. Just learn what a Markov model is, and learn who Thomas Bayes was. Bayes is actually the guy in the background of all of this revolution that we're living through right now: a very simple algorithmic approach to probability that can be applied across all the other models that really permeate our lives right now, with the exception of a handful.

I would also say open-source intelligence verification. There's this whole group of people that exists online now who have become very good, within their community, at understanding the difference between jury-rigged satellite images and real satellite images. And that's really core to one of the big Defense Department fears: that they're going to be feeding open-source intelligence into some model that's going to be outputting a prediction about a battlefield outcome, outputting some advice, and the adversary is going to be jimmying up all of that open-source intel. It's a very real fear, because fake news is a real thing. So that's a core thing to look into as well.

But mostly, yeah, learn database management and the fundamentals of cloud computing. And statistics, just general statistics, will help you understand artificial intelligence in a way that actually makes it feel relevant to you; I highly recommend it. Andrew Ng has a course on Coursera that was very helpful on machine learning, and before you take that, retake statistics. Go ahead.

Q: AI use is based a lot on having data available. What sort of posture or investments has the DoD been making with its partners in trying to get big data onto the cloud? Or could they be mandating that their partners put all their data on the cloud, so we can build AI on that?

A: That's a very good question. It's probably not going to be a mandate, because coming up with a mandate for
partners is kind of a quick way to lose a partner. But I do think that they'll definitely be presenting value propositions to international partners, particularly in NATO, in the same way the AI Institute is going to be going to different service components with a value proposition: hey, give us your data (and that's the main thing that they want, give us your data) and we're going to output a solution to the problem that you're experiencing. So I do think that you'll see that a lot sooner than you'll see: hey, you know this exquisite platform that you have, or you know how we're helping you out in this situation? We really need a lot of access to your most closely held information, and we need it digitized and structured in a way that's going to be good for folks in Houston; can you just do that for us, like, real fast? That, I see, is further off. So instead of a mandate, I think you'll see outreach for turning data into solutions. I'm not sure how great the military as an institution is at that, but there are definitely folks within the military who are very good at it, so hopefully they rise to the top of that effort. Okay, more questions? There we go, wait, here in the center. I know, no one can figure out which mic.

Q: [Inaudible question about Russia and autonomous weapons.]

A: Yeah, well, in many ways, everything that the Defense Department is doing right now in artificial intelligence is based on at least the talking point, the pretense, that we have to counter their investment. But in terms of how Russia might deploy an autonomous lethal system to a battlefield in which we might find ourselves, that's a concept-of-operations concern that I haven't ever seen dealt with in a meaningful way yet. It is a concern, though, and it's also something to keep in mind here. One of the things that we're talking about, which the Defense Department has not yet solved for and which I think is going to increasingly pressure their position and their posture of keeping a human in the loop, is the
effect of artificial intelligence on big data, and of big data on increasing the speed of the observe-orient-decide-act cycle, the OODA loop. Basically, all operations on the battlefield become faster, and as they become faster, they move further away from meaningful human control. This is something that I've talked to people in the Defense Department about at the highest level. They're aware that they're going to face increasing pressure, from pacing and timing, to outsource more human decision-making to autonomous entities, to software, to simple machine learning programs, or perhaps to something more exotic. And that's in part because they believe that Russia and China are more likely to take that approach, because it can accelerate the pace of operations. So that's a real concern, but it's not solved. Yes?

Q: Is there any usage of blockchain technology as part of this entire suite of AI developments?

A: For a little while, the former head of DISA, the Defense Information Systems Agency (sorry, further acronyms, I've been in this job too long), General Lynn, was talking about blockchain as one potential option among many for securing communications and secure compute. And then he departed. I think what you'll see sooner than blockchain is just secure satcom using protocols that already exist, like MADL and Link 16, etc., and also just better and better encryption that's less exotic, because that's something that the contractors and the comms folks and everybody else are already used to dealing with. So I don't see meaningful movement toward blockchain right now, but it's something that the last guy in charge of that agency was willing to talk about a little bit. So if you can provide a solution that's better than those, then I think you've got a
case. Okay, yes. Nice suit, by the way.

Q: Thank you very much, I appreciate the shout-out, and I appreciated the presentation. Is it fair to say your presentation was focused on kinetic warfare? One of the challenges, if we're going to have this speeding up of the decision loop, would be around chemical and biological weapons, which I think are back on the table; many of the moral arguments against them have sort of disappeared in the last number of years. So how do you see that sphere playing out?

A: Chemical and biological weapons detection is, I think, also an area where you're seeing some research interest, particularly in the placement of ever more exceptional sensors closer to places where those weapons might actually exist, and then making sure that the feedback from those sensors enters into the larger decision-making cycle. There's good money going into, and very interesting research coming out of, ever better chemical, biological, and also nuclear weapons detection. And much of that job, importantly, has also been outsourced to special operations forces, who were adopting and innovating in the AI space from an early stage, in a way that other services weren't. So I do think there's interest in making sure that, for the sensor at the tip of wherever that might be, especially if it's some potential site of concern, the data is reliable, constant, and voluminous, and that it feeds into a decision-making cycle for a commander as quickly as possible. But that's just one of many potential threats, and you're right, I did focus mostly on the robots with guns, because we had footage of them; I went to a beach and that's what they were using. All right, over here.

Q: I'm curious about the use of non-lethal weapons. Is there anything in DoD policy that might talk about deploying those without a human decision-making process?

A: Define non-lethal weapon. Are we talking about information warfare, influence warfare?

Q: No, no. I don't
actually work in non-lethal, so I don't know exactly what they are, but as a woman traveling the world, I would say mace and Tasers: things that don't automatically result in the death of the person on the receiving end.

A: I know, a nice big gray area. What I can tell you is that I have yet to see somebody from the Defense Department, or a contractor dealing with the Defense Department, trot out their mace-deploying drone. But there are mace-deploying drones, and there are concepts of operation being developed for them, and I think it's an area of concern. I also think it's an area that's going to be ethically problematic. I trust, or believe, or hope, or pray, that really smart and ethically minded commanders, especially those most interested in keeping pace with the allies they've decided to show up with to a particular engagement, will have real hesitation about deploying some aspect of, I don't know, a robot with some sort of arresting effect, some non-lethal force effect, in a situation where they can't control who that adversary is. But it's an area of complexity. There is the Non-Lethal Weapons Directorate; right now they're mostly focused on jamming trucks that try to charge checkpoints (this is a big area for them) and also on directed energy. Directed energy is one of those technologies that you want human control of, and that is also easy to put a human in control of: that's lasers, potentially neutral particle beams, and things like that. So that's where I see most of the interest in non-lethal going: directed energy, jamming for vehicle stopping, and, bless you, sticking mace on drones. But okay, go ahead.

Q: In pursuit of AI, the most common technique you mentioned was neural networks. The issue with a neural network is
that when backpropagation fails, it fails catastrophically: you have to re-teach the entire neural network, which is really slow. Have you seen any research coming out that will hopefully fix that issue?

A: Well, in theory, this is why you want an incredibly resilient cloud environment, one that can only be provided by a massive commercial entity, because then, in theory, you can duplicate what that neural network constructed and you can bring it back online in case there's a massive failure. In theory, resiliency is really core to why the Defense Department wants some enormous cloud thing in the first place, and I think that is definitely an aspect of just general resiliency, because if you're talking about your neural network going offline, you're talking about all of your comms going; you're talking about a big potential problem that would affect not just that decision aid but potentially a whole bunch of other things.

Q: One more thing. Besides the resilience of the overall network, there's also the actual neural network itself. The way a neural network learns, teaching it something new breaks it pretty badly, to the point where it takes forever to re-teach all the lessons, you know, from father, grandfather, great-grandfather, all the steps. So is there any research coming out that you've seen that will actually address that issue in neural networking itself?

A: There would be appetite for it. Like I said, the question of whether or not the output a neural network is giving you is credible is of top concern, both for intelligence and for DoD, because they have to make decisions about it that have higher consequences than the ones folks in the private sector face when they decide whether or not to pay attention to an AI-produced or neural-network-produced output. But in general, I would say that resiliency is a core concern for them, and it's not something they've fully solved; you can always get
a better resiliency solution, so they would be open to that sort of thing. Okay.

Q: Hi, good morning. I would like to know if the Defense Department is doing research on medical robots. You might have a soldier who is injured on the battlefield, a US Marine, and the medical team can't locate them. I think that kind of development is very important. Could you see the Department of Defense looking into medical robots?

A: I think so. You know who's actually pushing ahead on that much harder? NATO. I recently sat down with General Mercier; he's a French guy, a really optimistic and upbeat French guy, like the only one, and he's the emerging-technology buyer for NATO. In their next exercise, which takes place in November, they're actually going to incorporate what you just described. They want to develop human-out-of-the-loop immediate medical emergency response: they want to be able to detect, from a soldier-borne sensor, if somebody's hurt, and then dispatch an autonomous ambulance to go pick them up. So NATO is incorporating that into an exercise later in November, and that's at least one US military partner that's experimenting with this, and I think you'll see more.

Oh, I think we're done. Wow, look at that, it's like 10 seconds left, folks. Thanks so much, I really appreciate you being here. [Applause]
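The NATO experiment described in that last answer (detect an injury from a soldier-borne sensor, then dispatch an autonomous ambulance) can be sketched as a simple trigger-and-dispatch rule. Everything below, the sensor fields, the vital-sign thresholds, the grid positions, is a hypothetical illustration added for this write-up, not anything drawn from the actual exercise.

```python
import math
from dataclasses import dataclass

@dataclass
class Vitals:
    """A reading from a (hypothetical) soldier-borne sensor."""
    soldier_id: str
    heart_rate: int   # beats per minute
    spo2: float       # blood-oxygen saturation, 0..1
    x: float          # position on an exercise grid, km
    y: float

def needs_medevac(v):
    """Hypothetical trigger: vitals outside crude safe ranges."""
    return v.heart_rate > 160 or v.heart_rate < 40 or v.spo2 < 0.90

def dispatch(v, ambulances):
    """Send the nearest autonomous ambulance, or None if no trigger."""
    if not needs_medevac(v):
        return None
    return min(ambulances, key=lambda a: math.dist(ambulances[a], (v.x, v.y)))

# Two idle ambulances at known grid positions.
ambulances = {"amb-1": (0.0, 0.0), "amb-2": (5.0, 5.0)}
hurt = Vitals("alpha-3", heart_rate=35, spo2=0.85, x=4.0, y=4.0)
print(dispatch(hurt, ambulances))  # → amb-2
```

The real open question the talk raises is not this trivial rule but whether a commander accepts running it with no human in the loop at all.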
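To make the earlier advice about Thomas Bayes concrete: here is a minimal sketch, my illustration rather than anything presented in the talk, of Bayes' rule applied to the open-source-intelligence verification problem Tucker describes, i.e. deciding how much to trust a detector that flags doctored satellite imagery. All the probabilities are invented for the example.

```python
# Bayes' rule: P(fake | flagged) = P(flagged | fake) * P(fake) / P(flagged).
# Invented numbers: 2% of incoming imagery is doctored; the detector
# flags 90% of fakes but also 5% of genuine images (false alarms).

def posterior_fake(prior_fake, p_flag_given_fake, p_flag_given_real):
    """Probability an image is actually fake, given that it was flagged."""
    p_flag = (p_flag_given_fake * prior_fake
              + p_flag_given_real * (1.0 - prior_fake))
    return p_flag_given_fake * prior_fake / p_flag

p = posterior_fake(prior_fake=0.02, p_flag_given_fake=0.90, p_flag_given_real=0.05)
print(f"P(fake | flagged) = {p:.3f}")  # → 0.269
```

The base-rate effect is the point: when fakes are rare, most flags are false alarms, which is exactly why the human verification communities Tucker mentions still matter.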
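The cloud-resiliency answer above, duplicating what a neural network has learned so it can be brought back online after a massive failure, is essentially checkpoint-and-restore. Below is a minimal sketch under assumed specifics: a tiny two-layer NumPy network and a local `.npz` file stand in for a real model and a replicated cloud object store.

```python
import numpy as np

rng = np.random.default_rng(0)

# The network's learned state is just its weight matrices.
weights = {
    "w1": rng.standard_normal((4, 8)),
    "w2": rng.standard_normal((8, 2)),
}

def forward(x, w):
    """Two-layer net: ReLU hidden layer, linear output."""
    h = np.maximum(x @ w["w1"], 0.0)
    return h @ w["w2"]

# Checkpoint: duplicate the learned state somewhere durable (here a file;
# in the resiliency story above, a replicated store in the cloud).
np.savez("checkpoint.npz", **weights)

# "Massive failure": the in-memory model is lost.
del weights

# Restore: reload the duplicated state and resume serving.
restored = dict(np.load("checkpoint.npz"))
x = np.ones((1, 4))
print(forward(x, restored))
```

This addresses the availability problem the first questioner raised; the second questioner's point, catastrophic forgetting when the network learns something new, is a separate research problem that checkpointing alone does not solve.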
