June 11, 2024

AI Spotlight: Use Cases in the Department of Defense

This week, Bonnie joins GovExec's AI Spotlight Series, hosted by George Jackson, for a roundtable discussion on the DoD's role in AI with Tyler Sweatt from Second Front Systems, Jason Preisser from DARPA, and Alexis Bonnell from AFRL. Together, they explore the transformative potential of AI, focusing on AI adoption, process innovation, and the cultural shifts that need to happen within the DoD to leverage AI capabilities. Tune in for an insightful conversation on how AI can revolutionize the defense space, if we let it.

TIMESTAMPS:

(4:25) How do we increase acquisition speed?

(6:02) The unsexy reality of government’s use of AI

(8:49) Why the department still operates from an industrial age approach

(12:13) How to cheat time in procurement

(18:10) Why we need to focus on governance and attention as a currency

(20:05) Three key decision-making traits to push innovation forward

(27:22) Integrating AI with mission and people at scale

(30:20) Why the DoD needs to guide industry

LINKS:

Follow Bonnie: https://www.linkedin.com/in/bonnie-evangelista-520747231/

Follow Tyler: https://www.linkedin.com/in/tylersweatt/

Follow Alexis: https://www.linkedin.com/in/alexisbonnell/

Follow Jason: https://www.linkedin.com/in/jason-r-preisser-05878220/

Follow George: https://www.linkedin.com/in/george-jackson-70b37b2a2/

CDAO: https://www.ai.mil/

GovExec: https://govexec.com/

Transcript

George Jackson [00:00:00]:
Hello, and welcome to Artificial Intelligence in the Public Sector: AI Spotlight, Use Cases in the Department of Defense, the third episode in our second season, produced by GovExec TV in partnership with the National Academy of Public Administration. I'm your host, George Jackson. As we continue our series on AI and the public sector, we want to highlight some federal agencies, the role they play in the emerging AI landscape, and some of the ways AI is already being utilized in their day-to-day work and larger mission activities. Today, let's spotlight the Department of Defense. Let's kick things off with Jason Preisser. He is director of the Mission Services Office at DARPA, the Defense Advanced Research Projects Agency.

George Jackson [00:00:48]:
Sir, welcome.

Jason Preisser [00:00:49]:
Thank you.

George Jackson [00:00:50]:
Bonnie Evangelista is acting chief digital and AI officer for acquisitions for the Department of Defense. Ma'am, welcome.

Bonnie Evangelista [00:00:58]:
Thank you.

George Jackson [00:00:58]:
Tyler Sweatt, CEO of Second Front Systems. Good to see you, sir.

Tyler Sweatt [00:01:02]:
Good to see you. Thank you.

George Jackson [00:01:03]:
And back, a glutton for punishment for a second season, the chief information officer and director of the Digital Capabilities Directorate at AFRL, Alexis Bonnell, NAPA fellow. Welcome back.

Alexis Bonnell [00:01:17]:
I couldn't stay away, George.

George Jackson [00:01:20]:
So kick things off for us. Before we went on the air, you mentioned you want to avoid an intellectually lazy conversation around AI. What does that mean?

Alexis Bonnell [00:01:31]:
Yeah, I mean, I think there have been a lot of really necessary and critical conversations around AI, whether those are things like our intentionality around trust or ethics or any of those. But what I really get excited about, George, is how this technology relates to our mission, to our people. And what we're excited about at AFRL is really this idea that it introduces the ability for us to have a fundamentally different relationship with knowledge, and quite frankly, at the speed and the scale that we need to in this day and age. And so, you know, when I think about all of my work as CIO, but especially AI, it's this opportunity for my people to be more curious than ever, right, to be able to harness more information at speed than ever, you know, to be able to ask questions more naturally and come to the space and to the mission, you know, just with more curiosity. And to me, what we really have to do now, if we think about it from an adversarial context, is ask and answer "what if?" faster than the adversary, right? And so I really think that if we think about AI, and we have the conversation around what's the relationship with knowledge we want to have, that becomes a much more interesting conversation and, quite frankly, much more about the people and the mission that I really care about.

George Jackson [00:02:48]:
Could you continue, Jason, to demystify artificial intelligence a bit? I mean, the conversation, to Alexis' point, has taken on kind of these mythical proportions, and at its root, it's just math, right?

Jason Preisser [00:03:03]:
Well, so I think for DARPA, the main thing that we're focused on is increasing velocity through the process, from ideation to getting performers or contractors on contract and doing that national security research. So the way we're looking at AI, from a business process automation, AI, and machine learning perspective, as well as leveraging large language models or small language models, is: where in that process, where in that critical path, can we increase that velocity? So that's where we're holding workshops with all the functional leads, whether it be contracting, the comptroller's office, security. They all have a role in getting those performers doing that national security research. So as I think about AI, it's simply that: where and how can we leverage this new artificial intelligence tool that's out there, or whatever you call it, a knowledge base? It could be many different things to help increase that velocity, to increase our productivity and move things faster through that chain.

George Jackson [00:03:57]:
Bonnie, how do you look at that? Increasing velocity starts often with acquisition. One thing that you mentioned before we went on the air was that end user practices are very different in artificial intelligence. Could you connect those two elements, like, let's increase velocity, starting with acquisition. But how do you think about the end user while you're doing it?

Bonnie Evangelista [00:04:23]:
Yeah. What I liked about what Alexis was saying was, in her setting, which is what I would call a lab setting, the big, big thinkers like DARPA are trying to imagine the future. But I think now, with AI, all of us are empowered to imagine the future. So every practitioner in any functional lane needs to have that space to have a different relationship with knowledge and imagine: what would make my life better? How do I reduce friction, or as Alexis likes to say, how do I reduce toil in my life, and then go after it, and then I need acquisitions or I need support to get after that. Everything AI is there to do has to be informed by the end user, the functional practitioner who, at the end of the day, has to do the job. That has to come first. That focus has to be there.

Bonnie Evangelista [00:05:13]:
I think a lot of us are trying to put that focus there. It requires a huge mindset shift, though, because a lot of our systems are not built or designed with the human in mind up front.

Alexis Bonnell [00:05:28]:
Yeah, I mean, I think one of the things I want to call out, and just commend Jason and Bonnie on, is really taking that technology to the person. Right. How does this make my life easier? How does it make me more? So Jason's effort to kind of say, well, let's look at that individual level, at the team level or the mission level or the practice level, but with people, and say, no, really, how does it reduce toil? What would you like to be able to spend your time doing? I actually think in many cases we tend to lump all AI together. Right. And the reality is that it may not be sexy, but I think the place that government is going to make the most headway is actually in that business process, in the administration, in the acquisition, because the relationship with knowledge there, that's the bread and butter of public service, really. And so there are going to be lots of incredible places. We have autonomous wingmen; we have other places with operational impact of AI. But what I love is the fact that we're getting to this point now where it's really like, let's be realistic about what we do in government and let's do it better and let's empower our people.

Alexis Bonnell [00:06:33]:
And most importantly, one of my takeaways, and why I'm so excited to hear the business case kind of get more glory, is my finding that toil eats purpose faster than mission can rebuild it. And so, like, you know, anyone who's used a generative AI tool knows there is joy, there is flow, when you get to exercise curiosity, right, when you get to kind of nerd out on something, or when you get to realize you didn't have to do something the hard way. And we should really revel in wanting our public servants to have those moments of joy. Those are moments of purpose.

Bonnie Evangelista [00:07:12]:
That's interesting, because, again, mindset shift. I think a lot of us, especially in the department, are used to being told what's good for us, and now we're asking you to tell us what's good for you. And that's a huge shift, and very exciting. That does create a lot of passion and empowerment that I think a lot of people are not used to. My team has been used to it for a few years now because that was our mission. We had to go after this, but now we're trying to proliferate that in a way where other people can start to feel what Alexis is talking about.

George Jackson [00:07:44]:
Tyler, I'd like you to maybe personify this for us a little bit. I mean, I mentioned CEO of Second Front Systems. You have such a unique background to kind of move into that position. Talk a little bit about how you got there and how the business processes that Alexis calls out are part of your portfolio or focus these days.

Tyler Sweatt [00:08:09]:
Yeah, I think there are two parts there that I'll try to sort of briefly touch on. And one, I think, is a synchronization challenge. Right. Like, the main difference between the private sector and the public sector, especially if you're earlier stage or venture backed, is the relative value of a minute, right. The most valuable thing in my world is finding ways that I can buy back time, to either make things go faster or make things take less time. That is embedded, hard-coded in the DNA corporately. You then look at the department, which is still operating in an industrial age approach, where the system for requirements, one, is disaggregated from users.

Tyler Sweatt [00:09:01]:
I've got all of these different people. It's somebody's job to take the requirement from the user to the slide maker, and then it's someone's job to take the slides to the slide reader. And by the time it gets up into JCIDS, the JROC, or all this, we're on these ten-year acquisition cycles where the things we're talking about are happening in seconds, fractions of seconds. So there's a synchronization problem there. And then, for how you think about speed and scale and resiliency and sort of like organizational lattices, I think Jensen at Nvidia is a really good proxy. When you listen to him talk about leadership philosophy, it's all about removing information asymmetry. And he's like, I will give the same answer in the room to a college intern as to my C-suite. And what that does is it just sucks the air out of toil, because most of toil is imperfect information.

Tyler Sweatt [00:10:02]:
Maybe a little bit of like organizational politicking and jostling, and then if you just assume positive intent, they're taking imperfect information and trying to extrapolate that down. That's a relatively simple problem to solve. It is a massive cultural transformation.

Alexis Bonnell [00:10:20]:
And to follow that, I mean, time is one of those things that we talk about, but I think, to Tyler's point, we don't appreciate our relationship with it. And so coming back into DoD, what I was struck by was: what would it look like if we treated time like a weapons platform? Right? Like, what would it look like if a minute was as important to us as a missile? And I think one of the things that happens is that, especially in times of peace, people generally have maybe a low tolerance for risk. And so we fill those minutes. It's just an hour. It's just a form, it's just a training. It's just this and that. And it just starts to be more than the mission.

Alexis Bonnell [00:11:05]:
And so I think for us at the Air Force Research Laboratory, the way I talk about it with my team, and I think AI is critical here, is how do we create wormholes, right? Meaning, if this is our current state and we want this to be our future state, how is it that we respect time, right? How is it that we wield it as a weapons platform? Because the interesting thing is, when conflict really happens, to Tyler's point, we're going to throw all that out the window. We've seen it even, from a timely lens, with this recent bridge collapse. For the economy of this area, and the tragedy of it, the question is going to be: are we going to leave all of the government protocols and regulations in place, or does that bridge need to get back up? I think what's really interesting about AI and the relationship with knowledge at speed is: what does that allow us to do now, right, as we ask "what if" faster, as we hear from people, what is different?

Bonnie Evangelista [00:11:59]:
I'll give you a boring one, because, to your point, getting the boring stuff right is what's going to take us to the wormholes you're talking about. So my team, we started with that in mind. In hindsight, we didn't use that language, but we were trying to cheat time. We would imagine things that people would say were seemingly impossible in a functional area like procurement and acquisition. And we started with procurement strategically, because that is arguably one of the most critical, or criticized, parts of the process. And we said, how do we do a same-day award? How do I award in hours or minutes? And again, maybe the goal wasn't to get there. The glory doesn't happen overnight.

Bonnie Evangelista [00:12:44]:
But then you start to reduce friction. You start to figure out where you have imperfect information in the process, like you were talking about. And maybe we didn't get to a same-day award, but we got to a 30-day award or a 14-day award, and I heard someone in the Air Force got an eight-day award. So it just starts to build and build, and the practitioner becomes the owner of the process rather than being, you know, maybe a victim of the process or subjugated to it.

Jason Preisser [00:13:14]:
If I can jump in and just kind of pull that thread a little bit more on the time continuum, because I think at DARPA, it's unique that we make time the forcing function: every PM that comes to DARPA is on a three-to-five-year term assignment. So they've got an expiration date on their badge that stares at them every time they clock in and out of the building every day. And so that drives a culture at DARPA, especially on the mission services side of the house, where we do all the enabling functions that creatively enable those PMs to get done what they need to get done. And so that's kind of our starting point. A lot of the career staff aren't necessarily in term positions, but those PMs are, and we've got to move at the speed of relevance to meet their demands. And so one example of where we're applying that is we've got, on my staff, a lot of professional security folks that are embedded with our PMs, that do the horizontal protection mission in classifying our different levels of research, from the secret collateral or fundamental research level all the way up through special access programs. And so we're actually looking at how we apply an AI solution to how they identify critical program information across a data set of security classification guides across the department, and how they can do that much more quickly. But the important part is that it's fleshed out a process.

Jason Preisser [00:14:25]:
So we did it as kind of a pilot or experiment. But I think the important piece is the process of where and how you engage the user, the subject matter experts in security, to build an AI tool that will actually make them go faster. And that's exactly what we're doing with our security classification guide process, where the security professionals who sit with the program managers, figuring out at what level this research needs to be classified, can now use large language models to scour the security classification guide databases, identify CPI, and then the model will actually generate a draft security classification guide. I think another important aspect of time is measuring how much time you saved. When you're talking about things in minutes, what's your key performance indicator that demonstrates that you're actually successful at leveraging that tool? Because it's not the end-all, be-all for everything. And so as part of this, we're looking at how we engage those users and set up a process where we can go from early engagement with them, on defining what and how we can improve, to measuring how well we did.
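For readers who want a concrete picture of the workflow Jason describes, here is a minimal, hypothetical Python sketch. Every name in it (llm_complete, ScgDraft, and so on) is an illustrative placeholder rather than DARPA's actual tooling; it only shows the shape of the idea: pull prior guides, ask a model to flag candidate CPI, generate a draft for a human security professional to review, and log a simple time-saved metric as the KPI he mentions.

```python
# Hypothetical sketch of an LLM-assisted security classification guide (SCG) workflow:
# retrieve existing guides, flag candidate critical program information (CPI),
# draft a new guide, and record how long the automated portion took.
# All names here are placeholders, not a real DARPA system or vendor API.

from dataclasses import dataclass
from typing import Callable
import time

@dataclass
class ScgDraft:
    program: str
    candidate_cpi: list[str]
    draft_text: str
    review_required: bool = True  # a human security professional always reviews the draft

def draft_scg(program: str,
              research_summary: str,
              scg_corpus: list[str],
              llm_complete: Callable[[str], str]) -> ScgDraft:
    """Draft a security classification guide for one program."""
    start = time.monotonic()

    # 1. Ask the model to identify candidate CPI, grounded in excerpts from prior guides.
    cpi_prompt = (
        "Given these excerpts from existing security classification guides:\n"
        + "\n---\n".join(scg_corpus[:20])  # keep the context small for illustration
        + f"\n\nList candidate critical program information for: {research_summary}"
    )
    candidate_cpi = [line.strip("- ").strip()
                     for line in llm_complete(cpi_prompt).splitlines() if line.strip()]

    # 2. Ask the model to produce a draft guide from the candidate CPI.
    draft_prompt = (
        f"Draft a security classification guide for program '{program}' "
        "covering this candidate CPI:\n" + "\n".join(candidate_cpi)
    )
    draft_text = llm_complete(draft_prompt)

    # 3. Record a simple KPI: wall-clock time for the automated portion,
    #    to compare against the baseline manual drafting time.
    elapsed_minutes = (time.monotonic() - start) / 60
    print(f"[kpi] automated drafting took {elapsed_minutes:.1f} min for {program}")

    return ScgDraft(program=program, candidate_cpi=candidate_cpi, draft_text=draft_text)
```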

George Jackson [00:15:29]:
Let's stay on this, Tyler. I mean, we've just heard two use cases in the Department of Defense. Would you like to call one out, too? Something that you're really proud of or think speaks to our conversation here.

Tyler Sweatt [00:15:41]:
I think there are a couple initiatives and conversations that I've seen recently that are really, really exciting. And I'll frame this with: when we talk about the challenges of AI adoption, I think it's important for us to think about that through the same lens as the challenges of cloud adoption. If we're going to say cloud's adopted, everybody fought that; a couple large organizations were able to drive that, at least through the organization. The department does not have a good track record with technological or conceptual adoption. It transmits really well and says, we're interested in X or we're interested in Y. Where I think we're seeing really interesting conversations start to take shape is around user experience, and user experience as it translates into policy. The question that doesn't get asked is: as we walk across all of these giant placemats of process and reports and boards and all this great governance, how, why, who? And bringing that down by persona, by journey, by step, there is a massive efficiency drill.

Tyler Sweatt [00:16:58]:
And the conversations that need to get that started are happening now in louder pockets than they were. It's not a one-off in the corner here, it's not a one-off in the corner there. So that aspect gives me a bunch of hope for what sort of comes in the next cycle department-wide, because that'll be a cultural transformation. And then the second one is sort of a mashup, and it's not just because she's sitting next to me. But if, two years ago, you had said, here's what Tradewinds is going to be doing from a procurement standpoint for emerging technology, like, at scale, and the departmental embrace of it, nobody would have believed you. And if you had told me two years ago that DIU was going to get an $800 million-plus line in the NDAA, nobody would have believed you.

Alexis Bonnell [00:17:46]:
Right.

Tyler Sweatt [00:17:46]:
Because that's an ability for us to deploy capital and remove friction from procurement pathways specifically targeted at new and emerging capabilities, technologies and providers. That is actual progress and meat. It's not just words, and it's not just sort of like billboards of, hey, you know, come send us your new technology. They're putting muscle behind it. It's pretty exciting.

Alexis Bonnell [00:18:10]:
Yeah. I mean, to follow that, I think one of the things that I hear you saying, Tyler, is that it's also about people. Right. All great public service and great governance is ultimately about people who are like, that can be better, that should be better, right? And you start to see that manifest and it becomes more believable, right, that it can be better, it can be different. But to follow maybe both Tyler and Jason: if we think about time as a weapons platform, and we think about that intentionality, and we kind of think about never being satisfied, how can it be better? And then Jason calling out, really, what is that toil calculation, right? What does it cost me to add another step to the RMF process, and what does it gain me to take it away, right? And I think what's really interesting is the opportunity we have right now, because of the speed and the information that AI can put at our fingertips: we have to kind of realize we're trading in attention. Attention is literally the currency of the modern world, right. There's a reason, for me, coming from Google: there's a reason you become a nation-state actor when you are dealing with attention, and that's going to be true and continue to be true.

Alexis Bonnell [00:19:21]:
In national security and in normal economics. But I think, for us as public servants, when you see these types of leaders saying, okay, what does it cost us to not do something, or what might it give back to our people to do it in this way? What I'm hoping to continue to see more of is that, because it also helps us, I think, make investments differently. Right. It's one thing to say you should be doing some AI. It's another to say, I'm going to get 5,000 hours back to my procurement people, and guess what? That's worth x to me. And I think that intentional calculation is where we're starting to get to. But even more so, I think we have to recognize, and this is true for most of us in innovation, that doing nothing is also a risk. Right.

Alexis Bonnell [00:20:05]:
A lot of times when we try to make the case, there's this assumption that how we do things now isn't risky. And I think that goes to the idea of where do we need to be, and a lot of us don't even know. I cannot tell you what the technology is going to be six years from now, or what adversarial context we're going to be looking at. Kudos to anyone who thinks they can. Right? But we have to have that curiosity. But we also have to say, well, what's it going to cost us? What is beneficial? And I think what we find when you break down that math is it actually becomes a lot less anxiety-creating. It's like, well, you know what, 5,000 hours of procurement time back is not only worth x as far as the salary calculation, but it's worth y as far as getting things faster, right, making decisions and moving. And so I'd like to hope, or I'm predicting, that if you just see the leaders that are represented here, that intentionality is starting to come into play.

Alexis Bonnell [00:21:00]:
And I think, you know, that opportunity for AI to let us have a different intentionality, to let us understand more about the why we're making these investments is really the era that I think we're entering.
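As a rough illustration of the "intentional calculation" Alexis describes, here is a tiny, hypothetical Python sketch that prices returned hours against a loaded labor rate, plus a separate estimate for the value of faster decisions. Every number in it is a placeholder, not a figure from the conversation.

```python
# Hypothetical back-of-the-envelope "toil calculation": salary value of hours
# returned to staff (the 'x') plus mission value of faster decisions (the 'y').
# All numbers below are placeholders for illustration, not real DoD figures.

def toil_value(hours_returned: float,
               loaded_hourly_rate: float,
               decisions_accelerated: int,
               value_per_faster_decision: float) -> float:
    salary_value = hours_returned * loaded_hourly_rate
    speed_value = decisions_accelerated * value_per_faster_decision
    return salary_value + speed_value

# Example: 5,000 hours back at a $120/hr loaded rate, plus 50 awards that each
# moved faster, valued (arbitrarily here) at $25,000 apiece.
print(f"${toil_value(5_000, 120.0, 50, 25_000.0):,.0f}")  # prints $1,850,000
```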

George Jackson [00:21:12]:
Yeah, I think it's a good time to introduce something that you said before the cameras started rolling, because I think it speaks to exactly what Alexis is talking about here: people asking the right questions, using AI as a vehicle to identify gaps that you need to address. Could you talk a little bit about how you or your team query?

Bonnie Evangelista [00:21:41]:
Right now, learning is done in a couple different ways. I'm going to speak to my functional lane, but I would imagine it looks very similar in most functional lanes. You either have an institution that provides content and you learn from the institution, or you have experiential learning, on-the-job training: you are put in situations based on your mission where you have to figure things out. Myself and a lot of others on my team have gotten to where we are because of experiential learning. What we're trying to offer, and this goes back to where you started, is having a relationship with knowledge. Alexis had an interesting concept: we have this need to know what's behind the tech, behind the black box. But we should consider ourselves as a black box.

Bonnie Evangelista [00:22:29]:
No one understands how we as humans think, and no one understands how I make decision A or B. But with tools like AI, you can get some exposure and understand, from a knowledge management perspective, where people's knowledge gaps are. So in terms of change management, or upskilling people into a lot of the things that we're talking about, because that is also key to this transformation that Tyler talked about, you can't just throw the tool out there and expect people to use it because you think it's a good idea. The user experience matters, but so does making sure people understand: if I'm expected to be querying in order to have a relationship with knowledge, do I understand how that works? And is there new friction because I have to learn a new skill, essentially? I have to trade in old skills for new skills. And I have lots of opinions on this, because one of the criticisms is that AI is going to make you stupid. And I would argue that's not quite the case.

Bonnie Evangelista [00:23:31]:
It means we're exercising different parts of our brain and we have to think differently, potentially. So, all this color and context to say, as an example: if you have a tool where you can query a knowledge set, you now have a way of tracking and understanding where people's gaps are, because you know what they're asking. And so now you can provide content, or you can address areas where you would like to see different actions or behaviors, based on the types of questions people were querying. We can track that now; we know it, and it's transparent. We've talked about this, Jason.
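To make Bonnie's point concrete, here is a small, hypothetical Python sketch of what mining query logs for knowledge gaps might look like. The log format, roles, and topic keywords are invented for illustration; a real system might use a classifier or embedding clustering instead of keyword matching.

```python
# Hypothetical sketch: aggregate what users ask a knowledge tool and surface the
# topics they query most, as a proxy for where guidance or training is needed.

from collections import Counter
from dataclasses import dataclass

@dataclass
class QueryRecord:
    user_role: str   # e.g. "contracting officer" (illustrative)
    text: str        # the natural-language question the user asked

# Very rough keyword-to-topic mapping, purely for demonstration.
TOPIC_KEYWORDS = {
    "other transactions": ["ota", "other transaction"],
    "commercial solutions openings": ["cso", "commercial solutions"],
    "source selection": ["source selection", "evaluation criteria"],
    "justifications & approvals": ["j&a", "sole source"],
}

def tag_topic(text: str) -> str:
    lowered = text.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return topic
    return "uncategorized"

def knowledge_gap_report(queries: list[QueryRecord]) -> Counter:
    """Count queries per topic; heavily-queried topics suggest likely gaps."""
    return Counter(tag_topic(q.text) for q in queries)

if __name__ == "__main__":
    sample = [
        QueryRecord("contracting officer", "How do I structure an OTA prototype agreement?"),
        QueryRecord("program analyst", "What evaluation criteria apply in source selection?"),
        QueryRecord("contracting officer", "When is a sole source J&A required?"),
    ]
    for topic, count in knowledge_gap_report(sample).most_common():
        print(f"{topic}: {count} queries")
```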

George Jackson [00:24:08]:
Look around this group and the past guests that we've had on this series, titles are changing. There's a lot of digital, there's a lot of data, there's a lot of AI in the titles that I'm seeing for very senior department of Defense leaders. Connect those dots for me because data is really something we haven't called out specifically thus far, but it's central to this conversation as well. How do you use it? Where are the gaps? How do you interpret it?

Jason Preisser [00:24:42]:
Yes, I think from a DARPA perspective, when we're looking at business process innovation, the data is still in its infancy, and we're trying to look at what data is available. The example I gave before was leveraging an existing database, so it was already curated data that we could fairly easily apply large language models to, to really make it a more robust database and tool for the professional security staff. So I think we're still learning as to what kind of expertise we need and where we need to place that within our organization. DARPA is very small, only about 240 government folks. That gives us an advantage in that we can be very agile and nimble in how we approach data. But I think that's one thing we're going to have to really analyze over time: what data sets are we going to need to use, and how much curation is required to make that useful.

Jason Preisser [00:25:36]:
So we're still formulating what that governance structure looks like, what those positions look like. Do we need data analytics people? Do we need AI people? Do we need process automation and AI-type folks? And we're just starting new programs, if you will, for developing those subject matter experts from the ground up. DARPA really never had internships, so we're actually looking at leveraging different scholars programs to bring in business process automation and AI folks, or recent master's graduates, to start to develop those skill sets from the ground up and really embed them, like I said before, with those subject matter experts across the functional areas, figuring out where we can tease out these process efficiencies and save us time going forward.

Alexis Bonnell [00:26:20]:
That's so cool. I mean, I think one of the things that it also makes me think about is that oftentimes people ask me, oh, you're a woman in tech, there's not as many of you as there should be, and things like that. And I think generative AI is a really interesting moment for us. So you asked Bonnie about querying, and Jason made the point about interns and bringing in new folks: one of the things that I always try to look at is, what is the human experience? And recently I've been playing around a little bit with looking at the quality of the queriers, and everyone assumes it's called prompt engineering. So everyone assumes that you have to have a coder. Yeah.

Alexis Bonnell [00:26:57]:
New term, prompt engineering. And that you've got to be a coder or a data scientist, you have to have that, and that's what you've got to do to be good. And quite frankly, what I'm finding is that the lawyers are phenomenal, the PR people are phenomenal, because words, thinking conceptually in words, and natural language is their natural state. Right. And so I think about what it's really introducing. For me, a lot of people don't know, I'm a PR and advertising major. How did I end up being the CIO, right?

Alexis Bonnell [00:27:22]:
Of AFRL? But I think it's this magical moment where everyone is relevant to knowledge, right. And as you introduce a tool that makes it more accessible for someone to exercise their expression, not in code but in words, what an amazing kind of power, secondary power, third power, to be able to bring more people into being part of this digital leadership and digital public service. And so I think it's going to be very interesting to see, ten years from now, who's considered a technologist, based on this technology becoming easier and, quite frankly, more inclusive. I think the other element on the data side that is really interesting is that we're having a moment, in Alexis' opinion, of data identity. And what I mean by that is that typically, our relationship with knowledge and data in government has always been one of control, right? We want to perfectly clean it and structure it and organize it, you know, it's got to be a hundred percent. And I actually think for many data officers, there's a lot of anxiety rolled up in that perfection, right? Is it good data? Is it not good data? And so I think we should recognize that it's a change for us to say, well, what does it mean to put all the data on the table, right, to actually unlock all of our treasures, structured or unstructured? And what does it mean, then, for us to shift our mentality and make choices? Meaning, for some things, if we're doing an audit, we probably want two plus two to equal four. But if we're going to ask big what-if questions, I don't just want the controlled information. That's not enough of our knowledge treasure, right, for me to do the best of my job.

Alexis Bonnell [00:29:05]:
So I say that because I think, for myself as a data lover, there's an identity crisis in needing to move from having a controlled relationship with data to a curiosity-driven or catalytic relationship with data. And I think we shouldn't discount the change we're asking people to make in their values and in their mindsets as they exercise that.
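As a purely illustrative aside on the "prompt engineering" Alexis mentions, here is a small Python snippet contrasting a vague query with a refined one. Both prompts are invented examples; the point is that the refinement is careful natural language, words rather than code.

```python
# Invented example of how a non-programmer might sharpen a query to a generative
# AI tool. Neither prompt comes from the conversation; they only illustrate that
# the skill is mostly precise wording, not coding.

vague_prompt = "Tell me about our acquisition policy."

refined_prompt = (
    "You are assisting a DoD contracting officer. Using only the attached policy "
    "excerpts, list the approval steps for a commercial solutions opening, cite "
    "the section for each step, and flag anything the excerpts do not answer so "
    "a human can follow up."
)

# The refinement adds a role, source constraints, an output structure, and an
# explicit escape hatch for missing information, all in plain words.
print(refined_prompt)
```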

George Jackson [00:29:29]:
I'd like to close with you, Tyler, but before I do, I'm going to have our audience just triple down on that term, prompt engineering. Maybe there's a little graphic that can pop up underneath here. Tyler, we've kind of danced around this thematically, you know, velocity, this culture clash, or use a different term, between DoD and Silicon Valley, speeding up the acquisition process. Leave us with one nugget of wisdom to sort of improve that aspect of things. What should DoD leaders in artificial intelligence do right now to kind of reach those larger goals?

Tyler Sweatt [00:30:20]:
It's a really good question. I think my parting sort of advice would be: DoD needs to impart some opinionation on how they want industry to come. And I don't mean, hey, come to this innovation office or that one, but at a technical level. The relationship between departmental data and infrastructure and commercial software: what does that mean for data access? What does that mean for intellectual property? What does that mean for derivative products, especially as we're talking about models and ensembles and how they actually behave in the wild? There's an opportunity for the department to extend that opinionation, which will create technical guardrails for industry to come in and meet the department how it wants to be met, versus the way it's happening now, which is more of a "there's 70 different policies that are all in conflict with each other, and you get different answers based on who you ask questions to." That's a really difficult sort of terrain to navigate as a private company. So that would be my kind of parting gift to the department.

George Jackson [00:31:35]:
Tyler Sweatt, CEO of Second Front Systems; Jason Preisser, director of the Mission Services Office at DARPA, the Defense Advanced Research Projects Agency; Bonnie Evangelista, acting chief digital and AI officer for acquisitions at DoD; and my esteemed co-host Alexis Bonnell over at the Air Force Research Laboratory. Thanks, everybody, for being here.

Bonnie Evangelista [00:32:00]:
Thank you.

George Jackson [00:32:03]:
And thank you, our audience, for tuning in. I hope you're as excited as I am to continue exploring AI in the public sector. Until next week, I'm GovExec TV's George Jackson. Have a great day.