
A tech leader at Accenture shares how AI and cloud tools in the public sector can evolve with regulation: 'It's still the Wild West'

  • Tom Greiner, Accenture's lead for health and public-service tech, helps clients improve their IT.
  • Greiner opened up about the evolution of cloud technology and generative AI in the public sector.
  • This article is part of "Build IT," a series about digital-tech trends disrupting industries.

Tom Greiner joined Accenture, an international leader in information-technology services and consulting, in 1988 as a director of technology in the company's federal-services division. He spent 34 years in that role, which allowed him to work with a wide variety of public-sector clients.

He has hands-on experience managing complex projects like US-VISIT, a system deployed by the Department of Homeland Security to handle biometric-data collection at border crossings.

In 2022, Greiner became the senior managing director of Accenture's division for health and public-service technology. Drawing on his experience implementing technologies for US federal services, he helps clients in the commercial public sector and in health care employ complex information technology, including cloud tools and generative artificial intelligence.

Greiner spoke with Business Insider about digital transformations in the public sector and how they might evolve.

The following has been edited for clarity and length.

Let's talk about cloud technology. Can you explain what hybrid cloud means and its significance in the public sector?

Many of our clients still reside in their on-premises data centers, or they use a private cloud, which is somebody else's data center but not located that far away.

However, we've seen more clients start to embrace the cloud. COVID was an accelerator for that. Dollars and cents is also a driver. There's a need to treat the taxpayer dollar respectfully. Using the cloud for things like backup and recovery or storing system and security logs is much cheaper.

As more clients have built digital-facing capabilities to interact with citizens, they've seen that that build is typically done in some sort of cloud variant. Many of them, aside from federal, are focused on a single cloud provider.

But in federal, we see many multicloud and hybrid-cloud environments. They'll have data sitting in a data center. They also have some in a thoughtful hybrid-cloud environment, leveraging the cloud for certain tasks but not everything.

A few agencies are 100% cloud. They're even dabbling with polycloud, meaning a single workload spans multiple clouds. Generative AI is also an accelerator there. We're seeing clients dabble in different foundation models that sit in different clouds.
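
To make the polycloud idea concrete, here is a minimal sketch of a single workload routing requests to foundation models hosted on different clouds. The registry, model names, and invoke functions are hypothetical placeholders for whatever cloud-specific SDK calls an organization actually uses; this is an illustration of the pattern, not a description of any specific client deployment.

```python
# Hypothetical sketch: one workload fanning out to foundation models on different clouds.
# The model IDs and invoke functions are placeholders, not real SDK calls.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class ModelEndpoint:
    cloud: str                      # e.g., "aws", "azure", "gcp"
    model_id: str                   # identifier of the hosted foundation model
    invoke: Callable[[str], str]    # sends a prompt, returns generated text


def route_request(task: str, prompt: str, registry: Dict[str, ModelEndpoint]) -> str:
    """Pick the endpoint registered for this task and call it."""
    endpoint = registry[task]
    print(f"Routing '{task}' to {endpoint.model_id} on {endpoint.cloud}")
    return endpoint.invoke(prompt)


# Stub invokers stand in for cloud-specific SDK clients.
registry = {
    "summarize": ModelEndpoint("aws", "model-a", lambda p: f"[summary of] {p}"),
    "classify": ModelEndpoint("azure", "model-b", lambda p: f"[label for] {p}"),
}

if __name__ == "__main__":
    print(route_request("summarize", "citizen inquiry about benefits eligibility", registry))
```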

What concerns do stakeholders typically raise when implementing the cloud in the public sector?

In Europe, there's a data-sovereignty concern. Where's the data? Where are the people who are touching the data? Where are the people who have access to the data center?

In the United States, and in state and local organizations, clients want to know not only if the cloud is secure but also how a cloud service will secure their workload.

Clients are asking for a managed SOC, or security operations center, service and active threat hunting. Depending on the nature of the data, we've had clients ask for what we call moving-target defense. We can set up a cloud environment, and it will run for four to six hours. Then, we shut it down and set it up somewhere else. If an adversary gets into the environment, their ability to understand it is more limited.
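
Below is a minimal scheduling sketch of the moving-target idea Greiner describes: an environment is provisioned, allowed to run for a fixed window, then torn down and rebuilt in a different region. The `provision` and `teardown` functions and the region list are hypothetical stand-ins for real infrastructure-as-code or cloud SDK calls.

```python
# Conceptual sketch of moving-target defense: rotate a workload's environment
# to a new region every few hours. provision()/teardown() are hypothetical
# placeholders for actual infrastructure tooling.

import itertools
import time

REGIONS = ["us-east-1", "us-west-2", "eu-central-1"]
ROTATION_SECONDS = 4 * 60 * 60  # run each environment for roughly four hours


def provision(region: str) -> str:
    """Stand up the workload in the given region; return an environment handle."""
    print(f"Provisioning environment in {region}")
    return f"env-{region}"


def teardown(env: str) -> None:
    """Destroy the environment so any foothold an adversary gained is lost."""
    print(f"Tearing down {env}")


def rotate(cycles: int, sleep=time.sleep) -> None:
    """Cycle the environment through regions, one rotation window at a time."""
    for region in itertools.islice(itertools.cycle(REGIONS), cycles):
        env = provision(region)
        try:
            sleep(ROTATION_SECONDS)   # serve traffic from this environment
        finally:
            teardown(env)             # then move the target


if __name__ == "__main__":
    # Demo: replace the real sleep with a no-op so three rotations run instantly.
    rotate(cycles=3, sleep=lambda _: None)
```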

How does government regulation impact the public sector's use of the cloud differently from the private sector?

There's still confusion in Europe because countries have their own opinions on what the General Data Protection Regulation, or GDPR, should mean, and there's not a uniform point of view on what is right to meet the specifications.

Each country is coming up with its own interpretation, and sometimes countries look to the first mover for guidance. So if Germany went first, another country will look at that and decide, yes, that's good enough, or no, we want more.

Getting back to the United States, I think a helpful move was the establishment of a minimum set of federal security requirements through the National Institute of Standards and Technology. It's a commonly understood standard that commercial entities look toward.

I think the last couple of federal administrations have wound down their emphasis on the NIST Cybersecurity Framework as something they need to keep current, and that was a surprise. The implication is that security, like beauty, is in the eye of the beholder, and individual variances are increasing friction and cost in the market a bit.

A lot of industries would be fans of the administration again stamping the NIST Cybersecurity Framework as the go-to standard for being good enough to do business with the US federal government.

What's the current demand for AI, particularly generative AI, in the public sector?

For AI in general, I'd say it's high and has been for a number of years. With generative AI, I think there's both optimism and caution. 2023 was a year of small experiments and exploration. In 2024, we needed to think about it institutionally.

We've worked with states and localities on setting up Centers of Excellence to understand different models, provide coaching on responsible AI, and exchange best practices and reusable capabilities across agencies.

Cloud went through a similar process, and I think lessons were learned that are now applied to generative AI. Cloud started as a bit of a free-for-all, and then common infrastructures and security capabilities came later. With generative AI, organizations are trying to solve citizens' problems but also to start with direction on how to do it in a fair and equitable way.

Can you provide examples of how generative AI is deployed in the public sector?

We completed a program with the District of Columbia Department of Health that created a chatbot for citizens and staff to ask questions about services and program qualifications. That was our first implementation of Amazon Web Services' Bedrock using a retrieval-augmented-generation solution.

We uploaded their documents and website information into a closed model to guide citizens' questions. RAG is common now, and in the last three months, we've added knowledge graphs for better contextualization and correlation of documents.
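
For readers unfamiliar with the pattern, here is a minimal sketch of querying a retrieval-augmented-generation setup through Amazon Bedrock Knowledge Bases with boto3. The knowledge-base ID and model ARN are placeholders, and this is an illustrative sketch of the general RAG pattern under those assumptions, not the DC Department of Health implementation.

```python
# Minimal RAG query sketch against an Amazon Bedrock knowledge base (boto3).
# The knowledge-base ID and model ARN below are placeholders -- substitute
# values from your own AWS account before running.

import boto3

KNOWLEDGE_BASE_ID = "EXAMPLEKBID"  # placeholder
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-haiku-20240307-v1:0"

client = boto3.client("bedrock-agent-runtime", region_name="us-east-1")


def ask(question: str) -> str:
    """Retrieve relevant document chunks and generate a grounded answer."""
    response = client.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]


if __name__ == "__main__":
    print(ask("What services does the health department offer, and who qualifies?"))
```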

Generally, I'm still seeing one-off use cases. It's about experimentation, not organization-wide fundamental change. The public sector doesn't have the competitive pressure that commercial does to reinvent, as it's sort of a monopolistic entity by design.

However, budgets are tight. If public-sector organizations can free up staff from mundane activities, that's important. It's big in public health, for example, where there's a nursing shortage across the globe. Helping someone coming in for surgery ask all the questions they would want to ask, in a conversational sort of way, could probably off-load 80% of the patient prep and free up a nurse for other tasks.

Are general attitudes toward generative AI receptive or cautious in the public sector?

I'm seeing that people are encouraged. That's because agencies are picking off use cases where they're confident they can deliver immediate value and build positive momentum. They're going for quick, easy wins, the parts of the job that people hate to do.

It works best in organizations with a responsible AI framework in place or that are working with providers that bring that as part of what they do. However, it's still the Wild West, so many organizations can do what they want, and use whoever they want, and get whatever results they're going to get. I suspect we'll see a mixed bag.

Still, I think there'll be enough success that AI will continue to permeate and cross-pollinate across states and localities.

Read the original article on Business Insider
