Over the past year, generative AI went from a major research advance to something that directly affects real people and products. The multidimensional interplay between gen AI and security means we must both protect AI-powered workloads against old and new risks and apply gen AI to solve security problems. That collision poses new challenges to overcome:

People expect AI to deliver on the initial hype with enduring value. Organizations and users expect AI-powered assistance to be more than a simple productivity boost; it needs to consistently provide correct answers and insights, and maintain context to actually get smarter over time. 

Foundation models aren’t enough on their own to solve many real-world security problems. Organizations and practitioners have realized that tackling most tasks requires combining multiple, often specialized, models with planning, orchestration, extensions, external memory, and other techniques. Some problems may require open-ended agents that run indefinitely to advance specific goals, as sketched below.
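
To make that pattern concrete, here is a minimal sketch, in Python, of an agent loop that combines a planning step, tool orchestration, and external memory. The model call is stubbed out, and every name here (Memory, lookup_ioc, plan_next_step, run_agent) is an illustrative assumption rather than a real product API.

```python
# Illustrative sketch only: a tiny agent loop pairing a (stubbed) planning
# model with tool orchestration and external memory. Names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class Memory:
    """External memory the agent reads from and appends to across steps."""
    facts: list[str] = field(default_factory=list)

    def recall(self) -> str:
        return "\n".join(self.facts)

    def remember(self, fact: str) -> None:
        self.facts.append(fact)


def lookup_ioc(indicator: str) -> str:
    """Hypothetical specialized tool: check an indicator of compromise."""
    return f"{indicator}: no known matches"


TOOLS = {"lookup_ioc": lookup_ioc}


def plan_next_step(goal: str, memory: Memory) -> dict:
    """Stand-in for a planning model; a real system would prompt an LLM here."""
    if not memory.facts:
        return {"tool": "lookup_ioc", "args": ["198.51.100.7"]}
    return {"done": True, "answer": f"{goal}: {memory.recall()}"}


def run_agent(goal: str, max_steps: int = 5) -> str:
    """Plan, call tools, and persist results until the goal is answered."""
    memory = Memory()
    for _ in range(max_steps):
        step = plan_next_step(goal, memory)
        if step.get("done"):
            return step["answer"]
        result = TOOLS[step["tool"]](*step["args"])
        memory.remember(result)  # persist tool output for later planning steps
    return "step budget exhausted"


if __name__ == "__main__":
    print(run_agent("triage suspicious IP"))
```

In practice the planning stub would be replaced by calls to one or more models, and the tool registry and memory store would be far richer; the point is that the value comes from the loop and its surrounding components, not from a single model call.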

Google Cloud’s Product Vision for AI-Powered Security
