AITech Interview with Kevin Bocek, Chief Innovation Officer at Venafi

Hello Kevin. We are very excited to have you onboard. Could you tell us about the journey that began in cybersecurity and has now led to you becoming a renowned author?

I started my career as a developer and found myself working most often on cybersecurity issues. The applications I was building needed to authenticate users and their inputs – and the security behind those applications soon mattered more than the applications themselves. I now have more than 25 years of experience in cybersecurity, working with industry leaders like RSA Security, PGP Corporation, IronKey, CipherCloud, and Xcert. As the Chief Innovation Officer at Venafi, a CyberArk Company, I head up machine identity security for workload identity, Kubernetes, and artificial intelligence. I also lead CyberArk’s technology ecosystem and developer community, ensuring we future-proof our customers’ success.

You have 25 years of experience, working in Germany as well as the US. What significant changes have you seen and implemented to sustain the evolution of technology over all these years?

We’ve worked to get ahead of the attacker – always looking for ways to authenticate users, keep data private, and now, most importantly, authenticate every machine from code to the cloud.

AI is on the rise and has been automating many tedious tasks, but we know it comes with its own set of risks. What specific risks have you seen on the rise from AI-generated code?

New AI technology – from AI agents to AI coding assistants – creates new opportunities for attackers to authenticate at machine speed and also creates uncertainty about the source and integrity of code. Recent research underscores a growing challenge: 83% of security leaders report that developers are using AI to generate code, but 66% find it difficult to keep up with these rapid technological advancements. Additionally, 92% of security leaders expressed concern about the risks posed by AI-generated code.

If the machines are here and security professionals are so concerned, what do we do? Humans and machines have at least one thing in common: they both require identities. We use machine identities to identify machines as they run and communicate, and we use code signing to authenticate code, including open source. All of this allows us to use the internet, install apps on mobile devices, and fly safely on today’s latest digital aircraft. Applying these same machine identity techniques – when secured – solves the challenges that everything from AI agents to AI coding assistants will present.
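To make the code-signing point concrete, here is a minimal sketch of the core sign-and-verify step using an Ed25519 key pair and Python's cryptography package. The artifact and key handling here are illustrative only; real software supply chains layer certificates, timestamping, and transparency logs on top of this primitive.

```python
# Minimal code-signing sketch (pip install cryptography).
# Illustrative only: real deployments use managed keys and full
# code-signing infrastructure, not an in-memory key pair.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

artifact = b"print('hello from a signed release')"  # the code being shipped

# Publisher side: sign the artifact with a private key kept secret.
private_key = Ed25519PrivateKey.generate()
signature = private_key.sign(artifact)

# Consumer side: verify with the publisher's public key before trusting the code.
public_key = private_key.public_key()
try:
    public_key.verify(signature, artifact)
    print("signature valid: artifact is authentic and unmodified")
except InvalidSignature:
    print("signature invalid: do not run this code")

# Any tampering, even a single byte, breaks verification:
try:
    public_key.verify(signature, artifact + b" # malicious edit")
except InvalidSignature:
    print("tampered artifact rejected")
```

That tamper-evidence is the property Bocek is pointing at: verification gives a consumer confidence in both the source and the integrity of code, whether it was written by a human or an AI coding assistant.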

On that note, could you tell our readers what an AI ‘kill switch’ is and how it functions?

In industries like manufacturing and chemical processing, kill switches are common – they provide a safe way to stop a dangerous situation from getting out of control. How can we develop a ‘kill switch’ for AI so that if a machine goes rogue, we can still control it and stop it from creating harm? When we talk of an AI ‘kill switch,’ we are not talking about one master switch, and it would not be a physical switch either. Instead, each model has its own kill switch based on its identity; a model carries unique identities from training to production, protecting it at every stage. AI is just another machine, and understanding this will eliminate the breaches we’ve seen time and time again, where identity security – of APIs, of code, of cloud, of malware – has been an afterthought.
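As an illustration of this per-model identity idea, the hypothetical sketch below gives each model version its own identity and treats revoking that identity as the kill switch. The ModelIdentity, IdentityAuthority, and serve_request names are invented for this example and do not reflect any Venafi or CyberArk product; a real system would use a PKI with short-lived certificates rather than an in-process registry.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical sketch: each model version carries its own identity,
# and revoking that identity acts as the "kill switch".

@dataclass(frozen=True)
class ModelIdentity:
    model_name: str
    stage: str            # e.g. "training", "staging", "production"
    identity_id: str      # unique per model and stage
    expires_at: datetime  # short-lived by design

class IdentityAuthority:
    """Issues and revokes model identities (stand-in for a real PKI/CA)."""

    def __init__(self):
        self._revoked: set[str] = set()

    def issue(self, model_name: str, stage: str, ttl_minutes: int = 60) -> ModelIdentity:
        now = datetime.now(timezone.utc)
        identity_id = f"{model_name}/{stage}/{now.timestamp()}"
        return ModelIdentity(model_name, stage, identity_id,
                             now + timedelta(minutes=ttl_minutes))

    def revoke(self, identity: ModelIdentity) -> None:
        # The "kill switch": once revoked, every consumer refuses the model.
        self._revoked.add(identity.identity_id)

    def is_valid(self, identity: ModelIdentity) -> bool:
        return (identity.identity_id not in self._revoked
                and datetime.now(timezone.utc) < identity.expires_at)

def serve_request(authority: IdentityAuthority, identity: ModelIdentity, prompt: str) -> str:
    # Every serving path checks identity validity before using the model.
    if not authority.is_valid(identity):
        raise PermissionError(f"Model identity {identity.identity_id} is revoked or expired")
    return f"[{identity.model_name}] response to: {prompt}"

authority = IdentityAuthority()
prod_identity = authority.issue("fraud-scorer", "production")
print(serve_request(authority, prod_identity, "score this transaction"))

authority.revoke(prod_identity)  # flip the kill switch
try:
    serve_request(authority, prod_identity, "score this transaction")
except PermissionError as err:
    print(err)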

To read the full article, visit https://ai-techpark.com/aitech-interview-with-kevin-bocek/
