As developers increasingly lean on AI-generated code to build out their software—as they have with open source in the past—they risk introducing critical security failures along the way.
I’m a dev and work with some DevOps engineers, and you nailed my experience with them exactly! Here are some projects I’ve seen them build:
Open WebUI (self-hosted AI) with some custom logic to verify an API key. It’s only available on the VPN/LAN, but IT has rules, so it basically ended up being a bit of Lua in nginx.
some JS and Python to add some widgets to the app (stuff like reporting issues)
random lambdas and other scripts to check server health
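For the curious, that kind of API-key check in nginx might look something like the sketch below. This assumes OpenResty (nginx with the Lua module) and its `access_by_lua_block` directive; the header name, key value, and upstream address are made up for illustration, not taken from the actual setup:

```nginx
# Hypothetical sketch: reject requests without a known API key
# before they reach the self-hosted app. Assumes OpenResty
# (nginx built with the Lua module); all names are illustrative.
location / {
    access_by_lua_block {
        -- In a real setup the keys might live in a file or Redis;
        -- a hard-coded table keeps the sketch self-contained.
        local valid_keys = { ["example-key-123"] = true }
        local key = ngx.req.get_headers()["X-Api-Key"]
        if not key or not valid_keys[key] then
            ngx.status = ngx.HTTP_UNAUTHORIZED
            ngx.say("invalid or missing API key")
            return ngx.exit(ngx.HTTP_UNAUTHORIZED)
        end
    }
    proxy_pass http://127.0.0.1:8080;
}
```

The appeal of this approach is that the app behind the proxy never sees unauthenticated traffic at all, which is roughly what “a bit of Lua in nginx” buys you.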
I remember doing all that stuff when I worked at a startup, and it’s nice to just see things get automated.
Like half my last role was pretty much automation, which is sorta good, and I guess maybe why DevOps is a better way to look at it. Back when it was just ops, it seemed like they would never get time to do things like automation.