With the sudden and rapid rise in popularity of ChatGPT, how do we think AI will change the FinOps Framework? Will we still have the same domains and capabilities in 12 months’ time, or will they look quite different?
Chores, the “find and fix” processes, are becoming automatic.
AI can apply rule-based patterns to detect ways to reduce cost, and it can propose new patterns. It can even run experiments, but experiments take time, cost money, and can affect the running system.
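To make the rule-based side of this concrete, here is a minimal sketch of the "find and fix" pattern. The resource record format, field names, and thresholds are all hypothetical assumptions for illustration, not any real cloud provider's API or a specific CCMO tool's logic:

```python
# Hypothetical sketch: rule-based detection of cost-reduction candidates.
# Record shapes and thresholds are illustrative assumptions.

def find_savings(resources):
    """Apply simple rules to flag likely cost-reduction candidates."""
    findings = []
    for r in resources:
        # Rule 1: a VM with very low average CPU is a stop/downsize candidate.
        if r["type"] == "vm" and r["avg_cpu_pct"] < 5:
            findings.append((r["id"], "idle VM: stop or downsize"))
        # Rule 2: an unattached disk costs money but serves nothing.
        if r["type"] == "disk" and not r["attached"]:
            findings.append((r["id"], "unattached disk: snapshot and delete"))
    return findings

resources = [
    {"id": "vm-1", "type": "vm", "avg_cpu_pct": 2.5},
    {"id": "vm-2", "type": "vm", "avg_cpu_pct": 63.0},
    {"id": "disk-9", "type": "disk", "attached": False},
]
print(find_savings(resources))
# → [('vm-1', 'idle VM: stop or downsize'), ('disk-9', 'unattached disk: snapshot and delete')]
```

An AI-driven tool goes further by learning new rules from the data rather than having them hand-written, but the underlying "scan, match, recommend" loop is the same.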
AI can also help with integration and communication; this will be a hybrid human/AI effort.
The business side stays human. Creativity stays human.
First, I would start with what we know Gen AI is good at: reading huge quantities of data and allowing questions to be asked about it.
In the cloud, we have data, a lot of data: starting from the obvious cost data, but also historical configurations, IaC scripts, past decisions, software documentation, API calls, ERP data, and so on. The data does not need to be structured and can be loosely connected by date.
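As a rough illustration of what "loosely connected by date" means in practice, here is a sketch that links heterogeneous records (cost entries, deployment events) on nothing more than their date, so a question like "what happened on the day costs spiked?" becomes a simple lookup. The record fields are assumptions made up for this example:

```python
# Hypothetical sketch: connecting unstructured FinOps records by date.
# Field names ("service", "usd", "event") are illustrative assumptions.
from collections import defaultdict
from datetime import date

cost_records = [
    {"date": date(2024, 3, 1), "service": "compute", "usd": 1200},
    {"date": date(2024, 3, 2), "service": "compute", "usd": 310},
]
deploy_log = [
    {"date": date(2024, 3, 1), "event": "scaled web tier to 10 nodes"},
]

# Index everything by date, regardless of record shape.
timeline = defaultdict(list)
for rec in cost_records:
    timeline[rec["date"]].append(("cost", rec))
for rec in deploy_log:
    timeline[rec["date"]].append(("deploy", rec))

# "Why did costs spike on March 1st?" -> look at everything from that day.
for kind, rec in timeline[date(2024, 3, 1)]:
    print(kind, rec)
```

A Gen AI system ingesting this corpus would do far richer retrieval than a date index, but the point stands: the raw material is already there, and the value comes from the questions asked of it.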
Having an AI ingest all of that and be ready to answer questions means that what Ermanno calls creativity is, in my opinion, asking the right questions, and this will need a person.
It feels like AI will continue a trend that I think is already apparent: FinOps looks a lot like the move-to-cloud lifecycle. Starting a good FinOps practice is a huge project (like a migration), and it takes time, people, and money. But once done, it should become business as usual, needing a limited number of people, while AI can be interrogated for quality data.
In very simplistic terms:
- The days of FinOps Practitioners having to hunt down efficiencies will be over: AI-driven CCMO tools (both cloud-native and 3rd-party) will replace that aspect of the role.
- FinOps Practitioners will need to evolve, placing greater emphasis on the cultural, communicative, and educational aspects of the role. That said, as @frank says, the true economic decisions arising from AI-driven recommendations will still need humans.
I have two specific considerations I am thinking about in this space:
a. Is the cost of AI and the value it brings going to be greater than the FTE costs?
b. Weighing the true economic implications of a value decision (namely its short-, medium-, and long-term effects on everyone, not just those immediately around us - think of the “Broken Window” fallacy) must be done by humans, although I am sure AI modelling will help.