Deus Ex Humana: The machines need us, too.
Generative AI is your newest digital coworker. Is your organizational culture ready?
It’s June 2024, and the Generative AI hype cycle remains as alive and well as ever. Regardless of your personal perspective on the technology as it exists today, it seems increasingly likely that it will usher in a meaningful transformation in our personal and professional lives. Research has predicted that two-thirds of global occupations will be partially automated by AI, and 44% of workers’ core skills are expected to change in the next five years. And in Deloitte’s most recent State of GenAI in the Enterprise report, 75% of respondents said they expect the technology to meaningfully affect their talent strategies within just two years.
I’ve already seen the dramatic change in how my friends and colleagues approach work. With every new model release, it seems, new champions are made of skeptics.
To prepare for this transformation, leaders have become intensely focused on near-term AI adoption and upskilling: identifying key learning needs across the organization and investing in internal and external programs to train employees on new skills, like prompt engineering, integrating Generative AI into technical workflows, and using GenAI-powered tools. Educators have been asking the same questions, faced with the challenge of teaching a generation that wants to leverage the revolutionary tool at its fingertips.
While this focus on upskilling is great (truly!), it has mostly been constrained to technical training. One might assume that, when asked what skills their employees are likely to need in an AI-powered future, executives would prioritize technical skills like fine-tuning models. But leaders surveyed in the State of GenAI in the Enterprise report prioritized strategic, human capabilities like critical thinking and problem solving (62%), creativity (59%), and resilience (58%). This is counter-intuitive only at first glance.
Generative AI is likely to become an active part of most daily workflows without fully automating them. And unlike many other tools we use in our daily lives, we increasingly interact with AI models through natural language—what is prompt engineering if not delegating a task to a team member?
We should be thinking about Generative AI not as a tool, but as a digital coworker.
Generative AI transformation is thus less of a traditional tech adoption exercise and more of a cultural transformation. It requires a shift in behavior and the systems that incentivize those behaviors. Organizations may have to figure out ways to encourage higher levels of experimentation (as I wrote about in my last piece), courage, knowledge-sharing, and risk-taking (within reason). And individuals should develop the enduring human capabilities (i.e., critical thinking, creativity, resilience) that can make them stand out, even once we are all powered by the same digital coworkers.
A sampling of these newly prioritized human capabilities could include:
AI task management: We may all, in some sense, soon become managers. As AI accomplishes more and more tasks, we will need to effectively delegate activities to our digital coworkers and provide feedback in a way that the models understand.
Quality control: One of the fundamental challenges with Generative AI is its tendency to hallucinate. Thanks to our history with computers, people tend to trust the outputs of technology implicitly, but the era of simple calculation and retrieval is over. As AI takes on more complex knowledge work, we will need to review the quality and accuracy of everything it produces, which could be very time-intensive.
Critical thinking: We may need to determine when it makes sense to use AI to accomplish tasks, which tools are best suited to certain outcomes, and which outputs are worth using. Generative AI is relatively good at generating content or analyzing data, but it is still an AI model. It is not really ‘thinking,’ but rather predicting based on inputs. People will likely always have more contextualized knowledge than our models, and it’s important to apply that context rigorously.
Ethics: AI models fundamentally cannot discern ‘right’ from ‘wrong’. We will need to make sure that we are using and protecting data and IP appropriately, but also that our work is ethical and truthful in nature.
In short, as a business leader, learning leader, manager, or individual using Generative AI, don’t lose sight of the bigger picture of what it means to truly transform work. Technical training is good—and necessary—but the greatest advantage any human can have in this world is, well, being human.
Daniela (Dany) Rifkin | Senior Consultant, Deloitte Consulting LLP