The Future is Now, But It Must Be Human-Led
Insights from the peopleHUB webinar with Gavin (GoodShip) reveal a crucial truth for HR and business leaders: Artificial Intelligence isn't primarily a technology challenge; it's a human and cultural one. AI's power lies not in replacing people but in freeing them to be more human. The goal is a measured, safe, and deliberate transition to an AI-enabled workplace.
The Strategic Shift: Embracing the 60/40 Model
The core of this strategy is the "GoodShip model," which advocates a fundamental shift in how we allocate time and resources:
- Semi-Automate 60% (The Grunt Work): Use AI agents and tools to handle the repetitive, tedious tasks (the "grunt work") under strict human oversight.
- Free Up 40% (The Human Work): Direct this time towards activities that truly require human judgment, creativity, strategy, and genuine connection.
This approach ensures that AI serves as a powerful enabler, shifting employee focus from simply performing tasks to adding distinct, strategic value. The cultural outcome is to restore conversations, calls, and in-person interactions, making work more human again.
The New Core Skills: Beyond 'Soft'
In an AI-driven environment, the distinction between "hard" and "soft" skills collapses. The following are now essential competencies:
- Articulation & Instruction-Following: The ability to communicate precisely, iteratively, and clearly with an AI model (e.g., training it to follow specific style guides, such as "never use an em dash").
- Asking Probing Questions: Pushing AI beyond its default 'vanilla' or agreeable responses by instructing it to adopt professional roles, challenge assumptions, or seek multiple viewpoints.
- Sense-Checking & Validation: Using tools like Perplexity Pro for cited research, but retaining the discipline to read the sources and slow down to digest results, ensuring accuracy and brand integrity.
- Creative Problem Solving: Identifying and leveraging internal assets, particularly the strengths of neurodiversity, to formulate complex prompts that yield truly innovative outcomes.
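The articulation and probing skills above can be sketched as reusable prompt templates. This is an illustrative example only; the role, style rules, and function names are hypothetical, not a standard from the webinar:

```python
# Sketch: composing precise, role-based instructions for an LLM.
# The style rules and role below are invented examples.

STYLE_GUIDE = [
    "Never use an em dash.",
    "Write in UK English.",
    "Keep sentences under 25 words.",
]

def build_system_prompt(role: str, style_rules: list[str]) -> str:
    """Combine a professional role with explicit style rules so the
    model follows instructions precisely instead of defaulting to
    'vanilla', agreeable output."""
    rules = "\n".join(f"- {rule}" for rule in style_rules)
    return (
        f"You are acting as {role}.\n"
        "Challenge my assumptions and offer at least two viewpoints.\n"
        "Follow this style guide strictly:\n"
        f"{rules}"
    )

def probing_follow_up(topic: str) -> str:
    """A follow-up that pushes past the model's first, easy answer."""
    return (
        f"Regarding {topic}: what am I missing? "
        "Argue the strongest case against your previous answer."
    )

print(build_system_prompt("a sceptical HR director", STYLE_GUIDE))
print(probing_follow_up("our AI usage policy"))
```

Keeping the role, the challenge instruction, and the style guide in one template makes the instruction-following testable: if the output breaks a rule, the rule (not the whole prompt) gets refined.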
Implementation and Safety: The Guardrails
Strategy is nothing without practical, secure execution. Leaders must treat this as a marathon punctuated by focused sprints of product adoption, underpinned by rigorous governance.
1. The Immediate Governance: Defining Your Policy
The most critical quick-start is establishing an AI Usage Policy. This is a guardrail, not a roadblock:
- Data Privacy First: Define a clear boundary: never input sensitive or proprietary data, or client personally identifiable information (PII), into free, public-facing AI tools.
- Prefer Enterprise Ecosystems: Standardise on paid/enterprise platforms (e.g., Copilot within SharePoint, Gemini within Google Workspace) for their stronger data protection and security.
- Audit Privacy Settings: Immediately audit external work platforms (e.g., ChatGPT) to ensure you have explicitly toggled off any setting that uses your work data to train external models.
2. The Agent Onboarding Process
Treat AI agents like new staff members to manage risk and maximise output:
- Staged Onboarding: Pilot internal agents and custom micro-tools on encrypted infrastructure with limited, safe data sets first. Only grant wider access incrementally as performance is measured.
- Training & Customisation: Use tools like Google NotebookLM to build closed, internal knowledge bases trained only on your company’s specific language, context, and proprietary information.
3. The Minimal Tool Stack
Avoid fragmentation and license waste by limiting tool sprawl. Start with a small, high-impact stack (aiming for 5 key tools) where each one serves a defined, strategic purpose (e.g., Granola for meeting notes, Canva AI for rapid brand-aligned visuals, a primary LLM like Claude for depth).
Sustaining the Future: Personalised Learning
To make the transition stick, companies must invest in continuous, role-specific learning. This approach moves beyond generic training to practical application:
- The Adaptive Academy Model: Programs such as the Liverpool Chambers of Commerce (LCC) AI Academy, launching in January 2026, exemplify the future of upskilling: a 15-month adaptive program with a "choose your own path" style that personalises the learning journey by role and industry.
- Hands-on Focus: The curriculum moves through Fundamentals, Proficiency, and Mastery, using role-specific assignments (e.g., how a marketer uses AI for content vs. how a data analyst uses it for research) to embed skills directly into daily workflows.
- External Collaboration: Businesses should proactively engage local universities for internships in data science and creative fields, gaining affordable, fresh support for making internal data ready for complex AI applications.
The Final Call to Action
The AI revolution is a marathon, but immediate, well-governed action is essential. To secure a successful future of work, leaders must:
- Define and Communicate an AI usage policy today.
- Audit current platform privacy settings.
- Adopt a minimal, high-impact tool stack.
- Encourage "playtime" by setting a simple weekly research prompt for all staff to develop their essential skills.
- Measure progress towards the 60/40 automation split.
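Measuring progress towards the 60/40 split can start very simply: tag each recurring task as semi-automated or human work and track the share of time in each bucket. A minimal sketch, with invented task names and hours:

```python
# Sketch: one simple way to track progress towards the 60/40 split.
# Tasks and hours below are illustrative examples, not real data.

from dataclasses import dataclass

@dataclass
class Task:
    name: str
    hours_per_week: float
    semi_automated: bool  # True if AI handles it under human oversight

def automation_share(tasks: list[Task]) -> float:
    """Percentage of tracked time that is semi-automated grunt work."""
    total = sum(t.hours_per_week for t in tasks)
    automated = sum(t.hours_per_week for t in tasks if t.semi_automated)
    return round(100 * automated / total, 1) if total else 0.0

weekly_tasks = [
    Task("Meeting notes", 4.0, True),
    Task("Report drafting", 6.0, True),
    Task("Client calls", 5.0, False),
    Task("Strategy sessions", 5.0, False),
]

print(f"Semi-automated: {automation_share(weekly_tasks)}% (target: 60%)")
# In this example, 10 of 20 tracked hours are semi-automated: 50%.
```

Reviewing this share quarterly turns the 60/40 model from a slogan into a measurable target, and shows whether freed-up time is actually moving into the human-work bucket.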
By managing the risk, teaching the skills, and actively restoring the human element to work, businesses can ensure they lead the AI revolution, rather than simply reacting to it.