A guide for leveraging Google’s ecosystem to transform raw web data into actionable, organized knowledge.
By Peter Sigurdson (@Peter_Sigurdson) March 14, 2026
Hey everyone, Peter here from Toronto, diving into the world of AI tools that are revolutionizing how we research and organize information.
As someone passionate about leveraging AI to boost productivity—whether you’re a solo creator like me or part of a massive enterprise—I’m excited to share my insights on Google Gemini Deep Research.
This feature, powered by advanced models like Gemini 1.5 Pro, acts as your personal research assistant, automating complex queries and synthesizing data from the web and your own files.
Today, let's break down what it is, how to use it, some best practices, and why integrating it with NotebookLM turns it into your ultimate source of truth.
Next, I’ll evangelize its game-changing use cases for data analysts, students, researchers, and business project planners.
Let’s get started!
What is Google Gemini Deep Research?
Google Gemini Deep Research is an AI-driven tool embedded within the Gemini platform (and now NotebookLM) that handles in-depth, multi-step research tasks.
It browses hundreds of websites, ranks sources for relevance and credibility, and compiles everything into a structured report complete with summaries, tables, charts, and citations.
Unlike a simple search, it plans the research, executes it autonomously, and even incorporates real-time data from Google Workspace (like Gmail, Drive, or Chat) or uploaded files.
Think of it as an upgrade from traditional search engines:
It doesn’t just list links; it digests and organizes information, saving you hours of manual work.
In the 1990s, we spoke of the Internet as the global brain. In fact, the Internet was the global filing cabinet: some hierarchical organization, but no cognition layer. Tools like Gemini and NotebookLM supply that layer; they are the global brain, and the promise is finally here.
Available in Gemini Advanced or higher tiers, it uses models like Gemini 1.5 Pro for high-quality outputs, and its integration with NotebookLM makes it even more powerful for grounded, verifiable insights.
How to Use Google Gemini Deep Research
Getting started is straightforward, whether you’re on the web, mobile, or integrated into NotebookLM.
Sign in with your Google account—Gemini Advanced (paid) unlocks the full potential.
Craft Your Prompt: Enter a detailed query, like “Analyze recent breakthroughs in quantum computing and their business implications.” Be specific to guide the AI better.
Activate Deep Research: In Gemini, select “Deep Research” from the tools menu. In NotebookLM, toggle the “Search the web for sources” option or directly use the Deep Research button to import web-sourced reports.
Add Sources: Include files from Drive, emails, or existing NotebookLM notes. The AI will blend web data with your personal context.
Review and Run: Gemini generates a research plan—edit it if needed. Hit “Start Research,” and wait a few minutes as it processes.
Refine the Output: View the report, ask follow-ups like “Expand on this section,” and export to Docs or Sheets.
In NotebookLM, you can directly import the Deep Research report and its sources into your notebook for further analysis, like generating audio overviews or mind maps.
Best Practices for Maximizing Gemini Deep Research
To get the most out of this tool, follow these tips drawn from the workflows described in the Google Gemini Best Practices Guide:
Be Precise with Prompts: Vague queries yield broad results. Instead, specify scope, like “Focus on peer-reviewed sources from the last 5 years” or “Include comparisons in table format.”
Verify Sources: Always cross-check citations—Deep Research prioritizes reputable sites, but human oversight is key for accuracy.
Iterate Incrementally: Start with a broad query, then refine with follow-ups. This builds layered insights without overwhelming the AI.
Combine with Workspace Data: For personalized research, enable access to your emails or docs to contextualize web findings.
Manage Limits: Free tiers have basic access; upgrade for more requests and advanced features like visuals. Monitor usage to avoid hitting daily caps.
Integrate Early with NotebookLM: Feed reports directly into NotebookLM to ground your knowledge base and reduce hallucinations.
By adhering to these, you’ll turn raw data into actionable intelligence efficiently.
Integrating with Google NotebookLM: Your Source of Truth
NotebookLM is Google’s AI-powered knowledge management app that transforms your documents into interactive, queryable knowledge bases using Retrieval-Augmented Generation (RAG).
At its core, NotebookLM tackles a problem that organizations have struggled with for decades. The 1998 book If Only We Knew What We Know put a name to it: the staggering financial cost of “dark data” — the tacit knowledge, informal connections, and institutional insight trapped inside people’s heads and scattered across disconnected systems.
Project teams today are solving this by feeding NotebookLM a rich diet of raw, unstructured content: meeting transcripts from tools like Otter.ai and Granola.ai, whiteboard photos, handwritten notes, emails, and project plans. The messy, disorganized reality of how knowledge actually lives inside a team: NotebookLM handles all of it. You present NotebookLM with the templates for your reports and set up a connector for it to deposit the final report directly into your system.
This is something IBM tried to achieve in the 1990s, but it collapsed under its own weight. The bottleneck wasn’t vision; it was the sheer human effort required to correlate and normalize all that scattered data into a coherent format.
Nobody had the time.
Now, Google Gemini AI does the heavy lifting automatically — and goes further, surfacing higher-level insights that practitioners deep in the day-to-day weeds often can’t see for themselves.
Perhaps nowhere is this more transformative than in the long-neglected ritual of the Lessons Learned report.
For generations, project managers dutifully included it as the final deliverable in every engagement — and almost no one ever wrote it.
By the time the project wrapped, the team had already scattered to the next assignment, deadlines had piled up, and the hard-won insights from months of real work quietly evaporated.
The institutional knowledge that could have made the next project faster, cheaper, and smarter simply disappeared.
Now that cycle is broken. Feed NotebookLM your template, point it at everything the team generated throughout the engagement, and Gemini AI will produce a structured, insight-rich end-of-project report — surfacing patterns, recurring friction points, and process improvements that no one had the bandwidth to articulate at the time.
The lessons your team actually learned finally get captured, automatically.
Best of all, NotebookLM grounds every response strictly in your uploaded sources, dramatically reducing hallucinations and making it the definitive source of truth for any organization, large or small.
The magic happens with Deep Research integration: Run a query in NotebookLM, enable web search, and it uses Gemini to fetch and import reports plus original sources automatically. This creates a verifiable repository where you can:
Generate mind maps, timelines, or audio overviews from the combined data.
Query your notebook like a chatbot, always citing sources.
Share notebooks across teams for collaborative truth-seeking.
Whether you’re a one-person shop or a large enterprise, NotebookLM centralizes knowledge, ensuring everyone works from the same factual foundation. It’s a game-changer for compliance, education, and decision-making.
Use Cases: Why You Should Adopt This Now
Gemini Deep Research, supercharged by NotebookLM, isn’t just a tool—it’s a productivity multiplier. Here’s how it shines for specific roles:
For Data Analysts
Dive into market trends or datasets without endless browsing. Use Deep Research to compile stats on “AI adoption in finance,” then import to NotebookLM for custom visualizations and queries like “Correlate these trends with economic indicators.” It speeds up reporting and uncovers hidden patterns.
For Students
Tackle assignments efficiently: Research “Climate change impacts on agriculture,” get a cited report, and use NotebookLM to create study guides or podcasts. It’s ideal for learning complex topics, with built-in source verification to avoid plagiarism pitfalls.
For Researchers
Academic or scientific pros can query “Latest advancements in biotech,” pulling from reputable sources. NotebookLM then organizes it into mind maps or timelines, facilitating hypothesis building and literature reviews with minimal hallucinations.
For Business Project Planners
Plan strategies with “Competitive analysis of EV markets.” Deep Research provides overviews, and NotebookLM turns them into shared knowledge bases for team alignment, risk assessment, and forecasting—perfect for agile planning in enterprises.
In all cases, this combo democratizes deep research, making expert-level insights accessible and trustworthy.
Final Thoughts: Embrace the Future of Research
If you’re not using Gemini Deep Research with NotebookLM yet, you’re missing out on a transformative workflow. It saves time, enhances accuracy, and scales from individual projects to enterprise knowledge management. As AI evolves, tools like these will be essential—start today and watch your productivity soar. Have questions? Hit me up on X @Peter_Sigurdson!
Preview: Diving into AI-Driven Development with Rust and Vertex AI
Welcome to the first installment in our new blog series on modern AI application building! If you’re excited about blending cutting-edge AI with robust programming practices, you’re in the right place.
This post kicks off with a deep dive into creating a Retrieval-Augmented Generation (RAG) AI application, focusing on the foundational backend setup.
We’ll explore why Rust is the star for performance and safety, how we’re evolving the classic MVC architecture by swapping static controllers for dynamic AI calls, and why Vertex AI trumps traditional databases for intelligent, knowledge-grounded systems.
In this article, you’ll get hands-on with:
Key concepts like Rust’s advantages (complete with a comparison table), the MVC shift, and Vertex AI’s role in RAG.
Step-by-step guidance to build a Rust backend that integrates with Vertex AI for real-time query handling and response generation.
An architectural diagram to visualize the data flow.
Don’t worry if you’re wondering about the front-end—we’re keeping this post laser-focused on the core AI and backend magic to make it digestible and actionable.
Project Workflow: Local Rust Backend with Vertex AI Integration
In this initial version of our RAG AI project, we’re adopting a straightforward, developer-centric workflow that emphasizes local experimentation and customization.
This setup keeps things simple for hands-on learning, allowing you to focus on the AI magic without worrying about complex deployments right away. Here’s how it breaks down:
Hard-Coding RAG Query Terms in Rust:
As the developer, you’ll embed specific RAG-related terms or prompt structures directly into the Rust code. For example, within the backend handler (e.g., the rag_query function), you can define fixed query patterns, corpus references, or augmentation logic tailored to your use case. This hard-coding approach ensures predictable behavior during testing and serves as a foundation for more dynamic features later.
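To make this concrete, here is a minimal sketch of what hard-coding those query terms might look like. The corpus path and prompt template are illustrative assumptions for this series, not real resource names:

```rust
// Hypothetical hard-coded RAG parameters (illustrative values, not real resources).
const RAG_CORPUS: &str = "projects/my-project/locations/us-central1/ragCorpora/123";
const PROMPT_TEMPLATE: &str = "Answer strictly from the corpus. Question: {question}";

// Fill the fixed template with an incoming question, as a rag_query handler might.
fn build_prompt(question: &str) -> String {
    PROMPT_TEMPLATE.replace("{question}", question)
}

fn main() {
    println!("corpus: {RAG_CORPUS}");
    println!("{}", build_prompt("What were the Q3 action items?"));
}
```

Because the pattern is fixed at compile time, test runs are reproducible; later parts of the series can swap these constants out for user-supplied input.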
Writing Rust Code with Vertex AI Integration:
We’ll build the core application in Rust, leveraging its performance and safety features. The code will use the Google Cloud AI Platform SDK to connect to Vertex AI, which acts as our “database” for RAG operations. Vertex AI handles the heavy lifting: storing your custom data corpus (e.g., documents uploaded to Google Cloud Storage), performing vector-based retrieval for relevance, and generating augmented responses via models like Gemini.
Local Hosting on Your Development Machine:
The Rust application runs entirely on your local operating system (e.g., macOS, Linux, or Windows). Simply use cargo run to spin up the server, which binds to a local address like 127.0.0.1:8080. This means no cloud hosting is required initially—test queries via tools like curl or Postman right from your machine. It’s perfect for rapid iteration, debugging, and verifying AI outputs before scaling.
This workflow embodies the “start small, think big” philosophy of modern AI development. In future parts of the series, we’ll expand to dynamic querying, cloud deployment (e.g., on Google Cloud Run), and a web front-end for user interaction. For now, it keeps the barrier low while delivering powerful RAG capabilities! In next week’s lesson, we will use Node and Express to build a front-end web application.
In Part 2 (coming soon), we’ll roll out a sleek web front-end to interact with your RAG app, turning it into a full-fledged user-facing tool.
Future parts explore scaling, advanced customizations, and real-world deployments.
Stay tuned, and let’s build the future of AI apps together!
By the end of this guide, you’ll have built a fully functional Retrieval-Augmented Generation (RAG) AI application.
Imagine creating a smart Q&A system that doesn’t just regurgitate pre-programmed responses but pulls in real-time, relevant knowledge from vast data sources to generate accurate, context-aware answers.
This app will demonstrate the cutting-edge paradigm that’s sweeping the AI world: combining large language models (LLMs) with retrieval mechanisms to ground responses in your own data.
You’ll use Rust for a robust backend, integrate with Google’s Vertex AI for RAG capabilities, and see how this setup transforms traditional development into something dynamic and scalable.
Whether you’re building chatbots, knowledge bases, or intelligent assistants, this is the hands-on blueprint for workaday AI developers in 2026.
Learning Outcomes: Your Journey to AI Mastery
Picture this: You’ve just deployed an app that answers complex queries by fetching precise information from a custom corpus—say, company docs or research papers—and augmenting it with generative AI for natural, insightful responses.
No more static databases or brittle logic; instead, you’re harnessing the power of real-time AI calls to make your apps smarter and more adaptive.
By the end of this guide, you’ll achieve:
Hands-On Vertex AI Expertise: You’ll set up and interact with Vertex AI, Google’s enterprise-grade platform for generative AI, including uploading data, creating corpora, and running RAG tasks.
Mastery of RAG: Understand and implement Retrieval-Augmented Generation, a technique that combines vector search for relevant data retrieval with LLMs for response generation, reducing hallucinations and boosting accuracy.
Practical AI Integration: Move beyond chat-based AI experiments to embedding API calls into real applications, replacing rigid code with flexible, knowledge-accessing intelligence.
Rust Proficiency in AI Contexts: Gain experience using Rust’s safety and performance for backend services that handle AI interactions securely and efficiently.
This isn’t theory—it’s actionable development that positions you at the forefront of AI app building.
Core Concepts: Laying the Foundation
Before we dive into the build, let’s unpack the key ideas.
We’ll start with why Rust is our language of choice, then explore the evolution of MVC architecture, the shift to AI-driven controllers, and why Vertex AI outshines traditional databases for modern AI apps.
Why We’re Using Rust: Transitioning from Older Languages
Rust is revolutionizing systems and backend development, especially in AI applications where performance, safety, and concurrency are paramount. Born from Mozilla’s need for a safer alternative to C++, Rust has evolved into a powerhouse for building reliable, high-speed software without the pitfalls of older languages like C++ or Java.
The transition to Rust stems from its unique ability to deliver C-like performance while enforcing memory safety at compile time—no garbage collector (GC) needed, unlike Java.
This means predictable runtime behavior, crucial for AI workloads involving real-time data processing or edge deployments.
In AI coding, Rust’s explicitness aligns well with AI-assisted development, reducing bugs that plague legacy codebases. It’s cross-platform, low-bloat, and excels in scenarios like machine learning workflows or secure API integrations.
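As a small, dependency-free illustration of the no-GC point: ownership and borrowing are checked at compile time, and memory is released deterministically when the owner goes out of scope, with no collector pauses at runtime:

```rust
// Borrowing (&) grants read access without copying the data or transferring
// ownership, so the caller keeps using the value afterwards.
fn total_tokens(batches: &[Vec<u32>]) -> usize {
    batches.iter().map(|b| b.len()).sum()
}

fn main() {
    let batches = vec![vec![1, 2, 3], vec![4, 5]];
    let n = total_tokens(&batches); // `batches` is only borrowed here...
    println!("{n} tokens in {} batches", batches.len()); // ...so it is still usable.
} // `batches` is dropped here, deterministically: no garbage collector involved.
```

The compiler enforces these rules statically, which is why a whole class of leaks and data races simply cannot reach production.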
To highlight Rust’s advantages, here’s a comparison table with older languages (C++ and Java), focused on dimensions relevant to AI applications:
| Dimension | Rust | C++ | Java |
| --- | --- | --- | --- |
| Performance | Native compilation with zero-cost abstractions; matches C++ speed but with optimizations for AI tasks like vector processing. | High performance via direct hardware access; excellent for compute-intensive AI but requires manual optimizations. | Good runtime via JIT compilation; slower startup and GC pauses can hinder real-time AI inference. |
| Memory Management | Ownership model ensures safety without GC; prevents leaks and races at compile time, ideal for safe AI data handling. | Manual management (new/delete); prone to leaks, overflows, and undefined behavior in complex AI code. | Automatic GC; safe but introduces pauses and overhead, less suitable for low-latency AI edges. |
| Concurrency | Built-in safety via Send/Sync traits; compile-time checks prevent data races, perfect for parallel AI training or inference. | Powerful but error-prone; requires careful locking to avoid races in multi-threaded AI apps. | Thread-safe with monitors; GC can complicate high-concurrency AI scenarios. |
| Safety | Memory and thread safety guaranteed; eliminates null pointers and buffer overflows, reducing AI runtime errors. | Low-level control but high risk of vulnerabilities; common in AI for exploits. | Strong type safety and GC; safer than C++ but less fine-grained control. |
| Ecosystem for AI | Growing SDKs (e.g., official Google Cloud Rust SDK for Vertex AI); crates for ML like tch-rs; focuses on secure, performant backends. | Mature libraries (TensorFlow, PyTorch C++ APIs); battle-tested for AI research. | Vast via JVM (e.g., Deeplearning4j); enterprise-friendly but heavier footprint. |
| Learning Curve | Steep due to ownership; rewarding for AI devs seeking reliability. | Complex syntax and legacy; high for safe AI code. | Moderate; familiar OO but GC hides low-level AI optimizations. |
Rust’s edge in AI? It lets you build crash-free, efficient systems without sacrificing speed—think deploying RAG backends that handle massive queries securely.
The Evolution of MVC
Traditionally, applications adhered to the Model-View-Controller (MVC) architecture, a pattern that separates concerns for maintainable code.
The Model manages data and business logic, the View handles user interfaces, and the Controller acts as the intermediary—processing inputs, updating models, and refreshing views.
In classic MVC, controllers were essentially sets of objects: “little if-this-then-that engines” encoded with hardcoded rules. For example, in a web app, a controller might check user permissions via if-else chains before fetching data.
This worked for predictable scenarios but scaled poorly with complexity, leading to bloated, brittle code.
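A toy example of such a hard-coded controller (role names and return values are illustrative) shows how the branches accumulate:

```rust
// A classic hand-coded controller: an if-this-then-that permission check
// routing to a data-access action. Every new role or action means another branch.
struct User {
    role: String,
}

fn controller(user: &User, action: &str) -> &'static str {
    if user.role == "admin" {
        "fetch_all_records"
    } else if user.role == "analyst" && action == "read" {
        "fetch_read_only_view"
    } else {
        "deny"
    }
}

fn main() {
    let u = User { role: "analyst".into() };
    println!("{}", controller(&u, "read")); // fetch_read_only_view
}
```

Each new requirement grows the if-else chain, which is exactly the brittleness the next section replaces with a model call.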
Replacing the Controller with AI
Enter the AI era: We’re ditching static controllers for dynamic API calls to AI engines like Vertex AI.
This shift turns your app from a rule-bound machine into a knowledge-accessing powerhouse. Instead of predefined logic, you query an AI model in real time, leveraging its vast training data and reasoning capabilities.
For instance, in our RAG app, the “controller” becomes a Vertex AI call that retrieves relevant docs and generates responses.
This enables real-time adaptation—handling nuanced queries without code rewrites.
The result?
Apps that tap into the world’s knowledge, reducing development time and enhancing intelligence.
Vertex AI vs. Traditional Databases
Why swap traditional SQL databases for Vertex AI?
SQL excels at structured, relational data with exact-match queries (e.g., SELECT * FROM users WHERE id = 123).
But for AI apps dealing with unstructured text, like documents or user queries, SQL falls short: it can't handle semantic search or context-aware retrieval.
Vertex AI, Google’s managed AI platform, flips this with its RAG Engine: it uses vector embeddings for similarity search, pulling relevant information before generation. This “retrieval-augmented” approach grounds LLMs in your data, minimizing errors. Plus, it’s scalable, managed, and integrates seamlessly with tools like Google Cloud Storage, with no need for the manual indexing older databases require.
Hands-On: Building Your RAG AI Application
Now, let’s build! We’ll create a Rust backend that interacts with Vertex AI for RAG.
Prerequisites:
A Google Cloud account, the Vertex AI API enabled, and Rust installed.
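Before we start building, it helps to see the retrieval half of RAG in miniature. This dependency-free Rust sketch ranks documents by cosine similarity of embedding vectors; the tiny hand-made vectors are stand-ins for real embeddings produced by an embedding model:

```rust
// Cosine similarity: how closely two embedding vectors point in the same direction.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

// Return the name of the document whose embedding best matches the query embedding.
fn best_match<'a>(query: &[f32], docs: &'a [(&'a str, Vec<f32>)]) -> &'a str {
    docs.iter()
        .max_by(|(_, a), (_, b)| {
            cosine(query, a).partial_cmp(&cosine(query, b)).unwrap()
        })
        .map(|(name, _)| *name)
        .unwrap()
}

fn main() {
    // Toy 3-dimensional "embeddings"; real models produce hundreds of dimensions.
    let docs = vec![
        ("refund-policy.md", vec![0.9, 0.1, 0.0]),
        ("onboarding.md", vec![0.1, 0.8, 0.3]),
    ];
    let query = vec![0.85, 0.15, 0.05]; // e.g., "how do refunds work?"
    println!("{}", best_match(&query, &docs));
}
```

Vertex AI's RAG Engine performs this kind of ranking at scale over your corpus, then feeds the top matches to the model as grounding context.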
Step 1: Set Up Your Environment
Create a Google Cloud project and enable Vertex AI.
Install the Google Cloud Rust SDK: Add to Cargo.toml:
```toml
[dependencies]
google-cloud-aiplatform = "0.1" # Check latest version
tokio = { version = "1", features = ["full"] }
serde = { version = "1", features = ["derive"] }
```
Authenticate: Use gcloud auth application-default login.
Step 2: Upload Data and Create a RAG Corpus
Use Vertex AI console or SDK to upload docs to Cloud Storage and create a corpus.
In Rust, leverage the SDK for automation (adapt from Python tutorials via REST if needed).
Step 3: Implement the Rust Backend
Here’s a simple async server using Actix-Web that queries Vertex AI for RAG:
```rust
use actix_web::{web, App, HttpResponse, HttpServer, Responder};
use google_cloud_aiplatform::v1::predict_request::Content;
use google_cloud_aiplatform::v1::{GenerateContentRequest, Part};
use google_cloud_aiplatform::Client;
use serde::{Deserialize, Serialize};
use std::env;

#[derive(Deserialize)]
struct Query {
    question: String,
}

#[derive(Serialize)]
struct Response {
    answer: String,
}

async fn rag_query(data: web::Json<Query>) -> impl Responder {
    let project_id = env::var("PROJECT_ID").expect("set the PROJECT_ID environment variable");
    let location = "us-central1"; // Adjust as needed
    let model = "gemini-1.5-pro"; // Or your preferred model; wire it into the request per the SDK docs
    let client = Client::new(project_id, location)
        .await
        .expect("failed to create Vertex AI client");

    let mut request = GenerateContentRequest::default();
    request.contents.push(Content {
        role: "user".to_string(),
        parts: vec![Part::Text(format!("Retrieve and generate: {}", data.question))],
    });
    // Add RAG config: specify your corpus for retrieval.
    // (Adapt based on the SDK docs; use the REST API if the SDK lacks direct RAG support.)
    let response = client
        .generate_content(&request)
        .await
        .expect("generate_content request failed");
    let answer = response.candidates[0].content.parts[0]
        .text
        .clone()
        .unwrap_or_default();
    HttpResponse::Ok().json(Response { answer })
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().route("/query", web::post().to(rag_query)))
        .bind(("127.0.0.1", 8080))?
        .run()
        .await
}
```
This replaces a traditional controller with an AI call. For full RAG, configure the request with your corpus ID. Run with cargo run and test by sending a JSON POST body like {"question": "..."} to /query.
Step 4: Deploy and Scale
Deploy on Cloud Run for serverless scaling.
Monitor with Vertex AI tools.
Conclusion: Embrace the New Paradigm
You’ve now built a RAG app that exemplifies modern AI development—safe, performant, and intelligent.
This shift from static MVC to AI-augmented architectures is how workaday developers are innovating today.
Dive in, experiment, and share your creations!
For more, explore Vertex AI docs or Rust’s Google Cloud SDK.
Today, we explore how artificial intelligence is reshaping the way we lead, manage, and analyze in today’s fast-paced business world.
In the grand tapestry of human progress, certain inventions have not merely advanced technology—they’ve redefined civilization itself. The movable type printing press democratized knowledge, transforming isolated manuscripts into mass-produced books that standardized ideas and sparked the Renaissance. The Industrial Revolution amplified this by mechanizing production, creating network effects that distributed not just goods, but scalable processes for building empires of efficiency. Today, we stand at the threshold of another seismic shift: the rise of AI ecosystems like Anthropic’s Claude, which amalgamate and extend these legacies into a new era of human thought.
Claude isn’t just an assistant; it represents a foundational evolution where AI becomes the ultimate synthesizer of knowledge—not as static catalogs of facts, but as dynamic processes that guide us on how to act, innovate, and construct superior operational realities. By embedding reasoning layers between human intent and automated execution, tools like Claude standardize the creation and distribution of procedural intelligence on a global scale. Imagine the printing press’s mass dissemination fused with the Industrial Revolution’s systematic workflows: AI now enables instant, governed amplification of expertise, turning individual insights into organizational superpowers. This isn’t incremental improvement—it’s a civilizational upgrade, empowering business leaders, managers, data analysts, and project managers to leverage our collective understanding of the world in ways that build resilient, adaptive enterprises.
As we delve into Claude’s ecosystem, we’ll see how this shift simplifies complexity, positions AI as an integral operating layer, and equips you to stay ahead in an AI-driven landscape.
Unlocking Claude’s AI Ecosystem: From Simple Assistant to Strategic Business Enabler
As a business leader, manager, data analyst, or project manager, you’re no doubt feeling the pressure to integrate AI tools that don’t just promise efficiency—they deliver it.
Today’s focus: Anthropic’s Claude, which has evolved far beyond a basic chatbot into a robust ecosystem that’s acting as the “reasoning layer” between human strategy and real-world execution.
The real magic isn’t in the models getting smarter; it’s in how systems like Claude are bridging the gap between ideas and action.
Imagine AI not just suggesting a plan, but governing the safe transition from proposal to implementation—ensuring that every step aligns with your business goals without unintended risks. This shift dissolves the traditional boundaries between AI and your operational environment, turning Claude into something akin to a new operating layer for your workflows.
But does this simplify your daily grind, or are we piecing together a fresh AI-centric “operating system”? As early adopters, organizations that explore this now will gain a competitive edge, transforming Claude from a tool into a core part of how you strategize, execute, and innovate.
In this post, I’ll break down Claude’s key components in straightforward terms, then share five practical use cases with step-by-step workflows you can implement today.
Whether you’re optimizing data pipelines, streamlining project management, or driving data-driven decisions, Claude’s ecosystem—spanning conversational interfaces, coding automation, collaborative workflows, customizable skills, and seamless integrations—offers tangible ways to boost productivity and reduce overhead.
Claude’s Ecosystem: A Quick Primer for Business Professionals
At its core, Claude started as a conversational AI assistant (Claude.ai), much like a virtual consultant for brainstorming and analysis.
But it’s rapidly expanding into a full AI toolkit, where the model serves as the intelligent intermediary between your directives and automated execution. Here’s how it breaks down:
Claude.ai: Your go-to for general tasks like drafting reports, summarizing research, or exploring strategic ideas. It’s versatile, requiring no setup, and excels at handling complex, long-form content—perfect for managers juggling multiple priorities.
Claude Code: This brings AI directly into development workflows. Think of it as an autonomous coding partner that can debug, refactor, or build features across your codebase, all from your terminal or IDE. For project managers overseeing tech teams, it’s a game-changer for accelerating delivery without constant hand-holding.
Claude Cowork: A desktop app focused on non-coding automation, like organizing files, extracting data from PDFs, or automating repetitive tasks across applications. Data analysts will love how it handles bulk operations, freeing up time for high-value insights.
Skills and Model Context Protocol (MCP): Skills are reusable templates that make Claude more reliable for specific tasks, like generating branded reports. MCP is the “glue” that connects Claude to external tools, data sources, and systems, enabling secure, standardized integrations. This is where the architectural shift shines: AI isn’t just advising—it’s executing with governed authority, ensuring actions are reviewed and aligned before committing changes.
The evolution here is profound. Interfaces are expanding faster than use cases can fully mature, moving from isolated chats to layered ecosystems involving code, workflows, skills, and integrations. When AI transitions from answering questions to taking actions, the possibilities multiply—but so does the need for thoughtful governance. For business leaders, this means operationalizing AI in ways that simplify work rather than complicate it, positioning Claude as a strategic asset rather than just another tool.
Now, let’s get practical. Here are five use cases tailored for your roles, each with detailed, actionable steps to start today. These leverage Claude’s free or Pro tiers (sign up at anthropic.com if you haven’t), assuming basic access to tools like Google Drive or your local files.
Use Case 1: Streamlining Market Research and Report Generation (For Business Leaders and Data Analysts)
In a competitive market, quickly synthesizing research into actionable insights is key. Use Claude.ai with Skills to automate summarization and reporting, turning raw data into polished executive briefs.
Step-by-Step Workflow:
Prepare Your Inputs: Gather research materials (e.g., PDFs of industry reports, spreadsheets of market data) and upload them to Claude.ai via the chat interface.
Set Up a Skill: In Claude.ai, create a custom Skill by prompting: “Create a Skill for market research summarization: Extract key trends, competitors, and opportunities from uploaded documents, then format as a 2-page executive summary with bullet points and recommendations.”
Initiate the Task: Prompt Claude: “Using the Market Research Skill, analyze these uploaded files [attach files]. Focus on Q1 2026 trends in AI adoption for retail.”
Review and Refine: Claude generates the summary. Provide feedback like: “Add SWOT analysis and prioritize recommendations by ROI potential.”
Export and Share: Copy the output to your preferred tool (e.g., Google Docs), review for accuracy, and distribute via LinkedIn or email. Track time saved—aim for 50% reduction in manual synthesis.
This workflow positions AI as your reasoning layer, ensuring insights are executable while you maintain oversight.
Use Case 2: Accelerating Software Project Debugging (For Project Managers and Tech Teams)
Project delays from code bugs can derail timelines. Claude Code acts as an on-demand debugger, reviewing codebases and suggesting fixes to keep sprints on track.
Step-by-Step Workflow:
Install and Setup: Download Claude Code from anthropic.com/code (requires Pro subscription). Integrate it into your IDE like VS Code.
Prepare the Codebase: Open your project repository in the terminal or IDE. Ensure it’s a Git repo for version control.
Prompt for Analysis: In the Claude Code interface, enter: “Analyze this codebase for bugs in the user authentication module. Trace potential issues and suggest multi-file fixes.”
Execute and Review: Claude proposes changes—review them carefully (e.g., via diff views) to govern the transition from suggestion to commit. Approve and let it apply edits autonomously.
Test and Iterate: Run tests (Claude can help write them), then prompt: “Fix any failures and optimize for performance.” Commit changes to Git.
Document Outcomes: Note efficiencies in your project management tool (e.g., Jira), such as reduced debugging time by 40%, and share with your team for feedback.
Here, Claude handles the heavy lifting, but you control the “state mutation”—ensuring safe, governed execution.
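That suggestion-to-commit gate doesn’t require anything exotic; plain Git already provides it, whichever tool proposed the edits. Here’s a minimal sketch run in a throwaway repo (file names and commit messages are invented for illustration):

```shell
# Govern the suggestion-to-commit transition with plain Git.
# Demo in a disposable repo so every step is reproducible.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q && git config user.email ai@example.com && git config user.name reviewer
echo 'def login(): ...' > auth.py && git add auth.py && git commit -qm "baseline"

echo '# proposed fix' >> auth.py      # stand-in for an AI-proposed edit

git diff                              # 1. inspect every proposed change
git add auth.py                       # 2. stage only what you approve
                                      #    (interactively: git add -p)
git commit -qm "fix: auth bug"        # 3. approved change becomes history
git revert --no-edit HEAD             # 4. rollback stays one command away
```

`git add -p` is the key move: nothing the assistant wrote reaches history until you have approved each hunk, and `git revert` keeps every approval reversible.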
Use Case 3: Automating Data Extraction and File Organization (For Data Analysts and Managers)
Handling scattered data files eats into analysis time. Claude Cowork simplifies bulk organization and extraction, turning chaos into structured insights.
Step-by-Step Workflow:
Download the App: Get Claude Cowork from anthropic.com/cowork (beta access via Pro plan).
Gather Files: Collect PDFs, spreadsheets, and folders (e.g., quarterly sales data) on your desktop.
Define the Task: Prompt in the app: “Organize these folders: Rename files by date and category, extract revenue data from PDFs, and compile into a single Excel sheet.”
Automate Execution: Claude works across apps (e.g., pulling files from Finder or Explorer) and generates the output. Monitor progress in real time.
Validate and Refine: Review the compiled sheet for accuracy; prompt adjustments like: “Add filters for regional breakdowns.”
Integrate into Workflow: Export to your BI tool (e.g., Tableau) and schedule similar tasks weekly. Measure impact: Faster data prep means more time for strategic analysis.
This use case highlights how Claude’s ecosystem simplifies repetitive work, acting as an operational layer without rebuilding your tech stack.
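For a sense of what that organization step does under the hood, here is a rough pure-Python sketch of rename-by-date-and-category. The category keywords and naming scheme are invented for illustration; Cowork would infer rules like these from your prompt.

```python
# Illustrative rename-by-date-and-category logic. The CATEGORIES
# table and filename convention are assumptions, not Cowork's API.
import re
from pathlib import Path
from tempfile import TemporaryDirectory

CATEGORIES = {"sales": "Sales", "hr": "HR"}  # assumed keyword -> label map

def organized_name(name: str) -> str:
    """Turn 'report_sales_2026-01-15.pdf' into a date-and-category prefix."""
    stem, dot, ext = name.rpartition(".")
    date = re.search(r"\d{4}-\d{2}-\d{2}", stem)
    category = next((v for k, v in CATEGORIES.items() if k in stem.lower()), "Misc")
    base = re.sub(r"[_-]?\d{4}-\d{2}-\d{2}[_-]?", "", stem).strip("_-")
    prefix = date.group(0) if date else "undated"
    return f"{prefix}_{category}_{base}.{ext}" if dot else f"{prefix}_{category}_{base}"

with TemporaryDirectory() as tmp:
    f = Path(tmp) / "report_sales_2026-01-15.pdf"
    f.touch()
    f.rename(f.with_name(organized_name(f.name)))
    print(sorted(p.name for p in Path(tmp).iterdir()))
```

The point of the sketch is the contrast: you could script one convention by hand, but the AI approach lets you restate the convention in plain English whenever it changes.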
Use Case 4: Enhancing Project Planning and Brainstorming (For Project Managers)
Brainstorming ideas without structure leads to inefficiency. Use Claude.ai with MCP integrations to connect to project tools and generate governed plans.
Step-by-Step Workflow:
Connect Tools: In Claude.ai, enable MCP via settings to link with tools like Trello or Google Calendar (use Anthropic’s docs for setup).
Start Brainstorming: Prompt: “Brainstorm a 6-month AI implementation project plan, including milestones, risks, and resource allocation.”
Incorporate Data: Upload existing project docs; Claude pulls in external data via MCP (e.g., calendar availability).
Generate and Govern: Claude outputs a detailed plan. Review proposed actions: “Simulate risks and suggest mitigations.”
Execute Safely: Approve integrations (e.g., auto-create Trello cards), ensuring human oversight before committing.
Track and Iterate: Implement the plan, then follow up in Claude: “Update based on week 1 progress.” Log ROI in reduced planning time.
Early exploration here gives you an edge in operationalizing AI for agile project management.
Use Case 5: Customizing Compliance and Reporting Workflows (For Business Leaders)
Ensuring reports meet regulatory standards is tedious. Leverage Skills and Cowork for automated, branded compliance checks.
Step-by-Step Workflow:
Build a Skill: In Claude.ai, create: “Skill for compliance reporting: Scan documents for GDPR alignment, flag issues, and generate audit-ready summaries.”
Integrate with Cowork: Switch to Claude Cowork; upload compliance files (e.g., data privacy logs).
Run the Automation: Prompt: “Apply Compliance Skill to these files: Extract key data, check for gaps, and format as a branded report.”
Review Transitions: Claude proposes changes—approve before it mutates files (e.g., redacting sensitive info).
Finalize and Distribute: Export the report, add your executive summary, and share securely.
Measure Efficiency: Compare to manual processes; aim for 60% faster turnaround, freeing you for strategic oversight.
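To make the “scan and flag” step concrete, here is a deliberately crude sketch using regular expressions for just two kinds of personal data. A real GDPR review covers far more categories and needs legal oversight; the patterns and function names are illustrative only.

```python
# Crude stand-in for the compliance Skill's scan-and-flag step:
# surface personal data (emails, phone-like numbers) for human
# review BEFORE any redaction is applied.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\b\d{3}[-. ]\d{3}[-. ]\d{4}\b")

def flag_personal_data(text: str) -> list[tuple[str, str]]:
    """Return (kind, match) pairs for a human reviewer to approve."""
    hits = [("email", m) for m in EMAIL.findall(text)]
    hits += [("phone", m) for m in PHONE.findall(text)]
    return hits

def redact(text: str) -> str:
    """Apply redaction only after the flags have been reviewed."""
    return PHONE.sub("[REDACTED]", EMAIL.sub("[REDACTED]", text))

log = "Contact jane.doe@example.com or 416-555-0199 about the audit."
print(flag_personal_data(log))
print(redact(log))
```

Note the two-step shape: flag first, mutate second. That is the same “review transitions” discipline as step 4, expressed in code.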
Final Thoughts: Embrace the AI Operating Layer
Claude’s journey from assistant to ecosystem underscores a broader trend: AI is becoming integral to how we work, not just a sidekick. By positioning it as a reasoning layer with governed execution, you’re not recreating complexity—you’re simplifying it. As business leaders, managers, analysts, and project managers, adopting these tools today means staying ahead in an AI-driven world.
What are your thoughts? Have you tried Claude’s ecosystem? Share in the comments—I’d love to hear how it’s impacting your workflows. Follow AI with Peter for more insights, and connect with me @Peter_Sigurdson on X.
Until next time, let’s keep pushing the boundaries of what’s possible when you shift left and use AI not as a better word processor, but as the operating system for your enterprise.
Here’s a pattern I’ve lived through twice — and I’m watching it happen a third time.
In the mid-nineties, the web was exploding. Developers like me at IBM were already deep in the trenches — building early web architectures, wrestling with browsers that barely agreed on what a table was, inventing best practices as we went. Nobody had written the rulebook yet.
We were the ones writing it. It was glorious. Call it the Camelot of Technology Creation: a brief, electric window when the territory was unmapped, the protocols were still wet, and the people bold enough to show up early got to shape what everyone else would later call “standards.”
By the time colleges began teaching web development in the late nineties, the early movers had already done the formative work — established the frameworks, absorbed the hard lessons, and built the institutional culture that the rest of the industry would spend years catching up to.
But something else caught my attention in those years. As formally trained new hires began joining our teams, I noticed a troubling pattern. These were credentialed people — they had the certificates, they knew the syntax, they could follow the prescribed steps. But they lacked something harder to name: genuine comprehension of what was actually happening beneath the surface.
We called it the Cargo Cult.
The term comes from anthropology: Pacific Islander communities who, after witnessing WWII supply planes land with extraordinary goods, built wooden replicas of airstrips and control towers — faithfully performing the rituals they’d observed, without understanding the underlying mechanisms that actually made the planes come. The ritual looked right. The planes never came.
Our new hires were doing the same thing with technology. They had been handed artifacts — HTML templates, form structures, backend scripts — and taught rituals for assembling them. But ask them a simple, fundamental question: “How exactly does an HTML form field get transmitted to the backend server?” — and you’d get a blank stare. Not just the wrong answer. Something worse: no sense that the question was even worth asking. That layer of understanding had simply never appeared in any of their learning outcomes. It wasn’t on the rubric. It wasn’t on the exam. So to them, it didn’t exist.
This is what formal academic training, at its weakest, produces: people who can navigate a prescribed surface without ever developing the cognitive curiosity to ask what’s underneath it. They can operate the technology. They cannot reason about it. And in a fast-moving industry, that distinction is everything.
It happened again with mobile. Bootcamp kids and indie developers had a five-year head start on app stores, SDKs, and UX patterns before formal curricula caught up.
And now it’s happening with AI.
This isn’t a criticism of colleges. It’s just the honest mechanics of how formal education works. Before a course can be taught, it has to be designed, reviewed, approved, piloted, revised, and approved again. Curriculum committees meet. Accreditation bodies weigh in. The content gets standardized and quality-assured. That process exists for good reasons — but it takes time. Typically a generation of technology time, which in 2026 terms means roughly three to five years of irrelevance baked in before the course even launches.
So while you’re waiting for the college-approved, administratively blessed, “properly structured” version of AI education to arrive — the bias-to-action people are already out there. They’re building the tools you’ll use, writing the conventions you’ll follow, establishing the mental models you’ll absorb. You won’t be the architect of that world. You’ll be a guest in it.
You Don’t Have to Wait
Let’s give credit where it’s due. The five centuries stretching from Gutenberg’s press to the moon landing were an astonishing run. The printing press, the scientific method, the research university, the technical journal, the textbook — these weren’t just inventions. They were a civilization teaching itself how to think at scale, encoding hard-won knowledge into replicable, shareable form. That era gave the modern world most of its “cool stuff,” and we shouldn’t be glib about that inheritance.
But movable type is no longer where it’s at.
Handwritten journaling? Still powerful. There is something irreplaceable about the slow, tactile discipline of putting pen to paper — the way it forces thought to clarify itself, the way it builds a personal record of your inner life. I encourage everyone to keep a journal. That practice belongs in the 21st century as much as it did in the 17th.
When it comes to learning, applying, and developing knowledge in the context of real work, however, something new has arrived that changes the equation entirely: Google NotebookLM.
NotebookLM is not simply a study aid. It’s a cognitive thinking partner — one that you build and shape yourself. Upload your documents, research papers, course materials, process notes, client briefs, technical references — and what you get back is not a generic AI wandering off into the internet. You get a grounded, anchored intelligence that knows your material, stays within your context, and engages with your specific problems.
Think of it as a personal Socratic tutor who has actually read everything you’ve read — and unlike most human colleagues, is genuinely ready to question, synthesize, challenge, and push back at any moment.
That is not a luxury. In 2026, that is table stakes for serious learners.
But here’s where most people stop short: they treat NotebookLM as a one-time learning tool, something you set up for a course and then abandon. That’s like buying a journal, writing three entries, and putting it in a drawer.
The real power emerges when you treat it as a living, evolving workspace — something you return to every day, as you’re doing the job, not just studying for it. As you encounter new business processes, you feed them in. As you develop insights about your customers’ needs and conditions of satisfaction, you record them. As your understanding of a technology deepens, you update the material. NotebookLM grows with you. Over time, it becomes a second brain that holds the continuity of your professional development in a way no textbook ever could.
When connected with tools like Comet Agentic Browser and Perplexity AI, the effect compounds further: you can pull in fresh, verified information from the live web, synthesize it against your existing knowledge base, and evolve your processes in real time — as the technology shifts, as the client requirements change, as the industry moves.
This is why in my training courses, students don’t start with a syllabus. They start by setting up a Google NotebookLM. I share my own notebook with them, giving them direct access to all course materials from day one. But more importantly, I work to instill a deeper idea: this tool should not live on your desktop and gather dust between assignments. It should be open alongside your work, every day — learning with you, recording with you, thinking with you.
The learners who will define the next decade of human-AI work are not sitting passively in lecture halls waiting for institutions to hand them relevance. They’re doing what the early web developers did in 1994: showing up before the rulebook exists, building intuition through direct contact, failing fast, iterating faster.
The tools are here. The question is whether you’re using them — or waiting for someone to put them on a syllabus.
The old school model: Wait for an institution to hand you knowledge. ❌
The 2026 model: Build your own learning environment, with AI as your thinking partner, and start constructing your competence now — before the curriculum committee has even scheduled its first meeting. ✅
The world is being built while you’re reading this. The question is whether you’re watching — or building.
Peter Sigurdson teaches AI literacy, STEM education, and technology fluency. He writes at AI With Peter and has spent thirty years watching industries get disrupted by the people who didn’t wait for permission.
Today’s AI news brings another significant step forward in the frontier-model race: OpenAI has launched GPT-5.4, along with Pro and Thinking variants.
If the recent GPT-5.3 Instant release was about smoother conversations and faster responses, GPT-5.4 appears to push deeper into structured reasoning, workflow automation, and enterprise use cases.
For business teams—especially data analysts, project leads, and marketers—this isn’t just another model number.
It’s another expansion of what AI can do inside real operational workflows.
Let’s unpack what this likely means in practical terms.
What’s New With GPT-5.4
Based on early reports and the trajectory of the GPT-5.x series, this release appears to emphasize three major improvements:
1️⃣ More Reliable Reasoning
The Thinking versions of the model are designed to handle:
Multi-step analysis
Complex decision logic
Structured problem solving
This matters because many business tasks aren’t simple prompts—they require layered thinking across data, context, and constraints.
Example use cases:
| Task | What GPT-5.4 Thinking Can Do |
| --- | --- |
| Market analysis | Compare competitors, trends, and positioning |
| Business forecasting | Evaluate multiple scenarios |
| Product strategy | Model tradeoffs between features and cost |
| Technical architecture | Break down systems and dependencies |
In practice, this moves AI from “answer engine” → “thinking partner.”
2️⃣ Better Context Handling for Projects
Large models continue improving their ability to work with long documents, datasets, and multi-step workflows.
For teams, this means AI can increasingly act like a project assistant that actually understands the full situation.
Examples:
Reviewing project documentation
Summarizing meeting transcripts
Analyzing customer research reports
Extracting insights from large spreadsheets
This is particularly powerful for team leads and analysts juggling multiple information streams.
3️⃣ Faster + More Specialized Variants
The GPT-5.4 family now appears to include multiple operating modes:
| Model | Best Use |
| --- | --- |
| Instant / Fast models | High-volume tasks, chat, quick insights |
| Thinking models | Complex reasoning and analysis |
| Pro models | Maximum performance and deeper tasks |
This allows organizations to choose the right tool for the job, rather than using one expensive model for everything.
What Data Analysts Can Do With GPT-5.4
For analysts, AI is becoming a force multiplier for exploratory analysis.
Example Workflow
Step 1 – Upload your dataset or summary
Upload: sales_data_2025.csv
Step 2 – Ask exploratory questions
Prompt:
Identify the top five factors driving revenue growth in this dataset.
Step 3 – Ask for structured outputs
Prompt:
Build a table summarizing:
• trend direction
• potential causes
• confidence level
Step 4 – Generate visual explanations
Prompt:
Suggest three charts that would best explain these trends to executives.
Result
Instead of spending hours manually exploring the data, analysts can:
rapidly generate hypotheses
build presentation-ready summaries
focus on interpretation rather than mechanics
Think of it as an analytical co-pilot sitting next to you.
How Project Leads Can Use GPT-5.4
Project leaders often drown in documentation.
AI can now act as a project synthesis engine.
Example
Upload:
sprint backlog
meeting notes
roadmap document
Prompt:
Summarize the top project risks and identify tasks likely to miss deadlines.
GPT-5.4 can then produce:
risk summaries
timeline conflicts
suggested mitigation strategies
You can even ask:
Create a one-page executive briefing summarizing the project status.
That’s the kind of communication artifact that normally takes hours.
Marketing Teams: AI as a Strategy Partner
Marketing teams benefit from GPT-5.4 in three major areas:
1️⃣ Campaign Planning
Prompt:
Analyze our campaign results from the last 12 months and identify patterns in high-performing messaging.
AI can:
identify content themes
detect seasonality
recommend new campaign angles
2️⃣ Competitive Intelligence
Prompt:
Compare the positioning of our product against the top five competitors.
Humanity is crossing a threshold that doesn’t show up on a calendar the way elections do, or on a skyline the way new towers do—but it’s real.
We are in a species-wide transition.
We are shifting from an era where power came from controlling metal, labor, and logistics into an era where power increasingly comes from controlling attention, models, data, and the quality of decisions.
And that change doesn’t politely arrive at everyone’s doorstep at the same time.
Some people will meet it early—curious, prepared, excited—like apprentices who showed up before the bell because they wanted more time at the bench.
Others will meet it late—confused, defensive, resentful—like someone stumbling into the shop after the belts are already turning, wondering who moved the world while they weren’t looking.
This is not a motivational poster. This is a practical warning and an invitation:
If you want the thrill of being part of this new era—if you want to build, not just watch—then you have to get ahead of the curve.
You have to take the best practices being discovered right now, while the paint is still drying, and project them forward into your own work: your writing, your business, your classrooms, your tools, your craft.
Because AI is not merely “software.”
It’s a force multiplier for whatever a person already is.
If you’re sloppy, it scales sloppiness.
If you’re thoughtful, it scales thought.
If you’re cynical, it scales manipulation.
If you’re principled, it scales integrity.
That’s why the conversation can’t just be about capabilities. It has to be about character.
We need a new class of builders: people who can hold both truths at once—the power is real, and the responsibility is heavier than most people want to admit.
People who won’t drift into techno-authoritarian fantasies, but also won’t cower in fear. People who choose the harder path: disciplined craft, measured judgment, incremental improvement, and a stubborn commitment to human dignity.
Call it the new archetype.
Not the tyrant. Not the passive consumer.
The Technomage.
The advocate who reminds everyone that the point of powerful tools is not to replace human agency—but to amplify it. To cultivate internal strength: attention, discernment, courage, the ability to do hard things, and the willingness to do them right.
And if that sounds dramatic—good.
Every true shift in civilization is dramatic. It just rarely announces itself with trumpets.
So we’re going to make it concrete. We’re going to ground it in a story—because stories are how civilizations carry meaning without dropping it.
We’ll begin with a mechanic in 1859 Pennsylvania, hands blackened by honest work, listening for a wobble in a line shaft. He didn’t have GPUs. He didn’t have PyTorch. But he had the thing that matters most in every era:
craft.
Because the future is not built by hype.
It’s built by people who notice the quarter-turn on a screw— and tighten it before the whole machine starts to shake.
“A Pennsylvania Mechanic’s Day” (1859)
A first-person vignette of how our world was built.
They call me a mechanic, which sounds grander than it is. It doesn’t mean I mend carriages—though I can. It means I live where iron meets intention.
I wake before sun-up in 1859 Pennsylvania, because the shop wakes before the town does. The air is still cold enough to bite, and the stove is my first machine of the day: coax the coals, nurse the draft, listen for that first friendly crackle.
By the time the light changes from charcoal to pewter, I’m walking down the lane with my dinner pail and my tools—my proper tools—wrapped in a cloth like a priest carrying relics. There’s comfort in the weight of them. A man can be uncertain about the world, but a sharp file and a true square don’t argue.
The shop smells like yesterday: oil, soot, hot iron, and that sweet tang of fresh-cut wood from the pattern bench. I nod to the boys already at the bellows and to Mr. Haines the foreman, whose moustache looks like it was pressed in a vise overnight.
“Morning, Elias,” he says. “We’ve got a devil of a wobble on the line shaft. And the new feed mechanism for the planer—if we don’t have it by week’s end, we’ll be carving parts by candle.”
He says it like it’s weather. But I hear the real meaning: if the machines don’t behave, men suffer. If the machines don’t behave, the orders don’t ship. If the orders don’t ship, the pay doesn’t come.
So I start where any decent mechanic starts.
1) Listen before you touch
A machine tells the truth if you let it. The line shaft is turning overhead, belts flapping like lazy flags. There’s a subtle rhythm to the wobble—tap…tap…tap—like a shoe with a loose heel.
I don’t reach for a wrench first. I reach for my eyes.
The bearing block looks sound. The shaft looks straight—until I sight it against the far wall. Then I see it: a faint dance, barely a hair’s breadth, but enough to turn a peaceful shop into a factory of mistakes.
“Shut her down,” I call.
The belts slacken. The shop quiets in a way that feels unnatural, like a room holding its breath.
2) Find the smallest wrong thing
Big failures begin as small ones that everyone tolerates.
I chalk a mark at the high point of the shaft and roll it by hand. The mark rises and falls—consistent. That’s good news. Consistent means measurable. Measurable means fixable.
I check the set screws. One gives the slightest turn.
That’s all it was. A screw backed off by a quarter-turn, and the whole shop was paying the tax of vibration—tiny inefficiencies that multiply like rabbits.
I snug it down, then go further: I cut a small shim from scrap brass, fit it into the bearing seat, and reassemble everything with the patience of a man building a pocket watch out of farm tools.
We start her up again.
The wobble is gone.
No cheering. No applause. That’s not how a shop works. But I see Mr. Haines’ shoulders relax a fraction, like someone quietly setting down a heavy crate.
I write it in the log: “Set screw loosened. Shim fitted. Shaft true.” Nothing heroic. Just the truth, trapped in ink so it can’t escape tomorrow.
By mid-morning, the real task begins: the new feed mechanism for the planer. It’s meant to move the workpiece forward in smooth increments, like a steady hand pushing a loaf under a knife. The owner wants it faster. The foreman wants it reliable. The men want it to stop chewing boards like a hungry dog.
I go to my bench and lay out the parts: gears, a ratchet, a pawl that looks like a small metal tooth, and a crank handle that still has casting sand stuck to it like grit under a fingernail.
Here is where “mechanic” becomes less like repair and more like translation.
You translate an idea into steel.
3) Measure twice, but also imagine twice
A drawing is polite fiction. The real world is rude. Metal expands. Belts slip. Wood swells. Men push too hard. Grease gathers like gossip.
I take the gear blank and test it on the arbor. It fits, but it fits too easily. That’s trouble. Too easy means it will chatter. Chatter means uneven feed. Uneven feed means ruined pieces and bad language.
So I do what mechanics always do: I plan for reality, not paper.
I lap the bore just enough to make it snug. Not forced—snug. You want a fit that feels like a handshake, not a wrestling match.
Then I run my thumb along the teeth.
One tooth has a burr. The burr is tiny. That’s how injuries start. Tiny.
I stone it down until it’s clean.
At noon we eat on overturned crates. The boys talk about a baseball match in town and about a cousin headed west where the land is wide and the future is rumor. I listen, but my mind keeps returning to the feed mechanism. It’s not obsession. It’s stewardship.
Machines don’t forgive lazy thinking.
After lunch, we assemble the mechanism onto the planer. The first test is always tense: the moment you learn whether your day was craft—or just motion.
We engage the feed. The board advances.
At first it’s beautiful. Smooth as a sermon.
Then—clack. The pawl slips.
The board lurches forward like a man missing a step on stairs. The cutter bites too deep, and the surface tears.
A groan rises from the bench line. This is the sound of wasted time.
Mr. Haines looks at me.
I don’t defend myself. I don’t blame the casting or the wood or the weather.
4) Treat every failure like a message
I kneel beside the ratchet and watch the pawl as it engages. It’s not slipping because it’s weak. It’s slipping because the angle is wrong. The tooth face is too steep—like trying to hold a wagon with a smooth rock.
So I take the pawl back to my bench and file the face—slow, careful, consistent. You can’t bully a solution out of metal. You persuade it.
I change the spring tension by one notch—just one—and put it back.
Second test.
The board advances.
No clack.
Third test.
Still clean.
I run my hand over the planed surface. It’s the kind of flatness that makes you trust the future.
Mr. Haines nods once, the way a man acknowledges a job done right without wasting words on it.
Late afternoon brings the smaller troubles—the ones that never make their way into speeches about progress, but without which progress is a fairytale.
A belt that squeals because the pulley is glazed. I rough it with sand and oil it proper.
A drill press table that won’t lock because the threads are fouled. I chase them clean.
A vise jaw that’s been struck too many times by an impatient hammer. I true it with file and shame.
Each problem is humble. Each problem is also a lesson:
If you ignore the small wrong things, the world becomes expensive.
When the whistle finally blows, the shop exhales. Men stretch their backs and rub their hands, and the machines settle into silence like tired horses.
I wrap my tools again.
Walking home, I pass a row of new brick houses going up by the rail line. I watch a locomotive roll through, dragging freight like a giant pulling a chain. I think about how many hands it took to make that motion reliable: the men who laid track, the men who forged wheels, the men who shaped valves, the men who turned shafts until they were true.
A person might look at a factory and see smoke and noise.
I see something else.
I see a thousand small problems solved, stacked neatly on top of each other until they become a civilization.
And I think—quietly, because I’m not a poet— that the modern world is not built by speeches.
It’s built by mechanics who notice the quarter-turn on a screw.
The Pennsylvania mechanic archetype isn’t about nostalgia. It’s about a mindset:
Listen first.
Chase the smallest error.
Design for reality, not theory.
Treat failure as data.
Respect the quiet compounding of tiny fixes.
That’s how the industrial world was built—one shim, one burr, one corrected angle at a time.
“The Forge Has Wi-Fi Now”
The new shop doesn’t smell like coal or cutting oil.
It smells like burnt espresso, whiteboard markers, and the faint ozone of laptops that have lived too many long nights.
In metal you hear wobble. In data you see drift.
But if you listen—really listen—you can still hear the old line shaft.
Not overhead on belts and pulleys.
It’s in the hum of GPUs in the corner rack, the clicking of mechanical keyboards, the soft whirr of a fan that’s one commit away from giving up.
They call themselves a team, but they move like a crew—a small band of young builders, men and women with that same bright, unreasonable belief that problems exist to be handled.
Their Slack channel is named #the-forge.
Not ironically.
Not even a little.
1) The New Iron Works
Their company is a mid-sized logistics business—real warehouses, real trucks, real customers, real penalties when a late delivery becomes a broken promise.
The old problem is familiar: Too many “small” failures.
Orders get routed wrong because an address was typed weirdly.
Customer emails arrive angry and unread because the inbox is a landfill.
The warehouse system predicts inventory “fine”… until it doesn’t, and then everyone scrambles with clipboards like it’s 1997.
The executive team wants innovation. The builders want something more sacred:
Less chaos. More truth.
They don’t start with moonshots. They start with the wobble.
Because wobble is where cost hides.
2) The Mechanic’s Rule: Listen Before You Touch
Mina—twenty-six, hoodie, hair tied back like she’s about to do surgery—stands at the front with a dashboard on the projector.
She doesn’t say “AI.” Not yet.
She says: “Where are we bleeding time?”
And they do something that sounds boring and is actually revolutionary:
They read logs. They follow error trails. They watch a shipment fail like it’s a lathe cutting the wrong groove.
Theo—ex-gamer, now an ML engineer—calls it instrumentation. Jules—product lead—calls it truth serum. Nadia—ops—calls it finally.
They don’t romanticize it. They measure it.
Because the old mechanic knew: If the shaft wobbles, everything downstream wobbles too.
3) First Fix: The “Burr on the Tooth”
Their first win isn’t a chatbot. It isn’t a flashy demo.
It’s a tiny, humiliating issue:
Address normalization.
The warehouse system treats these as different places:
“St. Clair Ave W”
“Saint Clair Avenue West”
“St Clair Av W.”
“St-Clair Ave West”
That tiny burr creates returns, delays, and overtime shifts.
So they build a small AI service that reads incoming orders, standardizes addresses, flags uncertain ones, and asks for confirmation—quietly, automatically, before the package ever hits the floor.
They don’t call it “AI disruption.”
They call it taking the burr off the gear.
And the next day, for the first time in months, the warehouse lead goes home on time.
No fireworks.
Just a screw tightened a quarter-turn.
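For the curious, the core of such a service can be surprisingly small. This sketch collapses the four variants above to one canonical key; a production version would lean on a postal-address library or a verification API, and the abbreviation table here is invented.

```python
# Minimal address-normalization sketch: lowercase, strip punctuation,
# expand abbreviations. ABBREV is an assumed table, not a standard.
import re

ABBREV = {
    "st": "saint", "ave": "avenue", "av": "avenue", "w": "west",
}

def normalize(address: str) -> str:
    words = re.findall(r"[a-z0-9]+", address.lower())
    out = []
    for i, w in enumerate(words):
        # crude heuristic: leading "st" is "Saint", elsewhere "Street"
        if w == "st" and i > 0:
            out.append("street")
        else:
            out.append(ABBREV.get(w, w))
    return " ".join(out)

variants = [
    "St. Clair Ave W",
    "Saint Clair Avenue West",
    "St Clair Av W.",
    "St-Clair Ave West",
]
print({normalize(v) for v in variants})  # collapses to one key
```

The AI service in the story does the same thing with far better coverage, plus the crucial extra step: flagging the addresses it is *not* sure about instead of guessing.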
4) The New Lathe: Data + PyTorch
After that, the team gets hungry.
They treat the business like a machine shop—every system is a mechanism, every workflow a linkage, every customer complaint a vibration telling you something is misaligned.
They don’t worship models.
They worship results.
Their second project is inventory prediction—but not the grand “forecast the future” nonsense that dies in a slide deck.
They aim smaller:
“Can we reduce stockouts on our top 200 SKUs by 10% in eight weeks?”
That’s the mechanic’s question. Not “Can we build a perfect machine?” But “Can we make this one stop chewing boards?”
Theo opens PyTorch like an old craftsman opening a tool chest.
He doesn’t start training until the data is sane.
missing values cleaned
anomalies labeled
definitions agreed upon
metrics chosen before anyone gets tempted to cheat
They argue about evaluation like machinists arguing tolerances.
Jules insists: “No vanity metrics.” Mina adds: “If ops can’t use it, it doesn’t exist.” Nadia says: “If it breaks at 2 a.m., I’m not carrying your poetry.”
They build a model that’s… fine. Not magical.
But it’s stable.
And stability is the beginning of power.
5) The “Do Right” Doctrine
The vibe in the Forge isn’t “move fast and break things.”
It’s older and sharper:
Move carefully and fix things.
They put guardrails on everything:
Human-in-the-loop approvals where risk is real
Audit trails so you can explain what happened
Rollbacks because pride is not a strategy
Monitoring because reality changes when no one is looking
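One way to picture those guardrails working together: a tiny approval gate where low-risk actions auto-apply, risky ones wait for a human, and every decision lands in an audit trail. The 0.3 threshold and field names are invented for the sketch.

```python
# Sketch of a human-in-the-loop gate with an audit trail.
# Risk scores and the auto-approve threshold are assumptions.
import datetime as dt

AUDIT: list[dict] = []
PENDING: list[dict] = []

def propose(action: str, risk: float, apply_fn) -> str:
    """Apply low-risk actions; queue high-risk ones for a human."""
    entry = {
        "action": action,
        "risk": risk,
        "time": dt.datetime.now(dt.timezone.utc).isoformat(),
    }
    if risk < 0.3:                    # assumed auto-approve threshold
        apply_fn()
        entry["status"] = "auto-applied"
    else:
        PENDING.append(entry)
        entry["status"] = "awaiting human approval"
    AUDIT.append(entry)               # every decision stays explainable
    return entry["status"]

print(propose("normalize address", 0.05, lambda: None))
print(propose("issue refund",      0.80, lambda: None))
```

Rollback and monitoring would hang off the same audit log: if you can replay every decision, you can reverse it, and you can watch its distribution drift.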
They treat customer trust like machinists treat alignment:
One careless bump and your workpiece is ruined.
And here’s the part that surprises the execs:
The builders don’t become arrogant.
They become more humble, because AI amplifies consequences.
When you can automate decisions at scale, you can also automate mistakes at scale.
So they slow down in the places that matter.
They build “boring” safety features like craftsmen.
6) The Big Moment That Isn’t a Big Moment
Then comes the project everyone wants: customer support.
Not because they want to replace people.
But because the support staff is drowning in repetitive questions while the hard cases pile up.
They create an internal AI assistant—not customer-facing at first—that drafts replies, summarizes long threads, and pulls relevant policy snippets and order details into one clean view.
But they do it like mechanics:
Start with one support rep
One category of tickets
One measurable goal: “reduce handling time by 15%”
Weekly review
Patch, refine, repeat
Within a month, the support team stops dreading Mondays.
Not because work vanished.
Because the machine stopped fighting them.
7) The Fire of the Forge
On a late Friday, the Forge stays a little later.
Not because they’re forced.
Because they’re building something.
Someone reads a passage from Hackers: Heroes of the Computer Revolution—the part where the early builders talk like they’re describing a calling, not a career.
Mina leans back and says, half-joking:
“Data has replaced metal.”
Theo grins without looking up from his screen:
“And PyTorch replaced the lathe.”
Nadia points at the monitoring dashboard:
“Yeah, and the forge still burns. It’s just… distributed now.”
They laugh, but it’s true.
The obsession is the same. The pride in craft is the same. The respect for incremental progress is the same.
Only the material has changed.
The 1859 mechanic filed burrs off gears so machines could run true.
These builders file burrs off datasets, edge cases, definitions, and workflows so businesses can run true.
And the future still arrives the same way it always has:
Not in one grand invention.
But in a thousand small improvements done correctly— each one a careful strike of the hammer.
Each one a tighter fit.
Each one a world made slightly more reliable.
The “tech bro” era is over.
The mechanic era is back—only now the workpiece is data, and the tools are models.
Build like a mechanic:
listen first
fix the small wobble
measure what matters
ship safely
improve relentlessly
Because the forge didn’t go away.
It went pervasive—and got Wi-Fi.
End of Day Reflection on those who built before
The office had a way of changing shape after midnight.
In daylight it was all sharp edges and clean optimism—glass walls, neatly aligned monitors, slogans pretending to be philosophy.
But now the lights were dimmed and the world narrowed to a few islands of glow: a terminal window, a log stream, a dashboard breathing in slow green pulses. Somewhere in the building’s bones, a vent clicked like a tired metronome.
Mina sat very still, hands resting on the laptop as if it were warm.
The last test suite had finally gone quiet.
The system didn’t feel finished—nothing ever did—but it felt true enough to carry weight, and that was the only kind of finished she trusted.
Across the room, Theo was asleep with his head on his hoodie, one arm still bent like he was holding an invisible mouse. Nadia had left an hour earlier, but not before she taped a note to Mina’s monitor:
Don’t you dare ship without a rollback plan. —N
Jules’ mug sat abandoned near the whiteboard, its last inch of coffee cooled into something that looked like antique varnish.
Mina stood, stretched, and walked to the window.
The city was quiet in that rare way it only becomes when most people are finally dreaming. Snow traced the edges of the streetlights. A delivery truck moved through the intersection below like a patient animal, headlights steady, route clean.
She watched it go and felt something loosen in her chest.
Not triumph.
Not relief.
Something older.
Something like respect.
Because she knew—without anyone needing to tell her—that this night, this code, this small improvement that would shave minutes off chaos and restore a little dignity to tired people—none of it came out of nowhere.
It came from a lineage.
From a chain of hands and minds passing forward a single stubborn idea:
Make the machine run true.
She thought of the ones who came first.
Not the famous names the world liked to put on posters, but the real progenitors—the ones history kept in the margins because their work didn’t sparkle. It simply held.
She pictured a mechanic in Pennsylvania in 1859, sleeves rolled, eyes narrowed—not seeking glory, only accuracy—feeling for the slightest wobble in a shaft because a wobble was a lie that spread through everything downstream.
She pictured the machinists and millwrights, the pattern-makers and tool grinders, the people who taught their apprentices not just how to cut and fit, but how to notice.
Notice the burr. Notice the misalignment. Notice the quarter-turn that would become a catastrophe if ignored.
She thought of the shop floor ethic: not “move fast,” but do it right. The exactness that looked like stubbornness to anyone who didn’t understand what a small error becomes when multiplied by time.
And then—further along the chain—she thought of the first computer builders, working in rooms that smelled of solder and nervous hope, convinced that thought itself could be shaped into machinery. She thought of the early hackers, not as caricatures, but as craftsmen: obsessed, playful, relentless, building tools because they couldn’t stand not to.
She didn’t romanticize them.
She honored them.
Because she recognized the same fire.
The forge had changed materials, that was all.
Metal had become data. Belts and pulleys had become pipelines. The lathe had become PyTorch, and the new cutting edge was a model learning the shape of a messy world.
But the discipline—the temper of the work—was unchanged.
Mina looked back at the desks, the sleeping teammate, the note about rollback plans, the whiteboard crowded with arrows and crossed-out assumptions.
And she realized something that made her smile, small and private.
They weren’t inventing a new kind of craft.
They were returning to it.
They were just the latest hands in the line.
She rested her palm on the cool glass of the window, as if she could feel that whole lineage through it, like a vibration passing through a long beam.
And in her mind—quietly, without ceremony—she offered thanks.
To the mechanics who insisted that precision mattered.
To the apprentices who learned patience before power.
To the builders who fixed problems no one praised because the reward was simply that the world didn’t break.
To the stubborn minds who kept improving the toolchain so that tonight, a young woman could shape invisible machinery that would make real people’s lives a little less jagged tomorrow.
She didn’t know their names.
But she felt the weight of their work in every line she shipped.
A final green checkmark pulsed on the screen.
Mina turned off her monitor.
In the sudden darkness, the office felt less like a startup and more like what it truly was:
A workshop.
A place where the future was assembled, one careful step at a time.
And as she gathered her coat and headed for the door, she carried with her the same vow the old mechanics had carried—wordless, practical, absolute:
Keep it true. Keep it human. Keep building.
The future will not remember most of what we argued about online.
It will not preserve our hot takes, our trend cycles, our anxious noise.
It will remember what we built—the quiet systems that reduced suffering, the tools that returned time to tired people, the disciplines that kept power from becoming cruelty. And if our era earns a name, it won’t be because our models were large.
It will be because enough of us chose the mechanic’s ethic in a world that begged for shortcuts: listen first, do it right, leave the machine truer than you found it.
🚀 Your Team Is Already Falling Behind — Unless You’re Using These Google Workspace AI Tools Today
By Peter Sigurdson | AI Systems Trainer & Digital Transformation Consultant
Let me ask you something uncomfortable.
Right now, somewhere in your industry — a competitor’s project manager just drafted a full client proposal in 4 minutes.
A sales rep on another team generated a polished video walkthrough for a prospect without ever touching a camera.
A data analyst summarized three months of pipeline data into a boardroom-ready deck while you were still finding the file.
They didn’t work harder. They worked smarter — and they’re using tools that are almost certainly already sitting inside your team’s Google Workspace account.
The question isn’t whether AI is changing how work gets done. It already has. The question is: are you leading that change — or discovering it too late?
🗂️ What Is Google Workspace (And What’s Inside It)?
Google Workspace is Google’s integrated cloud productivity suite, used by over 10 million businesses worldwide.
It includes Gmail, Drive, Docs, Sheets, Slides, Meet, Calendar, Contacts, Google Vids, NotebookLM, and the Gemini AI assistant — all connected under one login.
The game-changer: as of early 2025, Gemini AI is now included in all Business and Enterprise plans at no extra cost — no add-on required.
-> That means your team likely already has access to one of the most powerful AI productivity ecosystems on the planet.
You access it simply: open Gmail, look for the “Discover Google Workspace” panel on the right sidebar, and click through to Recommendations. Everything is surfaced for you — your apps, your AI tools, your next steps.
🔥 5 Power Use Cases That Will 10x Your Team’s Output
These aren’t theoretical. These are capabilities available today, inside tools your organization is likely already paying for.
Use Case #1 — For Team Leaders: AI-Powered Meeting Intelligence in Google Meet + Docs
The old way: Sit through a 60-minute meeting, take manual notes, email a follow-up, watch action items disappear.
The new way: Gemini automatically takes meeting notes in Google Meet, generates summaries, and populates action items directly into a shared Google Doc — in real time. Team leads using this report recovering 5–8 hours per week in meeting overhead.
The leaders who reclaim their calendar first will be the ones building the next quarter’s competitive advantage.
Use Case #2 — For Data Analysts: Instant Insights from Google Sheets with Gemini
The old way: Hours of formula-building, pivot tables, manual commentary.
The new way: Type a plain-language prompt into Sheets — “Summarize our Q1 sales performance by region and flag anomalies” — and Gemini generates analysis, charts, and narrative summaries in seconds. It can then cross-reference a Docs report or push findings directly into a Slides deck.
One prompt. One minute. Boardroom-ready.
Use Case #3 — For Project Managers: Deep Research + NotebookLM for Smarter Decision-Making
The old way: Hours of browser tabs, scattered research, synthesizing information manually.
The new way: NotebookLM (now included with Workspace Business plans) lets you upload your project briefs, RFPs, market reports, and meeting transcripts — then ask questions across all of them simultaneously.
Paired with Gemini Deep Research powered by Gemini 2.5 Pro, you can generate multi-page strategic research reports in minutes.
Project managers using this have collapsed pre-planning research cycles from days to hours.
Use Case #4 — For Business Development: AI Avatars & Google Vids for Scalable Client Outreach
The old way: Record yourself, edit video, re-record when you stumble, spend half a day on one 3-minute walkthrough.
The new way: Google Vids, powered by Veo 3.1, lets your team generate professional-grade video content using AI avatars — realistic, expressive, with smooth lip-syncing — without a camera, studio, or production schedule.
-> Write a script, pick an avatar, and your video is done.
Use it for prospect walkthroughs, onboarding videos, partnership pitches, and customer support — at scale. Your business development team can produce in a day what used to take a week.
While you’re scheduling a video shoot, your competitor just sent their fifth personalized video proposal this morning.
Use Case #5 — For Client-Facing Roles: Gemini in Gmail for Personalized, High-Converting Communication
The old way: Stare at a blank reply box, craft emails from scratch, lose track of threads.
The new way: Gemini in Gmail now learns your writing style and generates personalized smart replies, follow-up sequences, and even appointment scheduling widgets — directly in your inbox. It can bulk-archive, prioritize threads, and surface what actually needs your attention today.
Sales and client success teams using Gemini in Gmail report cutting email management time by over 60% — without sacrificing the personal touch that closes deals.
📍 How to Access All of This Right Now
You don’t need a new subscription or a new vendor.
Here’s how to unlock what’s already in your account:
Open Gmail → look for the right-side panel → click “Discover Google Workspace”
Go to Recommendations — Google surfaces 35+ personalized tips based on your role and usage
Navigate to Your Apps to see every tool available in your plan
For Gemini AI: look for the ✨ sparkle icon inside any Google app — Docs, Sheets, Slides, Meet, or Gmail
Most organizations are sitting on a goldmine of AI productivity capability and using maybe 10% of it.
The gap between teams that are actively leveraging these tools and those that aren’t is widening — and it is widening fast.
The managers, analysts, and business development professionals who master this stack now will not just save time.
They will redefine what “done” looks like for their entire organization — and their peers will be scrambling to catch up.
The tools are there.
The access is there.
What’s needed is a guide.
If your team is ready to move from awareness to actual implementation — building workflows, upskilling your people, and deploying Google Workspace AI in a way that actually sticks — let’s talk.
I consult with business teams to identify the highest-leverage AI opportunities in their existing tool stack and turn them into measurable productivity wins.
📩 Drop a comment, send a DM, or connect with me here on LinkedIn.
Because the most expensive thing your organization can do right now is wait.
Here we learn how to build a hardware lab project that uses modern technology to reconstruct the IBM 1130, a groundbreaking 1960s computing system. The text explores the machine’s historical importance as an affordable, desk-sized solution for technical and business sectors. It outlines a structured educational curriculum where students use Raspberry Pi and Arduino to emulate the 1130’s core functions. Through these staged labs, learners recreate bootstrapping processes, minimal operating systems, and virtual CPU architectures. The project bridges computing history with hands-on engineering by mirroring vintage input/output methods and batch-processing workflows. Ultimately, the materials provide a comprehensive guide for building a functional replica of a mid-century computer.
By studying and working through this Lab Book, you will start to understand the underlying mechanisms of how today’s computer systems are put together, and what needs each component addresses.
When IBM announced the IBM 1130 in 1965, it was doing something radical for its time: selling a desk‑sized computer that a small engineering firm, school, or department could actually afford. The 1130 was the first IBM computer to rent for under $1,000 per month and could be purchased outright for a little over $30,000, which opened computing to organizations that previously relied on punched‑card accounting machines or occasional access to a distant mainframe.
Socially and economically, the mid‑1960s were a moment when engineering, science, and business were all becoming more computationally intensive: civil engineers wanted stress analysis, chemists needed numerical methods, and small businesses wanted payroll and inventory automation.
The 1130 was pitched directly at this space—price‑sensitive but compute‑hungry technical and commercial users who needed a machine in their own building, under their own control.
Key wins and use cases
IBM packaged the 1130 not just as hardware, but as a ready‑to‑run solution:
Business applications: route accounting for dairies and bakeries, small‑business payroll, project planning and control.
Education and research: universities and school boards used it as a teaching and research machine, often giving students direct hands‑on access—something rare before microcomputers.
By 1967 IBM had shipped more than 10,000 systems, along with hundreds of prewritten application programs and commercial subroutine packages that schools and businesses could adapt to their needs.
2. What a practical 1960s business computer needed
The 1130 shows that a “real” computer is not just a CPU. To be useful in a business, it must cover several engineering needs:
A way to boot from “dead iron.”
An operating system to supervise work.
Persistent storage for system and data.
Input/output devices for people and the outside world.
A way to create, store, and run custom software.
We’ll look at how the 1130 satisfied each of these, then mirror the same stack using Raspberry Pi and Arduino so students can build a minimal “1130‑like” system in the lab.
3. Bootstrapping: from blank memory to a running system
How the IBM 1130 did it
On power‑up, the 1130 had no ROM BIOS like later microcomputers; core memory was empty, and the only intelligence was in the CPU’s hardwired micro‑operations and the front‑panel controls. The operator loaded a single bootstrap card in the reader and pressed Program Load, which caused one card’s worth of binary to be read directly into low memory and then executed.
That tiny program’s job was to read the first sectors of the system disk into memory, bringing in a slightly larger loader, which in turn pulled in the resident parts of the Disk Monitor System (DMS/2). Installing DMS on a blank disk (“cold start”) was a ritual of feeding special card decks that built the system image onto the pack before normal work could begin.
Teaching take‑aways
Bootstraps are tiny because they must fit in minimal memory and be read in a single operation.
Multi‑stage boot (card → disk sector 0 → monitor) is a necessity when you have little core and no firmware.
The operator—and the card reader—are literally in the boot path.
How we’ll reconstruct this with Raspberry Pi
On Raspberry Pi, we can emulate the same idea in a modern way:
The Pi’s GPU firmware loads a file like kernel8.img from a FAT partition on the SD card and jumps into it; that file will be our bare‑metal kernel.
Our kernel will:
Initialize the stack and zero memory.
Initialize UART so we have a text console.
Print a banner: “1130‑Pi Monitor – Ready.”
Drop into a simple command loop that waits for typed commands.
From the student’s perspective, they flash an SD card, power on the Pi, and instead of Linux they see a minimalist monitor prompt—just like an 1130 operator seeing the console typewriter come alive.
Stage 1 lab snapshot
Learning objectives:
Understand that “boot” is just “load code into memory and jump to it.”
See the difference between firmware‑provided boot and our own kernel.
What students do:
Flash a provided kernel8.img.
Power on and see the monitor prompt.
Modify the banner string, rebuild, and re‑flash to prove they control the earliest software.
4. The operating system as job supervisor
How the IBM 1130 did it
The 1130’s main OS, Disk Monitor System Version 2 (DMS/2), was a small, single‑task, batch‑oriented system. It assumed at least 4K words of core and one integrated 2310 disk drive, but it kept only a Skeleton Supervisor resident; compilers, linkers, utilities, and many device handlers were loaded as needed and discarded afterward.
DMS/2’s job was to:
Read job control cards and decide which program to load next.
Manage disk areas for system, user, and temporary data.
Provide a standard way to compile, link, and run user programs.
It looked more like a “traffic cop” than a modern multitasking OS.
How we’ll reconstruct this with Raspberry Pi
Our Pi kernel becomes a tiny 1960s‑style monitor:
Resident core (“Skeleton Supervisor”):
Command loop over serial: simple commands like LOAD, RUN, DUMP, POKE, LIST.
A fixed memory map for the emulated CPU: code region, data region, I/O region.
Transient components:
A loader routine that can read a binary program file from SD into memory.
Optionally, a simple interpreter for a “job control script” that chains multiple programs.
We’ll implement a software CPU—a minimal 8‑ or 16‑bit architecture—inside the Pi monitor. The monitor will load a user program into a memory array and then step it instruction by instruction, just as the 1130 ran machine instructions from core.
Stage 2 lab snapshot
Learning objectives:
Distinguish the always‑resident OS core from loadable modules.
Understand how a monitor loop parses commands and calls internal routines.
What students see:
A text prompt such as MON> on their terminal.
Commands like DUMP 0000 003F printing memory contents.
POKE 0010 42 writing a value into emulated RAM.
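As a concrete reference point, the command loop behind that MON> prompt can be sketched in ordinary Python and exercised on a laptop before any bare-metal work. The DUMP and POKE command names come from the text; the 256-byte memory size, hex argument parsing, and output layout here are assumptions.

```python
# Hypothetical sketch of the Stage 2 monitor command loop, simulated in
# Python.  The real monitor runs bare-metal on the Pi; this version lets
# students test the command grammar on a laptop first.

MEM_SIZE = 256                       # assumed size; the text suggests 256 B to 1 KB
memory = bytearray(MEM_SIZE)

def dump(start: int, end: int) -> str:
    """Hex-dump memory from start to end inclusive, 8 bytes per row."""
    rows = []
    for base in range(start, end + 1, 8):
        chunk = memory[base:min(base + 8, end + 1)]
        rows.append(f"{base:04X}: " + " ".join(f"{b:02X}" for b in chunk))
    return "\n".join(rows)

def poke(addr: int, value: int) -> None:
    """Write one byte into emulated RAM."""
    memory[addr] = value & 0xFF

def execute(line: str) -> str:
    """Handle one MON> command: DUMP start end, or POKE addr value (hex)."""
    parts = line.split()
    if not parts:
        return ""
    cmd, args = parts[0].upper(), [int(a, 16) for a in parts[1:]]
    if cmd == "DUMP" and len(args) == 2:
        return dump(args[0], args[1])
    if cmd == "POKE" and len(args) == 2:
        poke(args[0], args[1])
        return "OK"
    return f"?{cmd}"
```

Wrapping execute in a loop that reads from the UART (or from input() in simulation) gives the full monitor feel.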
5. Persistent storage and job “decks”
How the IBM 1130 did it
The standard 1130 disk subsystem used removable 2315 cartridges with about 512,000 16‑bit words (roughly 1 MB) organized into fixed‑size sectors. DMS divided the disk into system, user, and work areas and used simple allocation schemes rather than complex hierarchical file systems.
Jobs arrived as card decks. A deck mixed:
Control cards (// records) telling the monitor what to do: which compiler to use, what data sets to read, where to send output.
Program source cards.
Optional data cards.
The disk stored the OS, compilers, libraries, and users’ object programs; decks were the scripts that orchestrated compilation and execution.
How we’ll reconstruct this with Raspberry Pi
We’ll keep the feel but modernize the medium:
Use the Pi’s SD card as “disk.”
Define a simple binary program format:
2‑byte length.
2‑byte starting address.
N bytes of machine code for our toy CPU.
Provide a tiny “assembler” script (Python on a laptop) that turns text instructions into this binary format.
Place these .bin files in a known directory on the SD card.
In the monitor, students will type:
LOAD ADD2 – monitor opens ADD2.BIN, loads it into the emulated RAM, and remembers its entry address.
RUN – monitor starts the emulated CPU at that address.
You can go further and define a job script file (a text file with commands like LOAD ADD2; RUN; LOAD AVG; RUN) to echo the idea of control cards driving a batch of work.
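Both LOAD and a job script depend on the binary format defined above (2-byte length, 2-byte starting address, N code bytes). Here is a sketch of packing and unpacking it in Python; big-endian byte order is an assumption, since the text does not fix one.

```python
import struct

# Sketch of the lab's toy .BIN program image format:
#   2-byte length, 2-byte starting address, then N bytes of machine code.
# Big-endian ">HH" is an assumption; pick one byte order and keep it
# consistent between the assembler and the monitor.

def pack_program(start_addr: int, code: bytes) -> bytes:
    """Build a .BIN image the monitor's LOAD command could read."""
    return struct.pack(">HH", len(code), start_addr) + code

def unpack_program(blob: bytes) -> tuple[int, bytes]:
    """Return (start_addr, code) from a .BIN image, validating length."""
    length, start = struct.unpack(">HH", blob[:4])
    code = blob[4:4 + length]
    if len(code) != length:
        raise ValueError("truncated program image")
    return start, code
```

The laptop-side assembler writes files with pack_program; the monitor’s LOAD reads them back with unpack_program and copies code into emulated RAM at start_addr.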
Stage 3 lab snapshot
Learning objectives:
See how “files on disk” and “program in memory” relate.
Understand that a “job” is a repeatable script that the OS interprets.
What students see:
Successful program loads reported by the monitor.
Errors like “file not found” that mirror “deck missing” or “incorrect job card.”
6. CPU and memory: living inside tiny spaces
How the IBM 1130 did it
The 1131 processor used magnetic core memory in configurations from 4K to 32K 16‑bit words, with cycle times around 2.2–3.6 microseconds depending on model. Its architecture included an accumulator, an extension register, index registers, and a 16‑bit instruction format with short and long addressing modes to help compilers squeeze code into limited space.
Language choices and OS strategies reflected these limits: Fortran dominated because it mapped well to numeric work in constrained memory, and DMS relied on overlays and transient modules so that user jobs could occupy most of core.
How we’ll reconstruct this with Raspberry Pi
Inside our monitor, we define a very small virtual machine:
Registers:
ACC (accumulator)
PC (program counter)
maybe R1–R3 as general registers.
Memory:
A fixed‑size array, say 256 bytes or 1K bytes, so limits are visible.
Instruction set (examples):
LDA addr – load accumulator.
STA addr – store accumulator.
ADD addr – add memory word to accumulator.
JZ addr – jump if accumulator zero.
IN port / OUT port – simple I/O.
Students assemble programs in this instruction set, see how many bytes their programs consume, and hit the edge of memory if they grow too large—recreating the feel of working within 4K or 8K words.
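To make those limits concrete, here is a minimal Python sketch of such a virtual machine. The mnemonics come from the list above; the opcode numbers, the two-byte (opcode, operand) encoding, the 8-bit accumulator, and the added HLT instruction are assumptions to be matched to your own design.

```python
# Hypothetical toy-CPU interpreter: ACC and PC registers, a small memory
# array, and fixed two-byte instructions (opcode, operand).

LDA, STA, ADD, JZ, HLT = 0x01, 0x02, 0x03, 0x04, 0xFF  # assumed opcodes

def run(mem: bytearray, pc: int = 0, max_steps: int = 1000) -> int:
    """Execute instructions in mem starting at pc; return ACC at HLT."""
    acc = 0
    for _ in range(max_steps):
        op, arg = mem[pc], mem[pc + 1]
        pc += 2
        if op == LDA:                 # load accumulator from memory
            acc = mem[arg]
        elif op == STA:               # store accumulator to memory
            mem[arg] = acc & 0xFF
        elif op == ADD:               # add memory byte, wrap at 8 bits
            acc = (acc + mem[arg]) & 0xFF
        elif op == JZ:                # jump if accumulator is zero
            if acc == 0:
                pc = arg
        elif op == HLT:
            return acc
        else:
            raise ValueError(f"bad opcode {op:#04x} at {pc - 2:#06x}")
    raise RuntimeError("program never reached HLT")
```

An eight-byte program that adds mem[0x20] and mem[0x21] and stores the sum at mem[0x22] is already enough for students to watch memory change under their feet.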
7. Input/Output: connecting to the world
How the IBM 1130 did it
The 1130 shipped with a modified IBM Selectric console printer/keyboard, card readers and punches, line printers, plotters, and optional communications adapters. Many of these devices relied heavily on the CPU; the 1130 used Execute I/O (XIO) instructions and interrupts to start operations and check device status, and device handlers were small programs that the monitor loaded as needed.
Business value came from these devices: card readers fed payroll data, printers produced reports, plotters generated engineering graphs.
How we’ll reconstruct this with Raspberry Pi and Arduino
We split responsibilities:
Console:
Use the Pi’s UART and a serial terminal as the “console typewriter” for monitor commands and text output.
“Card reader” and simple data input:
Attach an Arduino over USB/serial.
Give it buttons or a numeric keypad; each button combination represents a “card word.”
When students press “FEED,” the Arduino sends a byte to the Pi, which the monitor stores into a buffer—this could be numbers to add, or sensor readings.
“Printer” and output:
Either use the serial terminal as the line printer, or connect a cheap thermal printer module to the Arduino and send it text lines.
Sensors and actuators:
Arduino exposes LEDs, pushbuttons, and perhaps a light or temperature sensor.
The toy CPU implements IN port / OUT port instructions that the monitor maps to serial messages exchanged with Arduino, e.g. “read sensor 1”, “set LED 3 on.”
Students can now write programs such as:
“Read two numbers from ‘cards’ (Arduino), add them, and print the result.”
“If light level > threshold, set an LED and print ‘ALERT’.”
These mirror both the 1130’s numeric workloads and later control‑system use cases.
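Those IN/OUT instructions need an agreed wire format between monitor and Arduino. Here is a sketch of one possible framing in Python, using an Rn / Wn convention like the one in the Stage 3 plan; the newline terminator and decimal reply encoding are assumptions that the Arduino sketch would have to mirror.

```python
# Hypothetical message framing for IN port / OUT port over the Pi-to-
# Arduino serial link.  The exact format is an assumption; whatever you
# choose, the Arduino firmware must parse the same bytes.

def encode_in(port: int) -> bytes:
    """IN port: ask the Arduino for the value of sensor/card input <port>."""
    return f"R{port}\n".encode("ascii")

def encode_out(port: int, value: int) -> bytes:
    """OUT port: ask the Arduino to drive output <port> with <value>."""
    return f"W{port} {value & 0xFF}\n".encode("ascii")

def decode_reply(line: bytes) -> int:
    """The Arduino answers a read with one decimal value and a newline."""
    return int(line.strip().decode("ascii"))
```

On the Pi, the monitor would write these bytes to the serial device and read the reply line back; keeping the framing in pure functions means students can unit-test it with no hardware attached.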
8. Creating and running software: from Fortran decks to toy assembly
How the IBM 1130 did it
Most 1130 users worked in Fortran, though IBM also shipped assemblers and other languages like COBOL, BASIC, and APL. Source code was written on coding forms, keypunched into cards, and submitted as a deck that included control cards (to tell DMS which compiler to invoke) followed by the source.
The compilation flow looked like:
DMS reads job control cards, sees a FORTRAN job.
It loads the Fortran compiler from disk as a transient program.
The compiler reads source cards, writes object code to disk.
DMS then loads the core‑load builder/loader, which links the object with runtime libraries.
Finally, the user program runs, writing results to printer, cards, or disk.
How we’ll reconstruct this with Raspberry Pi
We simplify but keep the same phases:
Source: students write programs in your tiny assembly language, using mnemonics close to the 1130 style if you like (e.g., LD, STO, ADD, B, BZ).
Assembler: a Python script (run on a laptop or even on the Pi) translates text into machine code bytes according to your toy CPU spec.
Job submission: the assembled .bin is copied to the SD card’s PROGRAMS directory.
Execution: in the monitor, students type LOAD NAME, then RUN.
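The assembler step really can be a few lines of Python. This sketch uses the 1130-flavored mnemonics suggested above; the opcode values, the one-byte operand width, and the semicolon comment syntax are assumptions to align with your toy CPU spec.

```python
# Hypothetical one-pass assembler for the toy language: one instruction
# per line, "MNEMONIC operand", operands in hex, ';' starts a comment.

OPCODES = {"LD": 0x01, "STO": 0x02, "ADD": 0x03, "BZ": 0x04, "B": 0x05,
           "HLT": 0xFF}                       # assumed opcode assignments

def assemble(source: str) -> bytes:
    """Translate assembly text into the toy CPU's two-byte instructions."""
    out = bytearray()
    for raw in source.splitlines():
        line = raw.split(";")[0].strip()      # drop comments and blank lines
        if not line:
            continue
        parts = line.split()
        mnemonic = parts[0].upper()
        operand = int(parts[1], 16) if len(parts) > 1 else 0
        out += bytes([OPCODES[mnemonic], operand & 0xFF])
    return bytes(out)
```

Its output, wrapped in the .BIN header defined earlier (length, start address), is ready to drop into the SD card’s PROGRAMS directory.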
Optional advanced step: allow a tiny “control script” format:
JOB ADDTEST
LOAD ADD2
RUN
LOAD AVG3
RUN
END
and let the monitor interpret it, echoing DMS job control language.
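Interpreting that script is a small parsing job. Here is a Python sketch, assuming the card names shown above (JOB, LOAD, RUN, END); it only turns the script into a list of actions, and dispatching each one to the real LOAD/RUN routines stays in the monitor.

```python
# Hypothetical parser for the job-control script format shown above.
# Unknown card types are rejected, echoing DMS complaining about a bad
# control card.

def parse_job(script: str) -> list[tuple[str, str]]:
    """Return (command, argument) pairs for every card up to END."""
    actions = []
    for raw in script.splitlines():
        parts = raw.strip().split()
        if not parts:
            continue                          # blank card: ignore
        cmd = parts[0].upper()
        arg = parts[1] if len(parts) > 1 else ""
        if cmd == "END":
            break                             # END terminates the job
        if cmd in ("JOB", "LOAD", "RUN"):
            actions.append((cmd, arg))
        else:
            raise ValueError(f"bad control card: {raw!r}")
    return actions
```

A SUBMIT command in the monitor would read the script file from the SD card, call parse_job, and walk the action list in order.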
9. Layered lab plan: building the “minimum viable 1130”
Here is a concise three‑stage lab sequence that integrates the narrative above. You can expand each stage into multiple sessions.
Stage 1 – Bare‑metal monitor (bootstrapping and resident core)
Learning objectives
See what happens during boot on real hardware.
Understand that a minimal OS is just code that initializes hardware and then loops reading commands.
Use POKE and DUMP to change memory and verify results.
Stage 2 – Virtual CPU and loader (job pipeline and memory limits)
Learning objectives
Understand a very small CPU model (registers, instructions, program counter).
See how programs move from disk into memory and then execute.
Technical tasks
Inside the monitor, implement:
A small memory array (e.g., 256 bytes).
Registers ACC, PC.
Instructions: LDA, STA, ADD, SUB, JMP, JZ, HLT.
Implement LOAD name to read name.BIN and load it into the memory array.
Implement RUN to set PC and step instructions in a loop, printing results or halting.
Student experience
Assemble and run a program that adds two constants and prints the sum.
Modify the program to read operands from “memory locations” and see how that maps to addresses on paper.
Stage 3 – I/O and “card decks” (devices, jobs, and external world)
Learning objectives
Experience devices as separate entities with simple protocols.
Connect programs to external buttons, LEDs, and basic sensors.
Technical tasks
Connect Arduino to Pi over serial.
Define simple messages, e.g.:
R0 – read “card” value / sensor 0; Arduino replies with a byte.
W0x – write output port 0 with value x (e.g., control an LED).
Add CPU instructions IN port and OUT port that call these messages via monitor helpers.
Optionally define a text “job script” format and a monitor command SUBMIT scriptname.
Student experience
Write a program that:
Uses IN twice to read two card/sensor values.
Adds them.
Prints the sum and sets an LED if the sum exceeds a threshold.
Run a batch script that executes multiple programs in sequence, mirroring deck‑based workflows.
10. Bringing it full circle in the classroom
By the end of this journey, students will have:
Seen the historical context: why the IBM 1130 existed, who used it, and what “affordable computing” meant in the 1960s.
Understood the core components of any practical computer system:
Boot path from dead hardware.
Minimal operating system / monitor.
Persistent storage and simple file/job structures.
CPU and memory model.
I/O devices and drivers.
A workflow for creating and running custom programs.
Built a minimal working replica of those ideas on Raspberry Pi and Arduino:
A bare‑metal monitor serving as the OS core.
A tiny virtual CPU that executes student‑written machine code.
Simple “decks” (binary files and optional control scripts) stored on SD card.
Basic peripherals (console, pseudo card reader, LEDs, sensors) that bring the outside world into play.
The result is a single, integrated experience: history lecture, system‑design seminar, and hands‑on lab, all anchored on one modest mid‑1960s machine and its 21st‑century classroom cousin.
Shopping list for the Raspberry Pi + Arduino “mini-1130” simulation lab (monitor OS + virtual CPU on the Pi, Arduino as I/O controller, simple “card reader / status lights / sensors” peripherals).
A. Core compute + storage
Raspberry Pi (runs monitor OS + virtual CPU)
Option 1 (recommended): Raspberry Pi 4 Model B (2GB) — $55 from PiShop.us (Pishop)
Option 2 (small/cheap): Raspberry Pi Zero 2 W — example listing $34.99 at Walmart (pre-soldered header variant shown) (Walmart.com)
(Zero 2 W is great if your virtual CPU + monitor is light and you’re mostly doing serial + SD.)
microSD card (holds OS + “job scripts” + programs)
Official Raspberry Pi microSD 32GB with Raspberry Pi OS preinstalled — $19.95 at PiShop.us (Pishop)
Power supply
If using Pi 4: Official-style 15W USB-C PSU commonly sold around $8.80 at PiShop.us (Pishop)
If using Pi Zero 2 W: Micro-USB PSU also listed around $8.80 at PiShop.us (Pishop)
B. I/O controller + “console” link
Arduino board (I/O device controller)
Arduino UNO R4 Minima — $20.00 at Digi-Key (DigiKey)
(Also listed €22 at Arduino’s official store, if you’re buying via EU pricing) (Arduino Official Store)
Pi ↔ Arduino link
Easiest: USB A ↔ USB-C cable (for UNO R4) and plug Arduino into the Pi (or teacher laptop).
If you want the “classic serial console” feel (LOAD/RUN/DUMP over serial):
USB-to-TTL serial console cable (3.3V logic) — $9.95 at Digi-Key (DigiKey)
(Use this for a “student terminal” serial console path into the Pi/monitor.)
C. “Peripherals” that match the 1960s vibe
1) “Card reader / numeric input”
Pick one:
4×4 membrane keypad (cheap and very “operator console”) — $3.95 at PiShop.ca (PiShop)
Or: use a USB keypad / old numeric keypad as a “keypunch substitute” (varies widely in price).
2) “Status lights / output ports”
LED assortment + resistors
You can go fancy (name-brand kit): Adafruit assorted resistor/capacitor kit shown as $39.95 on Digi-Key (DigiKey)
Or go cheap: generic resistor kits are commonly ~$8–$15 (Amazon/eBay/etc.).
Optional: small buzzer (audible “console bell”), and/or a 7-segment display for simple output.
If you’re a Business Analyst, Project Manager, or Senior Leader, Excel is still the operating system of decision-making: budgets, forecasts, KPI packs, pipeline reports, benefits cases, delivery plans, board decks.
Now Anthropic’s Claude can sit inside Excel as an add-in—and it’s designed to work with complex, multi-tab workbooks (the real world, not demo sheets). Microsoft and every major AI vendor are racing into this space. The question isn’t “Will AI change spreadsheet work?” The question is:
Will you be the person who can direct AI in Excel… or the person being replaced by someone who can?
What is “Claude in Excel” (aka “Claude by Anthropic in Excel”)?
Claude in Excel is an official Excel add-in that opens as a sidebar and can read and edit the currently open workbook—including multi-sheet dependencies—while explaining its answers with cell-level citations you can click. (Microsoft Marketplace)
It is currently available for Claude Pro, Max, Team, and Enterprise plans. (Microsoft Marketplace)
Claude in Excel launched as a research preview in October 2025 and later expanded access to Pro plan users (with public reporting around January 2026). (The Economic Times)
Why you need to know this to still have a job next year
Let’s talk about the fears people don’t say out loud:
“AI will replace analysts.”
Not exactly. It will replace analysts who can’t work with AI. The new baseline expectation is shifting from:
“I can build and maintain spreadsheets” to:
“I can interrogate, validate, and steer AI-assisted spreadsheet work safely.”
“I’ll lose control of the numbers.”
That’s a valid worry. Claude can make changes, but its value is in transparent reasoning + change tracking + citations—so you can review what it did and why. (Microsoft Marketplace)
“My team will leak sensitive data.”
Also valid. Microsoft’s marketplace listing is explicit: the add-in can read and change your document and can send data over the Internet. That’s not “evil”—it’s how cloud AI works—but it does change your governance posture. (Microsoft Marketplace)
“What if it gets manipulated?”
Anthropic publicly warns about prompt injection risks in spreadsheets—hidden instructions embedded in cells, formulas, comments, etc., that could trick an AI tool into taking unintended actions (including leaking information). They advise using the add-in only with trusted spreadsheets and carefully reviewing behavior/changes. (Claude Help Center)
If you lead teams: this is where policy, training, and guardrails matter.
What Claude in Excel can do (the practical stuff)
From Anthropic’s own materials, the add-in is built to:
Explain calculations with cell-level citations (click to jump to referenced cells). (Microsoft Marketplace)
Safely update assumptions while preserving formula dependencies. (Microsoft Marketplace)
Support “native Excel operations” like:
pivot table editing
chart editing
conditional formatting
sort & filter
data validation (dropdowns/restrictions)
and some “finance formatting” basics like gridlines/print areas (Claude Help Center)
This is the difference between “AI suggests a formula” and “AI behaves like a spreadsheet collaborator.”
The career shift: from spreadsheet builder → decision architect
Here’s what I want every BA / PM / leader to internalize:
The new advantage is not typing faster.
It’s asking better questions, faster:
“What assumptions drive this model’s outputs the most?”
“Where are the fragile links and circular risks?”
“What changed since last month, and why?”
“What scenario breaks the business case?”
Claude in Excel is a force multiplier for people who can:
Frame the problem
Validate the answer
Communicate the story
If you can do those three, you become more valuable—not less.
Prompts you should practice (steal these)
For Business Analysts
“Summarize the logic of this workbook and cite the cells that drive the final KPI.”
“Find the top 10 drivers of variance between Actuals and Forecast, and show the supporting ranges.”
“Identify anomalies/outliers in this dataset and propose 3 explanations.”
For Project Managers
“Turn this delivery plan tab into a risks/issues summary with dates, owners, and the critical path.”
“Find dependencies across tabs that could impact the launch date and highlight them.”
“Create a one-page status narrative from this workbook: what changed, what’s at risk, what decisions are needed.”
For Senior Leaders
“Give me the 5 assumptions I should challenge before approving this investment—cite the cells.”
“Stress test: what happens to margin if revenue drops 8% and COGS rises 4%? Show the impacted outputs.”
“Where do the numbers look ‘too clean’ or inconsistent across tabs?”
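The arithmetic behind that stress-test prompt is worth being able to verify by hand before trusting any AI's answer. A quick sketch with illustrative base figures (the $1M revenue and $600K COGS are made up for the example):

```python
# Sanity-check the stress-test arithmetic with illustrative base numbers.
revenue, cogs = 1_000_000.0, 600_000.0

def margin_pct(rev, cost):
    """Gross margin as a percentage of revenue."""
    return (rev - cost) / rev * 100

base = margin_pct(revenue, cogs)                    # 40.0% at base case
stressed = margin_pct(revenue * 0.92, cogs * 1.04)  # revenue -8%, COGS +4%

print(round(base, 1), "->", round(stressed, 1))     # 40.0 -> 32.2
```

If the AI's stressed margin doesn't reconcile with this kind of back-of-envelope check, that's your cue to inspect the cells it cited.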
The safety reality: how to use it without getting burned
If you remember nothing else, remember this:
1) Treat Excel files like they can carry “instructions”
Anthropic explicitly warns that prompt injection can be hidden in spreadsheet content and may trick the AI into unintended actions. (Claude Help Center)
Rule: Don’t use AI add-ins with untrusted templates, vendor files, random downloads, or unknown shared docs.
2) Assume data can leave your environment
The marketplace listing states the add-in can send data over the internet. (Microsoft Marketplace)
Rule: Have a “what data is allowed” policy (and enforce it).
3) Require review for every change
Use AI to accelerate analysis and edits, but humans own the final numbers.
If you’re a leader, this becomes a training + governance issue, not just a “cool tool.”
The bottom line
Claude in Excel represents a bigger shift than “another chatbot.” It’s AI moving into the exact place where:
decisions get made,
budgets get set,
forecasts get defended,
and careers get judged.
If you want to still be employable next year in analysis-heavy roles, your job is to become:
AI-literate
verification-driven
security-aware
decision-focused
Because the person who can drive AI in Excel will outpace the person who can only do Excel.
“Gold standard” prompts for BA / PM work (copy/paste)
These are written to produce auditable outputs: scoped, structured, with explicit assumptions, and verification steps.
1) Workbook / dataset orientation (first 5 minutes on any file)
Prompt:
You are my analysis copilot. First, do not edit anything.
Map this workbook: list tabs, what each tab contains, and the flow from inputs → outputs.
Identify the “decision outputs” (final KPIs / totals / charts) and cite the exact cells/ranges that drive them.
List the top 10 risk points (hard-coded numbers, hidden assumptions, inconsistent filters, broken links). Output as: Workbook Map / Key Outputs / Risks / What I should check first.
2) BA: Variance & drivers (Actual vs Forecast)
Prompt:
Compare Actuals vs Forecast for the latest period.
Give me the top 5 drivers of variance (ranked by impact).
For each driver: show the supporting cells/ranges, explain the logic in plain English, and propose 2 hypotheses for “why.”
Then propose 3 actions (what to investigate next) with the exact tabs/cells to review. Output as a table: Driver | $ impact | Evidence (cells/ranges) | Explanation | Hypotheses | Next checks.
3) BA: Sensitivity & “what breaks first”
Prompt:
Identify the 5 most sensitive assumptions in this model. For each, run a simple stress test (e.g., -5%, -10% for revenue; +2%, +5% for cost items) and report how the key outputs change. Do not change formulas—only change assumptions in designated input cells, and show me a preview of edits before applying. Output: Assumption | Input cell | Scenarios | Output impact | Interpretation | Guardrails.
4) BA: Data quality & reconciliation
Prompt:
Audit this dataset for quality issues: duplicates, missing values, inconsistent categories, date problems, outliers. Provide:
A summary of issues found (count + where),
A proposed cleaning plan,
A cleaned version approach that preserves original data (new tab),
A reconciliation check to prove totals match after cleaning. Output: Findings / Cleaning steps / Validation checks.
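To review the AI's audit, it helps to know what such an audit looks like when done by hand. A stdlib-only Python sketch over a tiny made-up dataset (the column names and the crude ">10x median" outlier rule are illustrative choices, not a standard):

```python
# Stdlib-only sketch of the audit the prompt asks for, on invented data.
from collections import Counter
from statistics import median

rows = [
    {"id": 1, "region": "East", "amount": 100},
    {"id": 2, "region": "East", "amount": 110},
    {"id": 2, "region": "East", "amount": 110},   # exact duplicate row
    {"id": 3, "region": None,   "amount": 105},   # missing category
    {"id": 4, "region": "West", "amount": 9000},  # suspicious outlier
]

# Duplicates: identical rows appearing more than once
dupes = [dict(k) for k, n in Counter(tuple(r.items()) for r in rows).items() if n > 1]
# Missing values: any None field
missing = [r["id"] for r in rows if any(v is None for v in r.values())]
# Outliers: rule-of-thumb flag for values far above the median
med = median(r["amount"] for r in rows)
outliers = [r["id"] for r in rows if r["amount"] > 10 * med]

print(len(dupes), missing, outliers)  # 1 [3] [4]
```

Whatever rule set you use, the point is the same as in the prompt: findings, cleaning plan, and a reconciliation check you can rerun.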
5) PM: Status from a plan (turn Excel into leadership language)
Prompt:
Using the plan tabs, produce a status update for senior stakeholders:
What changed since last update (scope/date/resource),
Current health (RAG),
Top 5 risks (probability, impact, mitigation, owner, due date),
Decisions needed this week,
Next 7 days of critical actions. Cite supporting cells/ranges for dates/owners/risks. Output: 1-page narrative + risk register table.
6) PM: Critical path + dependency traps
Prompt:
Identify the critical path and the top 10 dependencies that could slip the final date. Flag: missing owners, unrealistic durations, dependencies that span teams, and tasks with no slack. Suggest 3 schedule recovery options with tradeoffs (cost/scope/time). Output: Critical path list + Dependency risk table + Recovery options.
7) PM/BA: Stakeholder-ready “decision memo”
Prompt:
Create a decision memo using only what’s in this workbook:
Decision statement (1 sentence)
Options (at least 3)
Cost/benefit summary
Risks & mitigations
Recommendation + assumptions
What evidence supports this (cite cells/ranges).
Keep it concise and executive-ready.
Show a preview of all edits (cell address + before/after).
Highlight edited cells.
Then wait for approval.
10) Universal: Build my reusable template
Prompt:
Turn this workbook into a reusable template:
Create an Inputs tab with clearly labeled assumptions (with data validation/dropdowns where appropriate).
Add a Summary tab with the 6–10 KPIs leaders care about.
Add a Checks tab (reconciliation, outlier flags, broken link checks). Explain the design choices and how to maintain it.
Staying Ahead of the Curve: The New Knowledge Portal – NotebookLM at Work (personal learning + knowledge management)
NotebookLM is basically “your sources become your private, citeable knowledge base.” It answers questions grounded in what you upload, with citations that jump you to the underlying text. (Google Help)
Why this matters for BAs/PMs/Senior Leaders
You stop hunting through Drive folders, decks, and PRDs.
You get fast, source-grounded answers (less hallucination risk because it’s constrained to your materials). (Google Help)
You can turn messy docs into usable artifacts: briefings, FAQs, study guides, and audio overviews. (blog.google)
NotebookLM supports lots of work formats (Docs, Slides, PDFs, web URLs, pasted text, public YouTube URLs, and audio files). (Google Workspace)
Workflow: set up NotebookLM for “work intelligence” (step-by-step)
Step 1 — Create notebooks by outcome, not by topic
Start with the documents you always end up re-reading:
Latest charter / PRD / scope doc
Status deck(s)
RAID log (or issue tracker exports)
KPI definitions + metric logic doc
Last 2–3 steering committee readouts
NotebookLM uses your sources for answers and provides citations so you can check accuracy. (Google Help)
Step 3 — Establish your “standard questions” (save as pinned notes)
NotebookLM doesn’t keep a full chat history the same way a chatbot does, so treat it like a knowledge workbench: ask key questions, then pin or save outputs as notes. (NotebookLM also emphasizes saving/pinning notes for reuse.) (Google Workspace)
Pin these 6 starter questions:
“What are the goals, and what does success mean?”
“What assumptions are we making?”
“What changed since last month, and why?”
“Top risks + mitigations + owners + due dates?”
“What decisions are needed and by when?”
“What’s the executive narrative in 10 sentences?”
Step 4 — Generate briefing assets (turn sources into artifacts)
Use NotebookLM to generate:
Briefing doc for leadership
FAQ for stakeholders
Study guide for onboarding new team members
Audio Overview for “learn while commuting”
Audio Overviews exist specifically to turn your sources into a “deep dive” discussion. (blog.google)
Step 5 — Keep it current with a weekly refresh habit (10 minutes)
Every Friday:
Add the latest status deck or weekly report
Add any new decision log entries
Ask: “What changed this week vs last week? What’s at risk now?”
Step 6 — Share safely (team knowledge without chaos)
NotebookLM notebooks can be shared (permissions matter; keep sensitive notebooks restricted). Workspace updates emphasize that users can control notebook access and that Workspace sourcing respects existing permissions. (Workspace Updates Blog)
If you’re evangelizing internally, these are the lines leaders care about:
Google’s Workspace updates state uploads/queries/responses aren’t used to train models and aren’t reviewed by humans for product improvement without permission, and data stays inside the organization’s trust boundary. (Workspace Updates Blog)
Admins can manage NotebookLM as a Workspace service and apply Context-Aware Access policies (identity/device/location-based controls). (Workspace Updates Blog)
A combined “Excel + NotebookLM” power workflow (how pros will work in 2026)
NotebookLM = your source of truth. Load: KPI definitions, business rules, process docs, governance rules.
Excel/Claude/Copilot = execution surface. Build the model, run scenarios, produce outputs.
Back to NotebookLM = narrative + alignment. Generate a leader briefing grounded in your actual docs, not your memory.
That combination is how you scale “one smart person” into “a repeatable operating system.”
Power BI becomes the next stop in the value chain after Claude-enabled Excel: Excel stays the place where humans + AI do fast modeling and shaping, and Power BI becomes the place where you productize it (governed metrics, refresh, distribution, security).
Here’s how they fit together—plus the risks that matter.
The simplest mental model
Claude in Excel = analysis & transformation cockpit
Great for: exploring, cleaning, reconciling, building model logic, scenario work, explaining spreadsheets, creating “first draft” insights.
Power BI = trusted reporting product
Great for: semantic model (metrics definitions), scheduled refresh, row-level security, certified datasets, distribution, auditability.
If you’re a BA/PM/leader: Excel is the lab. Power BI is the factory.
Where Power BI fits in a Claude-enabled Excel workflow
1) Data shaping: Claude helps you get to “Power BI-ready”
Power BI hates messy spreadsheets (merged cells, inconsistent headers, “totals” rows, multiple header lines, random notes).
Claude’s best contribution here is getting your workbook to:
clean tables with consistent column names/types
no mixed grains (e.g., daily + monthly together)
explicit keys (AccountID, ProjectID, Month)
clear “Inputs / Calc / Output” separation
reconciliation checks (so Power BI totals match Excel totals)
Gold prompt for this step:
Convert this workbook into Power BI-ready tables. Create (or identify) clean tabular ranges, remove presentation artifacts, standardize headers, and produce a “Data Dictionary” tab with column definitions and grain. Do not destroy the existing model—create new tabs.
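The header normalization you're asking Claude for can be sketched in plain Python, which also makes a good review checklist. The raw export and column names below are invented for illustration:

```python
# Sketch of "Power BI-ready" header shaping on an invented raw export.
import csv, io

raw = """Project ID , Month,Spend ($)
P-001,2026-01,1200
P-001,2026-02,1300
"""

def normalize(name):
    """snake_case a header: strip, lower, drop unit suffixes, no spaces."""
    name = name.strip().lower()
    for junk in ("($)", "(%)"):
        name = name.replace(junk, "")
    return name.strip().replace(" ", "_").replace("-", "_")

reader = csv.reader(io.StringIO(raw))
headers = [normalize(h) for h in next(reader)]
table = [dict(zip(headers, row)) for row in reader if row]

# A one-line "data dictionary": column -> (very rough) inferred type
dictionary = {h: ("number" if table[0][h].isdigit() else "text") for h in headers}
print(headers, dictionary)
```

Consistent snake_case names and an explicit dictionary tab are exactly what keeps the Power BI import from silently mismatching columns later.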
2) Metric governance: move from spreadsheet logic to a semantic layer
Claude can help you surface and document metric definitions:
“What drives EBITDA in this workbook?”
“Where are the margin assumptions?”
“Which cells define ‘Active Customers’?”
Then you use that documentation to decide:
Which calculations stay in Excel (prototype)
Which become Power BI measures (production)
This is where jobs get saved: teams that don’t formalize metrics end up with “multiple truths.”
Gold prompt:
Identify every business metric in this workbook (KPI list). For each: definition, formula logic, source tab/range, filters/assumptions, and common failure modes. Output a KPI registry suitable for Power BI governance.
3) Reporting architecture: Excel becomes a source or staging layer
There are a few common patterns:
Pattern A — Excel as a source (quick & dirty)
Power BI imports Excel tables from OneDrive/SharePoint.
Good for: small teams, fast iteration.
Risk: “someone changed the sheet” breaks refresh.
Pattern B — Excel as staging, but data ultimately comes from systems
Excel/Claude used to prototype logic and validate numbers.
Production data comes from SQL/dataflows/lakehouse.
Best for: scale, stability, governance.
Pattern C — Excel outputs feed Power BI (avoid this if you can)
Using Excel-calculated outputs as “truth” is fragile.
Better: replicate the logic in Power BI/DAX once stable.
Claude helps you decide which pattern fits by identifying volatility: hard-coded assumptions, manual steps, and places refresh would fail.
Gold prompt:
Audit this workbook for Power BI refresh risk. List anything that will break refresh or governance (manual steps, hard-coded inputs, volatile ranges, pivot dependencies). Recommend the best Power BI ingestion pattern (A/B/C) and why.
4) Change management: Claude accelerates iteration, Power BI enforces discipline
Claude speeds up “what-if” changes (assumption updates, scenarios). But Power BI wants stable definitions.
A useful workflow is:
Use Claude/Excel for scenario exploration
Publish Power BI with base-case metrics + controlled parameters (what-if parameters in Power BI)
Keep scenario work in Excel, but keep official reporting in Power BI
Gold prompt:
Create a scenario manager in Excel: base case + 3 scenarios, with a clear Inputs tab and output summary table. Structure outputs so Power BI can easily ingest scenario results as a fact table (Scenario, Date, KPI, Value).
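The long “fact table” shape that prompt asks for looks like this in miniature (scenario names, dates, and values are illustrative):

```python
# Stdlib sketch: reshape wide scenario outputs into the long fact-table
# form (Scenario, Date, KPI, Value) that Power BI ingests cleanly.
wide = {
    "Base":     {"2026-01": {"Revenue": 100, "Margin": 40}},
    "Downside": {"2026-01": {"Revenue": 92,  "Margin": 32}},
}

fact = [
    {"Scenario": s, "Date": d, "KPI": k, "Value": v}
    for s, by_date in wide.items()
    for d, kpis in by_date.items()
    for k, v in kpis.items()
]

print(len(fact))  # 4 rows: 2 scenarios x 1 date x 2 KPIs
```

One row per (Scenario, Date, KPI, Value) means new scenarios are just more rows, not more columns — no model changes on the Power BI side.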
The big risks when Claude + Excel touches Power BI
Risk 1: “We operationalized a prototype”
If Claude helps build a clever model fast, teams can accidentally treat it as production without governance. Fix: promote only what’s stable into Power BI measures/datasets.
Risk 2: “A spreadsheet becomes an API”
If Power BI refresh depends on a workbook that humans keep editing, you get refresh failures and silent definition drift. Fix: lock down the source, separate “data tables” from “presentation sheets,” and use controlled inputs.
Risk 3: Data security and leakage
Claude-in-Excel add-ins may transmit workbook data to a vendor service (depending on configuration/org controls). That changes your risk model. Fix: define what data is allowed in AI-assisted workbooks, and keep sensitive datasets in governed storage feeding Power BI.
What “good” looks like (the maturity ladder)
Week 1: Claude helps clean data and explain models; Power BI imports clean tables.
Month 1: KPI registry exists; Power BI semantic model aligns to agreed definitions.
Quarter 1: System-of-record feeds Power BI; Excel used for prototyping, not truth.
Ongoing: Excel scenarios + Power BI production metrics; both auditable.
If you want a concrete playbook
Tell me which environment you’re in:
Mostly Excel + manual refresh
Power BI on top of Excel in SharePoint
Power BI on top of SQL/data warehouse
Mixed / messy
…and whether you’re a BA or a PM. I’ll give you:
a recommended architecture pattern (A/B/C),
a “Power BI-ready Excel” checklist,
and a prompt pack specifically for that setup.
Let’s keep this knowledge development going in the comments.
Here’s Exactly What This Means for Your Business (and Why You Should Care Today)
AIWithPeter • February 18, 2026 • Toronto
Yesterday, February 17, 2026, Anthropic released Claude Sonnet 4.6 — and made it the default model for every free and Pro user on claude.ai and Claude Cowork.
This isn’t just another incremental update. It’s the moment “frontier-level” reasoning became accessible at mid-tier pricing and scale.
Performance that required the premium Opus tier two weeks ago is now available to millions of users at no extra cost.
For business leaders, this is a flashing red signal (and a massive green opportunity): the cost-performance curve of AI just bent sharply in your favor.
See this special report curating some in-the-field use cases I have developed for my clients:
Sonnet 4.6 is a full upgrade across six critical dimensions that matter to enterprises:
Coding & Agentic Development — Complex code fixes, large codebase navigation, bug detection, and production-ready solutions. Users in Claude Code testing preferred it over Sonnet 4.5 ~70% of the time and over Opus 4.5 ~59% of the time. It compresses multi-day projects into hours with higher consistency and fewer hallucinations.
Computer Use — Human-level interaction with spreadsheets, web forms, and desktop apps. It scored 94% on Anthropic’s insurance benchmark (submission intake + first notice of loss workflows) — the highest of any model tested. No more clunky bespoke connectors for basic browser automation.
Long-Context Reasoning — 1 million token context window in beta on the API (enough for entire codebases, 500-page contracts, or dozens of research papers). Context compaction automatically summarizes older tokens so agents stay coherent over long sessions.
Agent Planning & Orchestration — Better long-horizon planning (see Vending-Bench Arena results where it invests early then pivots to profitability). Excels at branched, multi-step enterprise workflows: contract routing, conditional templates, CRM coordination.
Knowledge Work & Document Intelligence — Matches Opus 4.6 on OfficeQA (enterprise PDFs, charts, tables). 15 percentage point jump over Sonnet 4.5 on heavy reasoning Q&A over real Box enterprise documents. Stronger financial analysis and recall.
Design & Visual Output — Fewer iterations needed for polished frontend code, data reports, layouts, and animations.
Pricing stays the same as Sonnet 4.5: $3 / $15 per million tokens on API. No increase for free/Pro users. Available immediately on AWS Bedrock, Google Vertex AI, Microsoft Foundry, Snowflake Cortex, GitHub Copilot (rolling out now), and Claude in Excel/PowerPoint.
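At those rates, budgeting is simple arithmetic; a quick sketch (the per-run token counts and run volume below are illustrative, not benchmarks):

```python
# Rough API cost estimate at the stated $3 / $15 per million tokens
# (input / output). Workload numbers are made up for illustration.
IN_PER_M, OUT_PER_M = 3.00, 15.00

def cost_usd(input_tokens, output_tokens):
    """Dollar cost of one call at the stated Sonnet per-token rates."""
    return input_tokens / 1e6 * IN_PER_M + output_tokens / 1e6 * OUT_PER_M

# e.g. an agent running 200 times/day at 20k tokens in, 2k tokens out
daily = 200 * cost_usd(20_000, 2_000)
print(round(daily, 2))  # 18.0 dollars/day
```

Run the same arithmetic against your own usage logs before and after switching models — that's the "measure time saved and quality" step with a dollar sign attached.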
Opus 4.6 (released Feb 5) is still the absolute smartest for the most extreme long-running agents, but Sonnet 4.6 closes the gap so dramatically that many heavy Opus users will now default to Sonnet for 80-90% of workloads.
The Business Context: This Is the Democratization Moment
Think about what this actually unlocks in 2026 economic reality:
1. Engineering & Product Teams
Your dev velocity just got a permanent multiplier. Agentic coding at scale with near-Opus quality at Sonnet cost means smaller teams can maintain larger codebases, ship faster, and reduce technical debt. Early testers report production-ready solutions where previous models required heavy human review. For CTOs facing talent shortages, this is the closest thing to “cloning” senior engineers you’ll see this year.
2. Knowledge Workers & Operations
Finance, legal, compliance, and analysis teams can now process contracts, spreadsheets, and research at a scale previously impossible. Claude in Excel with MCP connectors (S&P Global, LSEG, PitchBook, Moody’s, etc.) lets analysts pull live external data and reason inside the spreadsheet without leaving it. The 1M context + computer use combo turns multi-hour manual processes into minutes.
3. Enterprise Automation & Agents
This is the real game-changer. Sonnet 4.6 was explicitly tuned for agentic workloads: lead agents + sub-agents, tool use, memory, programmatic calling — all generally available. Insurance companies can automate claims intake. Telcos (see Anthropic’s new Infosys partnership) can build governed agents for regulated workflows. The performance-to-cost ratio makes high-volume agent deployment economically viable for the first time.
4. Cost Structure Realignment
If your organization was hesitating on Opus because of token spend, that hesitation just evaporated for most use cases. You can now give every employee frontier AI without blowing the budget. Free tier upgrades (file creation, connectors, skills, compaction) further lower the barrier for pilots and experimentation.
Market Signal Check
Software stocks have been under pressure precisely because models like this keep proving AI can eat more and more white-collar and coding work. Yesterday’s release contributed to another dip in the IGV ETF. Smart leaders aren’t fighting this trend — they’re positioning their companies to ride it.
What’s In It For YOU — Your 30-Day Action Plan
Here’s the practical framework I’m recommending to AIWithPeter subscribers (executives, CTOs, Heads of AI, and transformation leads):
Week 1: Assess & Baseline
Switch your team’s default to Sonnet 4.6 on claude.ai today.
Run your top 5 most painful workflows (code review, contract analysis, financial reporting, spreadsheet automation, customer support triage) against both Sonnet 4.5 and 4.6. Measure time saved and quality.
Check API usage logs — identify high-volume Sonnet 4.5 calls that can stay on 4.6.
Week 2: Pilot Agents
Build one internal agent in Claude Code or via API (e.g., weekly financial pack generator, PRD-to-prototype flow, compliance checklist router).
Test Claude in Excel with real connectors.
Measure ROI in hours saved vs. token cost (you’ll be shocked).
Week 3-4: Scale & Govern
Roll out to key departments via Team/Enterprise plan.
Rely on Anthropic’s built-in safety features (prompt injection resistance is now Opus-level).
Set usage policies and review quarterly.
Expected outcomes I’m seeing from early enterprise adopters: 30-60% reduction in routine knowledge work time, 2-5x faster prototyping, and agent deployments that pay for themselves in weeks.
Risks to Watch (Be a Responsible Leader)
Over-reliance without human oversight on high-stakes decisions (Sonnet 4.6 is safer than predecessors but still an LLM).
Data governance — use enterprise plans with proper controls.
Change management — your people will need training in prompting to extract maximum value.
Anthropic continues to lead on safety evaluations. The model passed extensive testing with “no signs of major concerns around high-stakes misalignment.”
How to Get Started Right Now
Go to claude.ai — Sonnet 4.6 is already your default.
Developers: Use claude-sonnet-4-6 on the API, Bedrock, Vertex AI, or Foundry.
The AI arms race isn’t slowing down — it’s accelerating. But for the first time, the best tools aren’t locked behind the highest price tags. Frontier intelligence is now the default experience for millions.
The question for every business leader reading this isn’t “Should we use AI?” It’s “How fast can we rewire our operations around models that are this capable, this affordable, and this accessible?”
Sonnet 4.6 just handed you the keys.
Try it today. Measure the difference this week. Then let’s talk about what you built — reply to this email or DM me on X @Peter_Sigurdson.
The future isn’t coming. It’s already in your browser.
Stay ahead,
Peter Sigurdson
Founder, AIWithPeter
Toronto, Canada
P.S. If you’re not already subscribed, join 12,000+ executives getting the unfiltered weekly briefing on what actually moves the needle for business. Follow me to keep ahead of the curve.
What’s your first test case for Sonnet 4.6? Drop it in the comments — I read every one.