Sam Middleton Beattie

Product & Growth

Case Study

My first production code — an automated file renamer for Google Drive

Feb 25, 2026

Three months ago, before the most recent AI releases, I felt like I'd been falling behind a bit. With Pulss, I'd been free to go deep into MCPs and capabilities that had previously been out of reach. But in a full-time role, there was understandably much less time for experimenting. So when the chance came, I went all-in on actually building something.

I figured that, even if I failed, it would be a good learning experience all the same.


An automated naming system for Google Drive files

Like everyone else in late 2025, I was exposed to wild LinkedIn posts about people vibe-coding entire startups, or building n8n automations that replaced multi-million-dollar business functions in a matter of days.

At that moment in particular, it felt like all clickbait and very little substance.

I'd already done some cool stuff with AI. I was a keen MCP user, turning to Claude to interact with databases for analyses that would have taken me weeks or been out of reach entirely. I'd also set up a nice workflow where I could work with Claude on projects and push content to Notion. This saved a lot of time, particularly when creating databases in Notion, which I'd always found time-consuming and often messy.

It was the latter that got me involved in this project. I was working alongside a large creative team who produced lots of images and videos but struggled to maintain a naming structure, which mattered because file names fed into other business systems. They wanted a Notion database with charts on top that could measure output and keep file naming in check.

What seemed like a simple request quickly grew to require an approach that automatically renamed all files within a massive Google Drive folder structure. I stayed onboard — it was a chance to really push what I could do with AI, at a time when I wasn't getting many of those.


Choosing the tools

The first thing I did was drop Notion. I'd built a proof of concept there, with a database and accompanying charts, but with the team working entirely in Google Drive, it made no sense to involve another tool.

I evaluated n8n, but was wary about using it for production. I'd watched various YouTube tutorials and set up some working automations before, but these always seemed to overpromise and underdeliver. In my personal experience, n8n flows were brittle in a way I couldn't quite put my finger on.

Apps Script made sense because it was part of Google Workspace. No external tools, no additional access to configure, and it could talk to Drive and Sheets natively. I'd never used Apps Script or any JavaScript beyond a handful of coding challenges, but this was a chance to push myself and Claude to the limit.


The problem

The team was dealing with hundreds of creative files spread across deeply nested Google Drive folders. Files came in with names like final_v3_REAL.psd, banner copy (2).png, or just unnamed exports.

The team was rolling out a new naming convention, but applying it manually was time-consuming for owners who were already being pushed for faster output. File names also fed into vital data systems, so files that broke the convention could have major consequences downstream.


Building it with Claude

This was my first real project built through conversation with an AI. I didn't write code in the traditional sense — I described what I needed, Claude generated it, I tested it, found where it broke, described the problem, and iterated.

The core idea was simple: the folder structure already encodes metadata. A file sitting in a deeply nested folder path gets renamed based on each level of the path. The system reads the folder path, extracts each level, and builds the new name. The last part is what the owner adds themselves when they upload the file — a unique identifier for their work. Owners upload to the right folder, add their identifier, and the system does the rest.
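In plain JavaScript, that path-to-name mapping is only a few lines. This is a sketch of the idea rather than the production code; the function name, the underscore separator, and the normalisation rule are my own illustrative assumptions:

```javascript
// Sketch of the core idea: each folder level in the path becomes one
// segment of the file name, with the owner's identifier appended last.
// The "_" separator and space normalisation are illustrative assumptions.
function buildFileName(folderPath, ownerIdentifier, extension) {
  // e.g. folderPath = ["Campaigns", "2025-Q4", "Social"]
  const segments = folderPath.map(level =>
    level.trim().replace(/\s+/g, "-") // collapse spaces within a level
  );
  segments.push(ownerIdentifier); // the part the owner supplies on upload
  return segments.join("_") + "." + extension;
}

// buildFileName(["Campaigns", "2025-Q4", "Social"], "smb-launch-hero", "png")
// → "Campaigns_2025-Q4_Social_smb-launch-hero.png"
```

The nice property of this design is that nobody has to remember the convention: moving a file to the right folder is the convention.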

From proof of concept to production tool

For the first version, I spun up a mock folder structure in my own Google Drive and played around, adding photos of my dog to see whether they would be renamed. It seemed to work.

Moving it into production quickly taught me that getting this right would take more iterations — and more back-and-forth with the team — than I'd expected. Each version taught me something, and not always about code.

The original approach was recursive folder scanning, with the script walking up the tree to find changes. In a massive folder structure, this quickly becomes unviable. Apps Script times out after 6 minutes, and with new files being added every hour, this approach wouldn't work.
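A common way to live within that 6-minute limit is a time-budget guard: track elapsed time and stop early, leaving the remainder for the next run. A minimal sketch of that pattern in plain JavaScript, where the 5-minute budget and the `processItem` callback are my assumptions, not the actual script:

```javascript
// Sketch of a time-budget guard: stop processing before Apps Script's
// 6-minute execution limit and defer the rest to the next trigger run.
// The 5-minute budget and processItem callback are assumptions.
function processWithBudget(items, processItem, budgetMs = 5 * 60 * 1000) {
  const start = Date.now();
  const remaining = [];
  for (const item of items) {
    if (Date.now() - start > budgetMs) {
      remaining.push(item); // out of time: leave for the next run
      continue;
    }
    processItem(item);
  }
  return remaining; // items the next scheduled run should pick up
}
```

Even with a guard like this, a full recursive scan of a massive tree only gets you graceful failure, not a solution, which is why the approach had to change.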

I first tried caching, which improved run times. However, it quickly became clear that caching was vulnerable to human error. Say an owner put a file in the wrong folder — it would get renamed based on the wrong path, and even when moved, it would retain the incorrect name.

Getting it right required a completely different approach — the Drive Activity API. I'd initially opted against it because it didn't seem like it would be rigorous enough to catch all the uploads. But instead of scanning folders, it asks Google "what files had activity recently?" and only processes those. That solved the core scaling problem.

It came with its own challenges though. The API would flag all activity, and was limited in how you could control it. The search kept timing out, which I eventually discovered was because it was scanning every file shared with my user — not just the folder my script worked on.

The solution was to severely limit the scope. I set up triggers to check activity every 15 minutes with a 30-minute lookback window, so each run overlapped with the last. Then I moved the script to a dedicated automation account with very strictly defined access. When the script ran from that account, it could only see the files it was designed to work on.
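The overlap is just a timestamp calculation. A sketch in plain JavaScript, using the 15/30-minute values described above (the function name is mine):

```javascript
// Sketch: each run looks back twice as far as the trigger interval,
// so consecutive runs overlap and no upload slips between them.
// Interval and lookback values match the setup described in the text.
function lookbackWindow(now, lookbackMinutes = 30) {
  const start = new Date(now.getTime() - lookbackMinutes * 60 * 1000);
  return { start, end: now };
}

// With a 15-minute trigger and a 30-minute lookback, run N's window
// fully covers the second half of run N-1's window, so a file uploaded
// right at a run boundary is still caught by the following run.
```

Processing the same file twice is harmless here, since renaming an already-correct file is a no-op, so overlapping windows trade a little redundant work for the guarantee that nothing is missed.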

Designing for humans

Human error and edge cases could never be fully eliminated. That's why I set up error logging and email alerts for the team lead — flagging when the renamer ran into problems, when it timed out, or when owners had uploaded to the wrong folder or without the required inputs.

The pragmatic answer I landed on for most edge cases: validate inputs, flag clear errors, and rename what you can. A file with a timestamped placeholder name is better than a file with no name at all.
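That fallback might look something like this; the `UNNAMED` prefix, validation regex, and timestamp format are illustrative assumptions rather than the production behaviour:

```javascript
// Sketch: if the owner's identifier is missing or invalid, fall back to
// a timestamped placeholder instead of leaving the file unnamed.
// The "UNNAMED" prefix, regex, and timestamp format are assumptions.
function nameOrPlaceholder(identifier, now = new Date()) {
  const valid =
    typeof identifier === "string" && /^[\w-]+$/.test(identifier.trim());
  if (valid) return identifier.trim();
  // ISO timestamp with characters Drive-friendly for file names
  const stamp = now.toISOString().replace(/[:.]/g, "-");
  return "UNNAMED_" + stamp; // flagged for follow-up via the error log
}
```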


The result

The script runs every 15 minutes on a scheduled trigger, processes files in 2-5 minutes per run, and logs every rename with old name, new name, and folder path. The team can now find any asset in seconds.

It's "just" an Apps Script. But it solved a real problem for a real team, and it's been running quietly in the background ever since.


Coming back to it

Months later, I came back to this project to write this case study. Barely three months, but a lot has changed.

The original was built through chat conversations with Claude. Long back-and-forth threads where I'd describe a problem, get code back, paste it into the Apps Script editor, test it, find issues, and go again. It worked, but it was slow and I was always context-switching between the conversation and the editor.

Now, not only am I more confident with Claude Code, local files and git, but Claude Code and Opus 4.6 are in a completely different league. I've been able to rebuild this entire project from memory, recreating it in an anonymised demo version with full documentation, a one-click setup, and improvements I wouldn't have known to make the first time around. Code quality checks, security reviews, performance optimisations — all things that were beyond me three months ago.


What I learned

This project was my first foray into building something solo that other people actually use. For experienced engineers, this might not seem like much. But for me, it changed things: going from a proof of concept I wasn't sure I could build to a public GitHub repo.

Claude was doing most of the heavy lifting technically — I won't pretend otherwise. But I'm proud of how I researched, designed the solution, iterated, chose the tools, and just threw myself into it. It was a real "I can do this" moment, but it didn't feel like AI did it all. I didn't feel like my job was being taken, more that I was enabled to solve a genuine problem and apply the best of my skills — the product thinking, the research, the iteration — alongside a capable colleague.

Building with AI isn't magic. It's a lot of iteration, a lot of "that didn't work, let's try this instead," and learning to ask better questions over time. But it means that I — with no technical background — can build real tools that solve real problems. Whatever happens next, I think that's pretty cool.

Full source code and technical documentation on GitHub →

If you want to follow along, leave your email below. No spam, just new posts when they're ready.